

Schneier on Security


Entries Tagged “cryptography”


Protecting E-Mail from Eavesdropping

In the wake of the Snowden NSA documents, reporters have been asking me whether encryption can solve the problem. Leaving aside the fact that much of what the NSA is collecting can't be encrypted by the user -- telephone metadata, e-mail headers, phone calling records, e-mail you're reading from a phone or tablet or cloud provider, anything you post on Facebook -- it's hard to give good advice.

In theory, an e-mail program will protect you, but the reality is much more complicated.

  • The program has to be vulnerability-free. If there is some back door in the program that bypasses, or weakens, the encryption, it's not secure. It's very difficult, almost impossible, to verify that a program is vulnerability-free.

  • The user has to choose a secure password. Luckily, there's advice on how to do this.

  • The password has to be managed securely. The user can't store it in a file somewhere. If he's worried about security after the FBI has arrested him and searched his house, he shouldn't write it on a piece of paper, either.

  • Actually, he should understand the threat model he's operating under. Is it the NSA trying to eavesdrop on everything, or an FBI investigation that specifically targets him -- or a targeted attack, like dropping a Trojan on his computer, that bypasses e-mail encryption entirely?

This is simply too much for the poor reporter, who wants an easy-to-transcribe answer.

We've known how to send cryptographically secure e-mail since the early 1990s. Twenty years later, we're still working on the security engineering of e-mail programs. And if the NSA is eavesdropping on encrypted e-mail, and if the FBI is decrypting messages from suspects' hard drives, they're both breaking the engineering, not the underlying cryptographic algorithms.

On the other hand, the two adversaries can be very different. The NSA has to process a ginormous amount of traffic. It's the "drinking from a fire hose" problem; they cannot afford to devote a lot of time to decrypting everything, because they simply don't have the computing resources. There's just too much data to collect. In these situations, even a modest level of encryption is enough -- until you are specifically targeted. This is why the NSA saves all encrypted data it encounters; it might want to devote cryptanalysis resources to it at some later time.

Posted on July 8, 2013 at 6:43 AM

Is Cryptography Engineering or Science?

Responding to a tweet by Thomas Ptacek saying, "If you're not learning crypto by coding attacks, you might not actually be learning crypto," Colin Percival published a well-thought-out rebuttal, saying in part:

If we were still in the 1990s, I would agree with Thomas. 1990s cryptography was full of holes, and the best you could hope for was to know how your tools were broken so you could try to work around their deficiencies. This was a time when DES and RC4 were widely used, despite having well-known flaws. This was a time when people avoided using CTR mode to convert block ciphers into stream ciphers, due to concern that a weak block cipher could break if fed input blocks which shared many (zero) bytes in common. This was a time when people cared about the "error propagation" properties of block ciphers -- that is, how much of the output would be mangled if a small number of bits in the ciphertext are flipped. This was a time when people routinely advised compressing data before encrypting it, because that "compacted" the entropy in the message, and thus made it "more difficult for an attacker to identify when he found the right key". It should come as no surprise that SSL, designed during this era, has had a long list of design flaws.

Cryptography in the 2010s is different. Now we start with basic components which are believed to be highly secure -- e.g., block ciphers which are believed to be indistinguishable from random permutations -- and which have been mathematically proven to be secure against certain types of attacks -- e.g., AES is known to be immune to differential cryptanalysis. From those components, we then build higher-order systems using mechanisms which have been proven to not introduce vulnerabilities. For example, if you generate an ordered sequence of packets by encrypting data using an indistinguishable-from-random-permutation block cipher (e.g., AES) in CTR mode using a packet sequence number as the CTR nonce, and then append a weakly-unforgeable MAC (e.g., HMAC-SHA256) of the encrypted data and the packet sequence number, the packets both preserve privacy and do not permit any undetected tampering (including replays and reordering of packets). Life will become even better once Keccak (aka. SHA-3) becomes more widely reviewed and trusted, as its "sponge" construction can be used to construct -- with provable security -- a very wide range of important cryptographic components.

He recommends a more modern approach to cryptography: "studying the theory and designing systems which you can prove are secure."
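As a concrete illustration of the packet construction Percival describes, here is a minimal encrypt-then-MAC sketch in Python, using the third-party cryptography package for AES-CTR. The function and key names are my own, and the packet framing is an assumption for illustration, not code from his post:

```python
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal_packet(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    # The packet sequence number fills the high 64 bits of the CTR
    # counter block; the low 64 bits count blocks within the packet,
    # so no counter value is ever reused under the same key.
    nonce = seq.to_bytes(8, "big") + bytes(8)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    # Encrypt-then-MAC: the tag binds the sequence number to the
    # ciphertext, so tampering, replays, and reordering are detectable.
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return ciphertext + tag
```

Note the two independent keys: the encryption key and the MAC key must be separate for the standard security arguments to apply.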

I think both statements are true -- and not contradictory at all. The apparent disagreement stems from differing definitions of cryptography.

Many years ago, on the Cryptographer's Panel at an RSA conference, then-chief scientist for RSA Bert Kaliski talked about the rise of something he called the "crypto engineer." His point was that the practice of cryptography was changing. There was the traditional mathematical cryptography -- designing and analyzing algorithms and protocols, and building up cryptographic theory -- but there was also a more practice-oriented cryptography: taking existing cryptographic building blocks and creating secure systems out of them. It's this latter group he called crypto engineers. It's the group of people for whom I wrote Applied Cryptography and, most recently, co-wrote Cryptography Engineering. Colin knows this, directing his advice to "developers" -- Kaliski's crypto engineers.

Traditional cryptography is a science -- applied mathematics -- and applied cryptography is engineering. I prefer the term "security engineering," because it necessarily encompasses a lot more than cryptography -- see Ross Anderson's great book of that name. And mistakes in engineering are where a lot of real-world cryptographic systems break.

Provable security has its limitations. Cryptographer Lars Knudsen once said: "If it's provably secure, it probably isn't." Yes, we have provably secure cryptography, but those proofs take very specific forms against very specific attacks. They reduce the number of security assumptions we have to make about a system, but we still have to make a lot of security assumptions.

And cryptography has its limitations in general, despite the apparent strengths. Cryptography's great strength is that it gives the defender a natural advantage: adding a single bit to a cryptographic key increases the work to encrypt by only a small amount, but doubles the work required to break the encryption. This is how we design algorithms that -- in theory -- can't be broken until the universe collapses back on itself.

Despite this, cryptographic systems are broken all the time: well before the heat death of the universe. They're broken because of software mistakes in coding the algorithms. They're broken because the computer's memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They're broken because of buffer overflows and other security flaws. They're broken by side-channel attacks. They're broken because of bad user interfaces, or insecure user practices.

Lots of people have said: "In theory, theory and practice are the same. But in practice, they are not." It's true about cryptography. If you want to be a cryptographer, study mathematics. Study the mathematics of cryptography, and especially cryptanalysis. There's a lot of art to the science, and you won't be able to design good algorithms and protocols until you gain experience in breaking existing ones. If you want to be a security engineer, study implementations and coding. Take the tools cryptographers create, and learn how to use them well.

The world needs security engineers even more than it needs cryptographers. We're great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.

After writing this, I found a conversation between the two where they both basically agreed with me.

Posted on July 5, 2013 at 7:04 AM

SIMON and SPECK: New NSA Encryption Algorithms

The NSA has published some new symmetric algorithms:

Abstract: In this paper we propose two families of block ciphers, SIMON and SPECK, each of which comes in a variety of widths and key sizes. While many lightweight block ciphers exist, most were designed to perform well on a single platform and were not meant to provide high performance across a range of devices. The aim of SIMON and SPECK is to fill the need for secure, flexible, and analyzable lightweight block ciphers. Each offers excellent performance on hardware and software platforms, is flexible enough to admit a variety of implementations on a given platform, and is amenable to analysis using existing techniques. Both perform exceptionally well across the full spectrum of lightweight applications, but SIMON is tuned for optimal performance in hardware, and SPECK for optimal performance in software.

It's always fascinating to study NSA-designed ciphers. I was particularly interested in the algorithms' similarity to Threefish, and how they improved on what we did. I was most impressed with their key schedule. I am always impressed with how the NSA does key schedules. And I enjoyed the discussion of requirements. Missing, of course, is any cryptanalysis.
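For a sense of how spare these designs are, here is a sketch of Speck128/128 based on the paper's description: a bare ARX (add-rotate-XOR) round, with the key schedule reusing the same round function. Word-order and endianness conventions are glossed over here, so treat this as illustrative rather than a verified implementation -- check it against the paper's test vectors before relying on it:

```python
MASK64 = (1 << 64) - 1  # Speck128/128 operates on 64-bit words

def ror(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK64

def round_fn(x, y, k):
    # One ARX round: rotate, add mod 2^64, XOR in the round key, mix y.
    x = ((ror(x, 8) + y) & MASK64) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck128_128_encrypt(x, y, key_l, key_k):
    # The key schedule reuses the round function, with the round index
    # standing in for the round key -- one reason it is so compact.
    l, k = key_l, key_k
    for i in range(32):  # Speck128/128 specifies 32 rounds
        x, y = round_fn(x, y, k)
        l, k = round_fn(l, k, i)
    return x, y
```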

I don't know anything about the context of this paper. Why was the work done, and why is it being made public? I'm curious.

Posted on July 1, 2013 at 6:24 AM

The Problems with CALEA-II

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it's really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won't do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches, but today more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google's servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there's no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI's proposal. Companies that refuse to comply would be fined $25,000 a day.

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else's eavesdropping. That's just not possible. It's impossible to build a communications system that allows the FBI surreptitious access but doesn't allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we've been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn't want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan "Don't buy the U.S. stuff -- it's lousy."

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google's system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems -- not very difficult -- or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don't understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure -- and less secure -- product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won't buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI's short-sighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is "going dark," and that it's simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there are more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI's surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype's security.) That's just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don't hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn't good enough.

Finally there's a general principle at work that's worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

Posted on June 4, 2013 at 12:44 PM

Nice Security Mindset Example

A real-world one-way function:

Alice and Bob procure the same edition of the white pages book for a particular town, say Cambridge. For each letter Alice wants to encrypt, she finds a person in the book whose last name starts with this letter and uses his/her phone number as the encryption of that letter.

To decrypt the message Bob has to read through the whole book to find all the numbers.
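In code, the asymmetry looks something like this (a toy sketch with a made-up directory):

```python
import random

# A made-up miniature "phone book": last name -> phone number.
BOOK = {
    "Abbott": "617-555-0134", "Alvarez": "617-555-0187",
    "Baker": "617-555-0102", "Barnes": "617-555-0155",
    "Chen": "617-555-0129",
}

def encrypt_letter(letter):
    # Alice's direction is cheap: pick any entry filed under that letter.
    choices = [num for name, num in BOOK.items() if name.startswith(letter)]
    return random.choice(choices)

def decrypt_number(number):
    # Bob's direction is a full scan of the book -- the "one-way" part.
    for name, num in BOOK.items():
        if num == number:
            return name[0]

print(decrypt_number(encrypt_letter("B")))  # -> B
```

The attack described below amounts to adding a reverse-lookup oracle -- call the number and ask for the name -- which destroys the one-wayness.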

And a way to break it:

I still use this example, with the assumption that there is no reverse look-up. I recently taught it to my AMSA students. And one of my 8th graders said, "If I were Bob, I would just call all the phone numbers and ask their last names."

In the fifteen years I've been using this example, this idea never occurred to me. I am very shy, so it would never enter my mind to call a stranger and ask for their last name. My student made me realize that my own personality affected my mathematical inventiveness.

I've written about the security mindset in the past, and this is a great example of it.

Posted on April 9, 2013 at 1:49 PM


