Schneier on Security

Entries Tagged “cryptography”

NSA/GCHQ Accused of Hacking Belgian Cryptographer

There has been a lot of news about Belgian cryptographer Jean-Jacques Quisquater having his computer hacked, and whether the NSA or GCHQ is to blame. There have been a lot of assumptions and hyperbole, mostly related to the GCHQ attack against the Belgian telecom operator Belgacom.

I'm skeptical. Not about the attack, but about the NSA's or GCHQ's involvement. I don't think there's a lot of operational value in most academic cryptographic research, and Quisquater wasn't involved in practical cryptanalysis of operational ciphers. I wouldn't put it past a less-clued nation-state to spy on academic cryptographers, but it's likelier this is a more conventional criminal attack. But who knows? Weirder things have happened.

Posted on February 10, 2014 at 6:57 AM

PowerLocker Uses Blowfish

There's a new piece of ransomware out there, PowerLocker (also called PrisonLocker), that uses Blowfish:

PowerLocker could prove an even more potent threat because it would be sold in underground forums as a DIY malware kit to anyone who can afford the $100 for a license, Friday's post warned. CryptoLocker, by contrast, was custom built for use by a single crime gang. What's more, PowerLocker might also offer several advanced features, including the ability to disable the task manager, registry editor, and other administration functions built into the Windows operating system. Screen shots and online discussions also indicate the newer malware may contain protections that prevent it from being reverse engineered when run on virtual machines.

PowerLocker encrypts files using keys based on the Blowfish algorithm. Each key is then encrypted to a file that can only be unlocked by a 2048-bit private RSA key. The Malware Must Die researchers said they had been monitoring the discussions for the past few months. The possibility of a new crypto-based ransomware threat comes as developers continue to make improvements to the older CryptoLocker title. Late last month, for instance, researchers at antivirus provider Trend Micro said newer versions gave the CryptoLocker self-replicating abilities that allowed it to spread through USB thumb drives.
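
What the article describes is standard hybrid encryption: a fresh symmetric key per file, wrapped with an RSA public key so that only the holder of the private key can recover it. Here is a minimal sketch of that pattern (not PowerLocker's actual code), assuming the pycryptodome library; all names are illustrative:

```python
from Crypto.Cipher import Blowfish, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

def encrypt_file(plaintext: bytes, rsa_pub):
    file_key = get_random_bytes(16)               # fresh Blowfish key per file
    iv = get_random_bytes(Blowfish.block_size)    # 8-byte CBC IV
    cipher = Blowfish.new(file_key, Blowfish.MODE_CBC, iv)
    ciphertext = cipher.encrypt(pad(plaintext, Blowfish.block_size))
    wrapped_key = PKCS1_OAEP.new(rsa_pub).encrypt(file_key)  # only the private key undoes this
    return wrapped_key, iv, ciphertext

# The malware would embed only the public half; the 2048-bit private key
# stays with the extortionist.
rsa = RSA.generate(2048)
wrapped, iv, ct = encrypt_file(b"victim file contents", rsa.publickey())
```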

Posted on January 17, 2014 at 2:57 PM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext -- encrypted information -- and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
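
In outline, the LEAF pattern looks like the following toy sketch. This is greatly simplified: AES-GCM stands in here for Clipper's Skipjack cipher, and the real LEAF's device ID and checksum fields are omitted; it assumes the pycryptodome library:

```python
# The session key protecting the call is itself encrypted under an escrow key
# held by law enforcement, and shipped along with the call traffic.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

escrow_key = get_random_bytes(16)     # known to the FBI
session_key = get_random_bytes(16)    # protects this one conversation

nonce = get_random_bytes(12)
leaf, tag = AES.new(escrow_key, AES.MODE_GCM, nonce=nonce).encrypt_and_digest(session_key)

# An eavesdropper holding the escrow key recovers the session key from the
# intercepted LEAF, and can then decrypt the conversation itself.
recovered = AES.new(escrow_key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(leaf, tag)
assert recovered == session_key
```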

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it's pretty much impossible to guarantee that a complex piece of software isn't leaking secret information. We know from Ken Thompson's famous Turing Award lecture, "Reflections on Trusting Trust," that you can never be totally sure if there's a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don't know. Microsoft's _NSAKEY looks like a smoking gun, but honestly, we don't know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker -- from the lowliest script kiddie up to the NSA -- spies on our computers, it's too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn't affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector). (A toy example of this kind of leak appears after this list.)

  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a "mistyped" constant. Or "accidentally" reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don't believe the DUAL_EC_DRBG backdoor is real: they're both too obvious.

  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That's why the recently described potential vulnerability in Intel's random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
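
Here is a hypothetical toy version of the IV-based key leak from the first item above. The mask constant and function names are invented for illustration; everything is standard-library Python:

```python
# A backdoored cipher implementation could emit an IV that looks random but
# secretly carries key bits, XOR-masked with a constant known to the attacker.
import os

MASK = bytes.fromhex("5f8a3bc1d4e67902a1b2c3d4e5f60718")  # attacker's secret constant

def backdoored_iv(key: bytes) -> bytes:
    # "random-looking" IV that actually leaks the first 16 key bytes
    return bytes(k ^ m for k, m in zip(key[:16], MASK))

key = os.urandom(32)
iv = backdoored_iv(key)                # transmitted in the clear
leaked = bytes(b ^ m for b, m in zip(iv, MASK))
assert leaked == key[:16]              # anyone who knows MASK recovers half the key
```

To any observer without the mask, a single such IV is indistinguishable from a random one, which is exactly the low-discoverability, high-deniability property described above. (A real backdoor would vary which bits it leaks across messages, since repeating an identical "random" IV would itself be a giveaway.)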

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.

  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.

  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA's requests.

  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don't really understand security.

  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information -- recall the LEAF -- and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I'm sure there are more; this list isn't meant to be exhaustive, nor the final word on the topic. It's simply a starting place for discussion. But it won't work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It's true we won't know for sure if the code we're seeing is the code that's actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.

  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.

  • There should be no master secrets. These are just too vulnerable.

  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.

  • Encryption protocols should be designed so as not to leak any random information. Nonces should either be considered part of the key or, where possible, be public, predictable counters (a sketch follows below). Again, the goal is to make it harder to subtly leak key bits in this information.
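
A minimal sketch of the predictable-counter approach from the last item, using AES-GCM from the pycryptodome library. The function name and parameters are illustrative; the point is that the nonce is public and boring, so it cannot carry covert key material:

```python
# A per-message counter is a safe GCM nonce as long as it never repeats under
# the same key -- and, unlike a "random" nonce, it has no slack bits in which
# a backdoor could hide key material.
from Crypto.Cipher import AES

def encrypt_message(key: bytes, counter: int, plaintext: bytes):
    nonce = counter.to_bytes(12, "big")   # public, predictable, never reused
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    return nonce, ciphertext, tag
```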

This is a hard problem. We don't have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater "attack surface" for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it's supposed to do and nothing more. Unfortunately, we're terrible at this. Even worse, there's not a lot of practical research in this area -- and it's hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn't just about the NSA, and legal controls won't protect against those who don't follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

Massive MIMO Cryptosystem

New paper: "Physical-Layer Cryptography Through Massive MIMO."

Abstract: We propose the new technique of physical-layer cryptography based on using a massive MIMO channel as a key between the sender and desired receiver, which need not be secret. The goal is for low-complexity encoding and decoding by the desired transmitter-receiver pair, whereas decoding by an eavesdropper is hard in terms of prohibitive complexity. The decoding complexity is analyzed by mapping the massive MIMO system to a lattice. We show that the eavesdropper's decoder for the MIMO system with M-PAM modulation is equivalent to solving standard lattice problems that are conjectured to be of exponential complexity for both classical and quantum computers. Hence, under the widely-held conjecture that standard lattice problems are hard to solve in the worst-case, the proposed encryption scheme has a more robust notion of security than that of the most common encryption methods used today such as RSA and Diffie-Hellman. Additionally, we show that this scheme could be used to securely communicate without a pre-shared secret and little computational overhead. Thus, the massive MIMO system provides for low-complexity encryption commensurate with the most sophisticated forms of application-layer encryption by exploiting the physical layer properties of the radio channel.

MIMO stands for "multiple-input multiple-output." I had to look that up.

In general, I'm not optimistic about the security of these sorts of systems. Whenever non-cryptographers come up with cryptographic algorithms based on some novel problem that's hard in their area of research, invariably there are pretty easy cryptographic attacks.

So consider this a good research exercise for all budding cryptanalysts out there.

Posted on October 15, 2013 at 6:27 AM

Insecurities in the Linux /dev/random

New paper: "Security Analysis of Pseudo-Random Number Generators with Input: /dev/random is not Robust," by Yevgeniy Dodis, David Pointcheval, Sylvain Ruhault, Damien Vergnaud, and Daniel Wichs.

Abstract: A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. A formal security model for PRNGs with input was proposed in 2005 by Barak and Halevi (BH). This model involves an internal state that is refreshed with a (potentially biased) external random source, and a cryptographic function that outputs random numbers from the continually refreshed internal state. In this work we extend the BH model to also include a new security property capturing how it should accumulate the entropy of the input data into the internal state after state compromise. This property states that a good PRNG should be able to eventually recover from compromise even if the entropy is injected into the system at a very slow pace, and expresses the real-life expected behavior of existing PRNG designs. Unfortunately, we show that neither the model nor the specific PRNG construction proposed by Barak and Halevi meet this new property, despite meeting a weaker robustness notion introduced by BH. From a practical side, we also give a precise assessment of the security of the two Linux PRNGs, /dev/random and /dev/urandom. In particular, we show several attacks proving that these PRNGs are not robust according to our definition, and do not accumulate entropy properly. These attacks are due to the vulnerabilities of the entropy estimator and the internal mixing function of the Linux PRNGs. These attacks against the Linux PRNG show that it does not satisfy the "robustness" notion of security, but it remains unclear if these attacks lead to actual exploitable vulnerabilities in practice. Finally, we propose a simple and very efficient PRNG construction that is provably robust in our new and stronger adversarial model. We present benchmarks between this construction and the Linux PRNG that show that this construction is on average more efficient when recovering from a compromised internal state and when generating cryptographic keys. We therefore recommend to use this construction whenever a PRNG with input is used for cryptography.
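
For readers unfamiliar with the model, here is a toy "PRNG with input" in the spirit of what the paper analyzes. This is not the paper's construction or the Linux PRNG; the hash-based mixing and all names are mine, purely for illustration:

```python
# A PRNG with input keeps an internal state that is (1) refreshed by folding
# in raw, possibly biased entropy, and (2) used by an output function that
# also advances the state so past outputs can't be reconstructed.
import hashlib, os

class ToyPRNG:
    def __init__(self):
        self.state = bytes(32)

    def refresh(self, entropy: bytes) -> None:
        self.state = hashlib.sha256(self.state + entropy).digest()

    def next(self, n: int = 32) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()[:n]
        self.state = hashlib.sha256(b"adv" + self.state).digest()  # ratchet forward
        return out

rng = ToyPRNG()
rng.refresh(os.urandom(32))   # in the model, the refresh source may be biased
print(rng.next().hex())
```

The paper's "robustness" question is, roughly: if an attacker learns the internal state, how quickly does entropy trickled in via refresh restore security, and can the attacker keep the generator pinned by feeding it adversarial input?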

Posted on October 14, 2013 at 1:06 PM

Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn't be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It's worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made "internal changes" to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function's capacity in the name of performance. One of Keccak's nice features is that it's highly tunable.
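
For a sense of the trade-off: in a sponge construction, the fixed 1600-bit Keccak permutation is split into a rate r (bits absorbed or squeezed per permutation call) and a capacity c (hidden bits), with generic security of roughly c/2 bits. A rough illustration, using SHA3-256's submitted capacity and a halved capacity as an example of the kind of reduction that was proposed:

```python
# Sponge arithmetic only -- no cryptography here. The reduced-variant figures
# are illustrative of the proposed direction, not an official specification.
PERMUTATION_WIDTH = 1600  # Keccak-f[1600] state size in bits

for name, capacity in [("SHA3-256 as submitted", 512),
                       ("reduced-capacity variant", 256)]:
    rate = PERMUTATION_WIDTH - capacity
    print(f"{name}: capacity={capacity}, rate={rate} bits/block, "
          f"~{capacity // 2}-bit generic security")
```

A larger rate means fewer permutation calls per byte hashed, hence the performance gain; the cost is a smaller generic security margin.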

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn't cryptographic, it's perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won't be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA's actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM
