
Schneier on Security

Entries Tagged “cryptography”

Adi Shamir's Cube Attacks

At this moment, Adi Shamir is giving an invited talk at the Crypto 2008 conference about a new type of cryptanalytic attack called "cube attacks." He claims very broad applicability to stream and block ciphers.

My personal joke -- at least I hope it's a joke -- is that he's going to break every NIST hash submission without ever seeing any of them. (Note: The attack, at least at this point, doesn't apply to hash functions.)

More later.

EDITED TO ADD (8/19): AES is immune to this attack -- the degree of the algebraic polynomial is too high -- and all the block ciphers we use have a higher degree. But, in general, anything that can be described with a low-degree polynomial equation is vulnerable: that's pretty much every LFSR scheme.
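To make the degree point concrete, here is a toy sketch of my own (nothing from Adi's talk, and not the attack itself): in a bare LFSR, every keystream bit is just an XOR of initial-state bits -- a degree-1 polynomial over GF(2) -- and even filtered or combined LFSR constructions often keep the overall degree low enough to worry about.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy 16-bit Fibonacci LFSR (taps 16, 14, 13, 11 -- the maximal-length
       polynomial x^16 + x^14 + x^13 + x^11 + 1).  Each keystream bit is an
       XOR of initial-state bits, i.e. a degree-1 function over GF(2);
       low algebraic degree is exactly what cube-style attacks feed on. */
    static unsigned lfsr_step(uint16_t *state)
    {
        uint16_t s = *state;
        uint16_t fb = (uint16_t)(((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u);
        *state = (uint16_t)((s >> 1) | (fb << 15));
        return s & 1u;                  /* emit the bit being shifted out */
    }

    int main(void)
    {
        uint16_t state = 0xACE1u;       /* arbitrary nonzero seed */
        printf("first 32 keystream bits: ");
        for (int i = 0; i < 32; i++)
            printf("%u", lfsr_step(&state));
        printf("\n");
        return 0;
    }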

EDITED TO ADD (8/19): The typo that amused you all below has been fixed. And this attack doesn't apply to any block cipher -- DES, AES, Blowfish, Twofish, anything else -- in common use; their degree is much too high. It doesn't apply to hash functions at all, at least not yet -- but again, the degree of all the common ones is much too high. I will post a link to the paper when it becomes available; I assume Adi will post it soon. (The paper was rejected from Asiacrypt, demonstrating yet again that the conference review process is broken.)

EDITED TO ADD (8/19): Adi's coauthor is Itai Dinur. Their plan is to submit the paper to Eurocrypt 2009. They will publish it as soon as they can, depending on the Eurocrypt rules about prepublication.

EDITED TO ADD (8/26): Two news articles with not a lot of information.

EDITED TO ADD (9/4): Some more details.

EDITED TO ADD (9/14): The paper is online.

Posted on August 19, 2008 at 1:15 PM

Hacking Mifare Transport Cards

London's Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won't be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here's the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the "Mifare Classic" chip, is used in hundreds of other transport systems as well — Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro — and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it's kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack "irresponsible," warned that it will cause "immense damages," and claimed that it "will jeopardize the security of assets protected with systems incorporating the Mifare IC." The Dutch court would have none of it: "Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings."

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security — in ID cards, in voting machines, in airport security — it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have built Mifare's security around an open and public design.

Secrecy is fragile. Mifare's security was based on the belief that no one would discover how it worked; that's why NXP had to muzzle the Dutch researchers. But that's just wrong. Reverse-engineering isn't hard. Other researchers had already exposed Mifare's lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it's good for security overall. Companies will only design security as good as their customers know to ask for. NXP's security was so bad because customers didn't know how to evaluate security: either they didn't know what questions to ask, or they didn't know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It's unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system's security to them.

And while these attacks only pertain to the Mifare Classic chip, they make me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the "more secure" versions will be sufficiently so.

This essay originally appeared in the Guardian.

Posted on August 7, 2008 at 6:07 AM

TrueCrypt's Deniable File System

Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.

ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.

The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.

There are several threat models against which a DFS could potentially be secure:

  • One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.
  • Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.
  • Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.

Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we've uncovered. In the paper, we said:

We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlight the challenges to creating deniable file systems. Given these potential challenges, we encourage the users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.

So we cannot break the deniability feature in TrueCrypt 6.0. But, honestly, I wouldn't trust it.

There have been two news articles (and a Slashdot thread) about the paper.

One talks about a generalization to encrypted partitions. If you don't encrypt the entire drive, there is the possibility -- and it seems very probable -- that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.
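If you want to check for this kind of leakage yourself, the test is conceptually simple: scan the unencrypted artifacts on the machine -- recent-documents lists, desktop-search indexes, exported registry data -- for a path or file name that should only ever exist inside the hidden volume. Here is a minimal sketch of such a check; the file names and search string are hypothetical, and this is an illustration of the idea, not the tooling from our paper.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical leak check: scan an arbitrary (unencrypted) file for a
       string -- e.g. a path that should only exist inside a hidden volume.
       An illustration of the kind of trace to look for, not the paper's
       actual tooling. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <file-to-scan> <telltale-string>\n", argv[0]);
            return 2;
        }
        const char *needle = argv[2];
        size_t nlen = strlen(needle);

        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 2;
        }

        /* Scan in overlapping chunks so a match spanning a chunk boundary
           is not missed. */
        static char buf[65536];
        size_t carry = 0;
        long long base = 0;             /* file offset of buf[0] */
        size_t n;

        while ((n = fread(buf + carry, 1, sizeof buf - carry, f)) > 0) {
            size_t total = carry + n;
            for (size_t i = 0; nlen > 0 && i + nlen <= total; i++) {
                if (memcmp(buf + i, needle, nlen) == 0) {
                    printf("leak: \"%s\" found at offset %lld\n",
                           needle, base + (long long)i);
                    fclose(f);
                    return 1;
                }
            }
            carry = (nlen > 1) ? nlen - 1 : 0;
            if (carry > total)
                carry = total;
            memmove(buf, buf + total - carry, carry);
            base += (long long)(total - carry);
        }

        fclose(f);
        printf("no occurrence of \"%s\" found\n", needle);
        return 0;
    }

Any hit means the surrounding environment has already undermined the deniability, no matter how good the cryptography inside the volume is.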

Our paper will be presented at the 3rd USENIX Workshop on Hot Topics in Security (HotSec '08). I've written about deniability before.

Posted on July 18, 2008 at 6:56 AM

Eavesdropping on Encrypted Compressed Voice

Traffic analysis works even through the encryption:

The new compression technique, called variable bitrate compression, produces different-size packets of data for different sounds.

That happens because the sampling rate is kept high for long complex sounds like "ow", but cut down for simple consonants like "c". This variable method saves on bandwidth, while maintaining sound quality.

VoIP streams are encrypted to prevent eavesdropping. However, a team from Johns Hopkins University in Baltimore, Maryland, US, has shown that simply measuring the size of packets without decoding them can identify whole words and phrases with a high rate of accuracy.

The technique isn't good enough to decode entire conversations, but it's pretty impressive.
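The flavor of the attack is easy to sketch, though. With a variable-bitrate codec, each phrase leaves a characteristic pattern of packet sizes on the wire, and those sizes are visible to an eavesdropper even though the payloads are encrypted. Here's a toy nearest-match version with made-up numbers -- the real work uses much more sophisticated statistical models trained on real packet traces:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy version of the idea: with a variable-bitrate codec, each phrase
       produces a characteristic sequence of encrypted-packet sizes.  Match
       an observed size sequence against candidate "fingerprints" by summed
       absolute difference.  All numbers and phrases here are made up. */

    #define SEQ_LEN 8

    struct fingerprint {
        const char *phrase;
        int sizes[SEQ_LEN];             /* packet sizes in bytes (hypothetical) */
    };

    static int distance(const int *a, const int *b)
    {
        int d = 0;
        for (int i = 0; i < SEQ_LEN; i++)
            d += abs(a[i] - b[i]);
        return d;
    }

    int main(void)
    {
        const struct fingerprint known[] = {
            { "attack at dawn",  { 52, 61, 47, 55, 70, 44, 58, 66 } },
            { "meet me tonight", { 48, 50, 72, 63, 41, 59, 65, 53 } },
            { "the code is red", { 67, 45, 54, 60, 49, 71, 46, 62 } },
        };
        /* Packet sizes observed on the wire -- still encrypted, but the
           lengths themselves are visible to an eavesdropper. */
        const int observed[SEQ_LEN] = { 51, 62, 48, 54, 71, 45, 57, 67 };

        size_t best = 0;
        int best_d = distance(observed, known[0].sizes);
        for (size_t i = 1; i < sizeof known / sizeof known[0]; i++) {
            int d = distance(observed, known[i].sizes);
            if (d < best_d) {
                best_d = d;
                best = i;
            }
        }
        printf("best match: \"%s\" (distance %d)\n", known[best].phrase, best_d);
        return 0;
    }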

Posted on June 19, 2008 at 6:27 AM

Ransomware

I've never figured out the fuss over ransomware:

Some day soon, you may go in and turn on your Windows PC and find your most valuable files locked up tighter than Fort Knox.

You'll also see this message appear on your screen:

"Your files are encrypted with RSA-1024 algorithm. To recovery your files you need to buy our decryptor. To buy decrypting tool contact us at: ********@yahoo.com"

How is this any worse than the old hacker viruses that put a funny message on your screen and erased your hard drive?

Here's how I see it: if someone actually manages to pull this off and put it into circulation, we're looking at malware Armageddon. Instead of losing 'just' your credit card numbers or having your PC turned into a spam factory, you could lose vital files forever.

Of course, you could keep current back-ups. I do, but I've been around this track way too many times to think that many companies, much less individual users, actually keep real back-ups. Oh, you may think you do, but when was the last time you checked to see if the data you saved could actually be restored?

The single most important thing any company or individual can do to improve security is have a good backup strategy. It's been true for decades, and it's still true today.
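A trivial way to start taking the restore question seriously -- this is just a sketch with hypothetical file names, not a backup tool -- is to actually pull a file back out of the backup and compare it byte for byte against the original:

    #include <stdio.h>

    /* Minimal restore check: compare a restored copy against the original,
       byte for byte.  A stand-in for the broader point that a backup you
       have never test-restored is not really a backup. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <original> <restored-copy>\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) {
            perror("fopen");
            return 2;
        }

        long long offset = 0;
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) {
                printf("mismatch at offset %lld\n", offset);
                return 1;
            }
            offset++;
        } while (ca != EOF);

        printf("restored copy matches the original\n");
        return 0;
    }

Real verification means something like this, with checksums, across the whole backup set on a schedule; an untested restore is just a guess.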

Posted on June 16, 2008 at 1:09 PM

Kaspersky Labs Trying to Crack 1024-bit RSA

I can't figure this story out. Kaspersky Lab is launching an international distributed effort to crack a 1024-bit RSA key used by the Gpcode Virus. From their website:

We estimate it would take around 15 million modern computers, running for about a year, to crack such a key.

What are they smoking at Kaspersky? We've never factored a 1024-bit number -- at least, not outside any secret government agency -- and it's likely to require a lot more than 15 million computer years of work. The current factoring record is a 1023-bit number, but it was a special number that's easier to factor than a product-of-two-primes number used in RSA. Breaking that Gpcode key will take a lot more mathematical prowess than you can reasonably expect to find by asking nicely on the Internet. You've got to understand the current best mathematical and computational optimizations of the Number Field Sieve, and cleverly distribute the parts that can be distributed. You can't just post the products and hope for the best.
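For a sense of scale, here's a back-of-the-envelope sketch using the heuristic number field sieve running time, exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)), ignoring the o(1) term and all constant factors -- so treat the output as an order-of-magnitude gesture, nothing more:

    #include <stdio.h>
    #include <math.h>

    /* Back-of-the-envelope only: heuristic GNFS work is
       L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
       up to an o(1) term and constant factors that this sketch ignores.
       Compile with -lm. */
    static double gnfs_log_work(double bits)
    {
        double ln_n = bits * log(2.0);                   /* ln of an n-bit number */
        return cbrt(64.0 / 9.0) * cbrt(ln_n) * pow(log(ln_n), 2.0 / 3.0);
    }

    int main(void)
    {
        double rsa1024 = gnfs_log_work(1024.0);
        double rsa200  = gnfs_log_work(663.0);           /* RSA-200, the general record */
        printf("1024-bit vs 663-bit GNFS work ratio: roughly %.0fx\n",
               exp(rsa1024 - rsa200));
        return 0;
    }

Even by that crude measure, a random 1024-bit modulus is tens of thousands of times more work than the 663-bit RSA-200 factorization, and RSA-200 was itself a massive, carefully engineered academic effort.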

Is this just a way for Kaspersky to generate itself some nice press, or are they confused in Moscow?

EDITED TO ADD (6/15): Kaspersky now says:

The company clarified, however, that it's more interested in getting help in finding flaws in the encryption implementation.

"We are not trying to crack the key," Roel Schouwenberg, senior antivirus researcher with Kaspersky Lab, told SecurityFocus. "We want to see collectively whether there are implementation errors, so we can do what we did with previous versions and find a mistake to help us find the key."

Schouwenberg agrees that, if no implementation flaw is found, searching for the decryption key using brute-force computing power is unlikely to work.

"Clarified" is overly kind. There was nothing confusing about Kaspersky's post that needed clarification, and what they're saying now completely contradicts what they did post. Seems to me like they're trying to pretend it never happened.

EDITED TO ADD (6/30): A Kaspersky virus analyst comments on this entry.

Posted on June 12, 2008 at 12:30 PM

Random Number Bug in Debian Linux

This is a big deal:

On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following lines of code from md_rand.c:

	MD_Update(&m,buf,j);
	[ .. ]
	MD_Update(&m,buf,j); /* purify complains */

These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only "random" value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.
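To see why a process-ID-only seed is catastrophic, here's a toy sketch -- a stand-in generator, not OpenSSL's actual PRNG -- showing that an attacker can simply enumerate all 32,768 possible seeds and recover any key the generator could have produced:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy illustration of the flaw, not OpenSSL's actual PRNG: if the only
       "entropy" mixed into the generator is the process ID, there are at
       most 32,768 possible seeds on a default Linux system, so an attacker
       can enumerate every key the generator could ever have produced. */

    #define PID_MAX 32768

    /* Stand-in generator: derive a "key" deterministically from the seed.
       Any deterministic function of the PID has the same weakness. */
    static unsigned long toy_key_from_seed(unsigned int pid)
    {
        unsigned long key = 0;
        srand(pid);
        for (int i = 0; i < 4; i++)
            key = (key << 15) ^ (unsigned long)rand();
        return key;
    }

    int main(void)
    {
        /* The "victim" generates a key with its PID as the only entropy. */
        const unsigned int victim_pid = 12345;           /* unknown to the attacker */
        const unsigned long victim_key = toy_key_from_seed(victim_pid);

        /* The attacker just tries every possible PID. */
        for (unsigned int pid = 1; pid < PID_MAX; pid++) {
            if (toy_key_from_seed(pid) == victim_key) {
                printf("recovered: pid=%u key=%lx\n", pid, victim_key);
                return 0;
            }
        }
        printf("seed not found (should not happen)\n");
        return 1;
    }

That's essentially how the real-world exploits worked: precompute every key the broken generator could ever emit, then match against whatever you find in the wild.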

More info, from Debian, here. And from the hacker community here. Seems that the bug was introduced in September 2006.

More analysis here. And a cartoon.

Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we've seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.

Posted on May 19, 2008 at 6:07 AM

