Schneier on Security

Entries Tagged “cryptography”

Page 21 of 45

World War II Tunny Cryptanalysis Machine Rebuilt at Bletchley Park

Neat:

The rebuild team had only a few photographs, partial circuit diagrams and the fading memories of a few original Tunny operators to go on. Nonetheless a team led by John Pether and John Whetter was able to complete this restoration work.

Pether explained that getting the electronics to work proved to be the most difficult part of the restoration process.

"We've succeeded in rebuilding Tunny with scraps of evidence, and although we are very proud of our work it is rather different from the truly astonishing achievement of Bill Tutte's re-engineering of the Lorenz machine," he said. "Sourcing 200 suitable relays and dealing with the complex wiring schedules was difficult, but we really got in tune with the original team when we had to set up the electronic timing circuits. They were a continuous source of problems then as they are even now for the rebuild team -- except the original team didn't even have the benefit of digital storage oscilloscopes."

The rebuild took place in four stages: the construction of a one-wheel Tunny to ensure that timing circuits and relays worked correctly, followed by progressively more complex five-, seven- and 12-wheel Tunnys. At each stage, the rebuilds were tested. Key components for the Tunny rebuild were salvaged from decommissioned analogue telephone exchanges, donated by BT. The same components were used to complete the earlier Colossus rebuild project.

Now they have a working Tunny to complement their working Colossus and working Bombe.

Posted on June 3, 2011 at 1:49 PM

Nikon Image Authentication System Cracked

Not a lot of details:

ElcomSoft research shows that image metadata and image data are processed independently with a SHA-1 hash function. There are two 160-bit hash values produced, which are later encrypted with a secret (private) key by using an asymmetric RSA-1024 algorithm to create a digital signature. Two 1024-bit (128-byte) signatures are stored in EXIF MakerNote tag 0x0097 (Color Balance).

During validation, Nikon Image Authentication Software calculates two SHA-1 hashes from the same data, and uses the public key to verify the signature by decrypting stored values and comparing the result with newly calculated hash values.

The ultimate vulnerability is that the private (should-be-secret) cryptographic key is handled inappropriately, and can be extracted from the camera. After obtaining the private key, it is possible to generate a digital signature value for any image, thus forging the Image Authentication System.
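To make the broken trust model concrete, here is a minimal Python sketch of the sign-and-verify flow ElcomSoft describes, using the third-party "cryptography" package. The function names and key handling are illustrative, not Nikon's code; the last two lines are the point of the vulnerability: whoever holds the extracted private key can make any image validate as authentic.

    # Illustrative sketch only: SHA-1 hashes signed with an RSA-1024 private key,
    # verified with the matching public key (roughly what ElcomSoft describes).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The camera holds the private key; the verification software ships the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)  # RSA-1024, weak by modern standards
    public_key = private_key.public_key()

    def sign_image(image_data: bytes) -> bytes:
        # Hash the image data with SHA-1 and sign the hash with the private key.
        return private_key.sign(image_data, padding.PKCS1v15(), hashes.SHA1())

    def verify_image(image_data: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, image_data, padding.PKCS1v15(), hashes.SHA1())
            return True
        except InvalidSignature:
            return False

    # Once the private key is extracted from the camera, a forged image signs just fine.
    doctored = b"any image an attacker wants to pass off as genuine"
    assert verify_image(doctored, sign_image(doctored))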

News article.

Canon's system is just as bad, by the way.

Fifteen years ago, I co-authored a paper on the problem. The idea was to use a hash chain to better deal with the possibility of a secret-key compromise.
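The gist of the hash-chain approach can be sketched in a few lines of Python. This is an illustration of the general technique, not the paper's exact construction: each image's authenticator commits to everything recorded before it, so a key extracted later cannot be used to quietly rewrite the earlier history.

    # Illustrative hash chain: each link commits to the new image and to the previous link.
    import hashlib

    def chain_images(images: list[bytes], seed: bytes = b"per-camera seed") -> list[bytes]:
        link = hashlib.sha256(seed).digest()
        links = []
        for image in images:
            link = hashlib.sha256(link + hashlib.sha256(image).digest()).digest()
            links.append(link)
        return links

    # Tampering with or inserting an early image changes every later link, which is
    # detectable as long as any later link was already recorded or published.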

Posted on May 3, 2011 at 7:54 AM

"Schneier's Law"

Back in 1998, I wrote:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break.

In 2004, Cory Doctorow called this Schneier's law:

...what I think of as Schneier's Law: "any person can invent a security system so clever that she or he can't think of how to break it."

The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes:

Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.

The idea is even older. Back in 1864, Charles Babbage wrote:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

My phrasing is different, though. Here's my original quote in context:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.

And here's me in 2006:

Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something.

And that's the point I want to make. It's not that people believe they can create an unbreakable cipher; it's that people create a cipher that they themselves can't break, and then use that as evidence they've created an unbreakable cipher.

EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments."

Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

EDITED TO ADD (4/18): If I have any contribution to this, it's to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system's security.

Posted on April 15, 2011 at 1:45 PM

How Peer Review Doesn't Work

In this amusing story of a terrorist plotter using pencil-and-paper cryptography instead of actually secure cryptography, there's this great paragraph:

Despite urging by the Yemen-based al Qaida leader Anwar Al Anlaki, Karim also rejected the use of a sophisticated code program called "Mujhaddin Secrets", which implements all the AES candidate cyphers, "because 'kaffirs', or non-believers, know about it so it must be less secure".

Posted on March 30, 2011 at 7:14 AM

Identifying Tor Users Through Insecure Applications

Interesting research: "One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users":

Abstract: Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability allows in practice to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from "secure" browsers. In particular, we traced 9% of Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks tracing BitTorrent users on Tor. We run these attacks in the wild for 23 days and reveal 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited per country of origin of Tor users. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we then explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.

Posted on March 25, 2011 at 6:38 AM

Detecting Words and Phrases in Encrypted VoIP Calls

Interesting:

Abstract: Although Voice over IP (VoIP) is rapidly being adopted, its security implications are not yet fully understood. Since VoIP calls may traverse untrusted networks, packets should be encrypted to ensure confidentiality. However, we show that it is possible to identify the phrases spoken within encrypted VoIP calls when the audio is encoded using variable bit rate codecs. To do so, we train a hidden Markov model using only knowledge of the phonetic pronunciations of words, such as those provided by a dictionary, and search packet sequences for instances of specified phrases. Our approach does not require examples of the speaker's voice, or even example recordings of the words that make up the target phrase. We evaluate our techniques on a standard speech recognition corpus containing over 2,000 phonetically rich phrases spoken by 630 distinct speakers from across the continental United States. Our results indicate that we can identify phrases within encrypted calls with an average accuracy of 50%, and with accuracy greater than 90% for some phrases. Clearly, such an attack calls into question the efficacy of current VoIP encryption standards. In addition, we examine the impact of various features of the underlying audio on our performance and discuss methods for mitigation.
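The attack works because length-preserving encryption of a variable-bit-rate codec leaves the sequence of packet sizes in the clear, and those sizes track the sounds being encoded. The paper does this properly with a hidden Markov model built from phonetic pronunciations; here is a much-simplified toy in Python, with invented size profiles, just to show the underlying leak.

    # Toy illustration (not the paper's HMM): match a sequence of encrypted-packet
    # lengths against per-phrase size profiles. All numbers below are made up.

    def log_likelihood(observed_sizes, profile, sigma=4.0):
        # Score observed packet sizes (bytes) against expected sizes, assuming
        # independent Gaussian noise on each packet.
        if len(observed_sizes) != len(profile):
            return float("-inf")
        return sum(-((o - e) ** 2) / (2 * sigma ** 2) for o, e in zip(observed_sizes, profile))

    phrase_profiles = {
        "attack at dawn": [52, 61, 38, 57, 44, 49],
        "the weather is nice": [41, 45, 60, 36, 55, 40],
    }

    captured_sizes = [53, 60, 39, 58, 43, 50]  # lengths of encrypted VoIP packets
    best_guess = max(phrase_profiles, key=lambda p: log_likelihood(captured_sizes, phrase_profiles[p]))
    print(best_guess)  # -> attack at dawn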

EDITED TO ADD (4/13): Full paper. I wrote about this in 2008.

Posted on March 24, 2011 at 12:46 PM

Bioencryption

A group of students at the Chinese University of Hong Kong have figured out how to store data in bacteria. The article talks about how secure it is, and the students even coined the term "bioencryption," but I don't see any encryption. It's just storage.

Another article:

They have also developed a three-tier security fence to encode the data, which may come as welcome news to U.S. diplomats, who have seen their thoughts splashed over the Internet thanks to WikiLeaks.

"Bacteria can't be hacked," points out Allen Yu, another student instructor.

"All kinds of computers are vulnerable to electrical failures or data theft. But bacteria are immune from cyber attacks. You can safeguard the information."

The team have even coined a word for this field -- biocryptography -- and the encoding mechanism contains built-in checks to ensure that mutations in some bacterial cells do not corrupt the data as a whole.

Why can't bacteria be hacked? If the storage system is attached to a network, it's just as vulnerable as anything else attached to a network. And if it's disconnected from any network, then it's just as secure as anything else disconnected from a network. The problem the U.S. diplomats had was authorized access to the WikiLeaks cables by someone who decided to leak them. No cryptography helps against that.

There is cryptography in the project:

In addition we have created an encryption module with the R64 Shufflon-Specific Recombinase to further secure the information.

If the group is smart, this will be some conventional cryptography algorithm used to encrypt the data before it is stored on the bacteria.
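As a sketch of what that could look like (this assumes nothing about the team's actual design): encrypt with a conventional cipher first, here AES-GCM via the Python "cryptography" package, then map the ciphertext onto DNA bases for synthesis and storage. All of the security comes from the conventional cipher and its key, not from the bacteria.

    # Illustrative only: conventional encryption, then a 2-bits-per-base DNA encoding.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    BASES = "ACGT"  # two bits of ciphertext per nucleotide

    def encrypt_to_dna(plaintext: bytes, key: bytes) -> str:
        nonce = os.urandom(12)
        blob = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
        return "".join(BASES[(byte >> shift) & 0b11]
                       for byte in blob for shift in (6, 4, 2, 0))

    def decrypt_from_dna(dna: str, key: bytes) -> bytes:
        blob = bytes((BASES.index(dna[i]) << 6) | (BASES.index(dna[i + 1]) << 4) |
                     (BASES.index(dna[i + 2]) << 2) | BASES.index(dna[i + 3])
                     for i in range(0, len(dna), 4))
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    key = AESGCM.generate_key(bit_length=128)
    dna = encrypt_to_dna(b"nothing interesting in here", key)
    assert decrypt_from_dna(dna, key) == b"nothing interesting in here"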

In any case, this is fascinating and interesting work. I just don't see any new form of encryption, or anything inherently unhackable.

Posted on January 25, 2011 at 1:40 PM
