Schneier on Security


Bruce Schneier


A weblog covering security and security technology.

Friday Squid Blogging: More Squid Comics

What is it about squid that makes them such good comic material?

Posted on February 09, 2007 at 04:04 PM • 2 Comments


Friday Squid Blogging: Disneyland's "20,000 Leagues Under the Sea" Giant Squid

From 1955. Photo. Context.

Posted on February 09, 2007 at 03:27 PM • 1 Comment


Schneier on Video: Security Theater Against Movie Plot Threats

On June 10, 2006, I gave a talk at the ACLU New Jersey Membership Conference: "Counterterrorism in America: Security Theater Against Movie-Plot Threats." Here's the video.

Posted on February 09, 2007 at 01:07 PM • 5 Comments


Survey Paper on the Economics of Information Security

Ross Anderson and Tyler Moore just published their survey paper on the economics of information security. (Here are the slides from the conference talk, and a shorter version from Science.)

Excellent reading.

Posted on February 09, 2007 at 07:47 AM • 2 Comments


Pipe Bombs Found, Disposed Of

Three pipe bombs were found in the town of Pearblossom, California and -- it seems -- disposed of without causing hysteria.

Boston, are you paying attention?

Posted on February 08, 2007 at 03:43 PM • 21 Comments


"Stop and Frisks" in New York

Interesting data from New York. The number of people stopped and searched has gone up fivefold since 2002, but the number of arrests due to these stops has only doubled. (The number of "summonses" has also gone up fivefold.)

Good data for the "Is it worth it?" question.

Posted on February 08, 2007 at 01:10 PM • 24 Comments


A New Secure Hash Standard

The U.S. National Institute of Standards and Technology is having a competition for a new cryptographic hash function.

This matters. The phrase "one-way hash function" might sound arcane and geeky, but hash functions are the workhorses of modern cryptography. They provide web security in SSL. They help with key management in e-mail and voice encryption: PGP, Skype, all the others. They help make it harder to guess passwords. They're used in virtual private networks, help provide DNS security and ensure that your automatic software updates are legitimate. They provide all sorts of security functions in your operating system. Every time you do something with security on the internet, a hash function is involved somewhere.
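One of the uses mentioned above, making passwords harder to guess, can be sketched in a few lines. This is a hypothetical illustration, not code from the essay: it stores a salted, deliberately slowed-down hash of the password (PBKDF2 here, chosen for illustration) rather than the password itself, so a stolen database doesn't directly reveal anyone's password.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for a password, generating a random salt if needed."""
    salt = salt or os.urandom(16)
    # PBKDF2 iterates the hash many times to slow down brute-force guessing.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The server never needs to store the password itself; the one-way property means the digest alone is useless to an attacker except as a target for guessing.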

Basically, a hash function is a fingerprint function. It takes a variable-length input -- anywhere from a single byte to a file terabytes in length -- and converts it to a fixed-length string: 20 bytes, for example.
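The fixed-length property is easy to see in practice. A minimal sketch using Python's standard library (SHA-256 is used here as a modern example; the essay's SHA-1 produces 20-byte digests):

```python
import hashlib

# Inputs of wildly different sizes all map to a digest of the same length.
for message in [b"a", b"hello, world", b"x" * 1_000_000]:
    digest = hashlib.sha256(message).hexdigest()
    print(f"{len(message):>9} bytes in -> {len(digest) // 2} bytes out: {digest[:16]}...")
```

No matter whether the input is one byte or a million, the output is always 32 bytes.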

One-way hash functions are supposed to have two properties. First, they're one-way. This means that it is easy to take an input and compute the hash value, but it's impossible to take a hash value and recreate the original input. By "impossible" I mean "can't be done in any reasonable amount of time."

Second, they're collision-free. This means that even though there are an infinite number of inputs for every hash value, you're never going to find two of them. Again, "never" is defined as above. The cryptographic reasoning behind these two properties is subtle, but any cryptographic text talks about them.
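Why "never" depends on the digest length can be demonstrated by deliberately weakening a hash. For an n-bit digest, roughly 2^(n/2) random inputs suffice to find a collision (the "birthday bound"), which is why short digests are hopeless. This sketch, not from the essay, truncates SHA-256 to 3 bytes (24 bits), so a collision turns up after only a few thousand tries instead of the ~2^128 needed against the full function:

```python
import hashlib
import itertools

def truncated_hash(data, nbytes=3):
    """SHA-256 truncated to nbytes, deliberately weak for demonstration."""
    return hashlib.sha256(data).digest()[:nbytes]

seen = {}  # digest -> first message that produced it
for i in itertools.count():
    msg = str(i).encode()
    h = truncated_hash(msg)
    if h in seen:
        print(f"collision after {i + 1} tries: {seen[h]!r} and {msg!r}")
        break
    seen[h] = msg
```

With a 24-bit digest the expected effort is around 2^12 = 4,096 tries; with SHA-1's 160 bits it is around 2^80, which is what makes "never" a practical statement rather than a mathematical one.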

The hash function you're most likely to use routinely is SHA-1. Invented by the National Security Agency, it's been around since 1995. Recently, though, there have been some pretty impressive cryptanalytic attacks against the algorithm. The best attack is barely on the edge of feasibility, and not effective against all applications of SHA-1. But there's an old saying inside the NSA: "Attacks always get better; they never get worse." It's past time to abandon SHA-1.

There are near-term alternatives -- a related algorithm called SHA-256 is the most obvious -- but they're all based on the family of hash functions first developed in 1992. We've learned a lot more about the topic in the past 15 years, and can certainly do better.

Why the National Institute of Standards and Technology, or NIST, though? Because it has exactly the experience and reputation we want. We were in the same position with encryption functions in 1997. We needed to replace the Data Encryption Standard, but it wasn't obvious what should replace it. NIST decided to orchestrate a worldwide competition for a new encryption algorithm. There were 15 submissions from 10 countries -- I was part of the group that submitted Twofish -- and after four years of analysis and cryptanalysis, NIST chose the algorithm Rijndael to become the Advanced Encryption Standard (.pdf), or AES.

The AES competition was the most fun I've ever had in cryptography. Think of it as a giant cryptographic demolition derby: A bunch of us put our best work into the ring, and then we beat on each other until there was only one standing. It was really more academic and structured than that, but the process stimulated a lot of research in block-cipher design and cryptanalysis. I personally learned an enormous amount about those topics from the AES competition, and we as a community benefited immeasurably.

NIST did a great job managing the AES process, so it's the perfect choice to do the same thing with hash functions. And it's doing just that (.pdf). Last year and the year before, NIST sponsored two workshops to discuss the requirements for a new hash function, and last month it announced a competition to choose a replacement for SHA-1. Submissions will be due in fall 2008, and a single standard is scheduled to be chosen by the end of 2011.

Yes, this is a reasonable schedule. Designing a secure hash function seems harder than designing a secure encryption algorithm, although we don't know whether this is inherently true of the mathematics or simply a result of our imperfect knowledge. Producing a new secure hash standard is going to take a while. Luckily, we have an interim solution in SHA-256.

Now, if you'll excuse me, the Twofish team needs to reconstitute and get to work on an Advanced Hash Standard submission.

This essay originally appeared on Wired.com.

EDITED TO ADD (2/8): Every time I write about one-way hash functions, I get responses from people claiming they can't possibly be secure because an infinite number of texts hash to the same short (160-bit, in the case of SHA-1) hash value. Yes, of course an infinite number of texts hash to the same value; that's the way the function works. But the odds of it happening naturally are less than the odds of all the air molecules bunching up in the corner of the room and suffocating you, and you can't force it to happen either. Right now, several groups are trying to implement Xiaoyun Wang's attack against SHA-1. I predict one of them will find two texts that hash to the same value this year -- it will demonstrate that the hash function is broken and be really big news.

Posted on February 08, 2007 at 09:07 AM • 59 Comments


Corsham Bunker

Fascinating article on the Corsham bunker, the secret underground UK site the government was to retreat to in the event of a nuclear war.

Until two years ago, the existence of this complex, variously codenamed Burlington, Stockwell, Turnstile or 3-Site, was classified. It was a huge yet very secret complex, where the government and 6,000 apparatchiks would have taken refuge for 90 days during all-out thermonuclear war. Solid yet cavernous, surrounded by 100ft-deep reinforced concrete walls within a subterranean 240-acre limestone quarry just outside Corsham, it drives one to imagine the ghosts of people who, thank God, never took refuge here.

Posted on February 07, 2007 at 02:40 PM • 43 Comments


Eyewitness Identification Reform

According to this article, "Mistaken eyewitness identification is the leading cause of wrongful convictions." Given what I've been reading recently about memory and the brain, this does not surprise me at all.

New Mexico is currently debating a bill reforming eyewitness identification procedures:

Under the proposed regulations, an eyewitness must provide a written description before a lineup takes place; there must be at least six individuals in a live lineup and 10 photos in a photographic line-up; and the members of the lineup must be shown sequentially rather than simultaneously.

The bill would also restrict the amount of time in which law enforcement could bring a suspect by for a physical identification by a victim or witness to within one hour after the crime was reported. Anything beyond one hour would require a lineup with multiple photos or people.

I don't have access to any of the psychological or criminology studies that back these reforms up, but the bill is being supported by the right sorts of people.

Posted on February 07, 2007 at 06:38 AM • 32 Comments


The Psychology of Security

I just posted a long essay (pdf available here) on my website, exploring how psychology can help explain the difference between the feeling of security and the reality of security.

We make security trade-offs, large and small, every day. We make them when we decide to lock our doors in the morning, when we choose our driving route, and when we decide whether we're going to pay for something via check, credit card, or cash. They're often not the only factor in a decision, but they're a contributing factor. And most of the time, we don't even realize it. We make security trade-offs intuitively. Most decisions are default decisions, and there have been many popular books that explore reaction, intuition, choice, and decision.

These intuitive choices are central to life on this planet. Every living thing makes security trade-offs, mostly as a species -- evolving this way instead of that way -- but also as individuals. Imagine a rabbit sitting in a field, eating clover. Suddenly, he spies a fox. He's going to make a security trade-off: should I stay or should I flee? The rabbits that are good at making these trade-offs are going to live to reproduce, while the rabbits that are bad at it are going to get eaten or starve. This means that, as a successful species on the planet, humans should be really good at making security trade-offs.

And yet at the same time we seem hopelessly bad at it. We get it wrong all the time. We exaggerate some risks while minimizing others. We exaggerate some costs while minimizing others. Even simple trade-offs we get wrong, wrong, wrong -- again and again. A Vulcan studying human security behavior would shake his head in amazement.

The truth is that we're not hopelessly bad at making security trade-offs. We are very well adapted to dealing with the security environment endemic to hominids living in small family groups on the highland plains of East Africa. It's just that the environment in New York in 2006 is different from Kenya circa 100,000 BC. And so our feeling of security diverges from the reality of security, and we get things wrong.

The essay examines particular brain heuristics, how they work and how they fail, in an attempt to explain why our feeling of security so often diverges from reality. I'm giving a talk on the topic at the RSA Conference today at 3:00 PM. Dark Reading posted an article on this, also discussed on Slashdot. CSO Online also has a podcast interview with me on the topic. I expect there'll be more press coverage this week.

The essay is really still in draft, and I would very much appreciate any and all comments, criticisms, additions, corrections, suggestions for further research, and so on. I think security technology has a lot to learn from psychology, and that I've only scratched the surface of the interesting and relevant research -- and what it means.

EDITED TO ADD (2/7): Two more articles on topic.

Posted on February 06, 2007 at 01:44 PM • 98 Comments


Random Observation from the RSA Conference

Protegrity? Counterstorm? Authentify?

I officially declare that the industry has run out of good names for security companies.

Posted on February 06, 2007 at 10:03 AM • 50 Comments


Dave Barry on Super Bowl Security

Funny:

Also, if you are planning to go to the Super Bowl game on Sunday, be aware that additional security measures will be in effect, as follows:
  • WHEN TO ARRIVE: All persons attending the game MUST arrive at the stadium no later than 7:45 a.m. yesterday. There will be NO EXCEPTIONS. I am talking to you, Prince.

  • PERSONAL BELONGINGS: Fans will not be allowed to take anything into the stadium except medically required organs. If you need, for example, both kidneys, you will be required to produce a note from your doctor, as well as your actual doctor.

  • TAILGATING: There will be no tailgating. This is to thwart the terrorists, who are believed to have been planning a tailgate-based attack (code name "Death Hibachi") involving the detonation of a nuclear bratwurst capable of leveling South Florida, if South Florida was not already so level to begin with.

  • TALKING: There will be no talking.

  • PERMITTED CHEERS: The National Football League, in conjunction with the Department of Homeland Security, the FBI, the CIA and Vice President Cheney, has approved the following three cheers for use during the game: (1) "You suck, ref!" (2) "Come on, (Name of Team)!" (3) "You suck, Prince!"

Back in 2004, I wrote a more serious essay on security at the World Series.

Posted on February 06, 2007 at 07:31 AM • 12 Comments


Powered by Movable Type 3.2. Photo at top by Steve Woit.

Schneier.com is a personal website. Opinions expressed are not necessarily those of BT Counterpane.
