Schneier on Security

Entries Tagged “usability”

Educating Users

I've met users, and they're not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they're not technologists, let alone security people. Of course, they're making all sorts of security mistakes. I too have tried educating users, and I agree that it's largely futile.

Part of the problem is generational. We've seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don't-get-it generation will die off eventually, we won't suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there's no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London's financial district. Someone stood on a street corner and handed out CDs, saying they were a "special Valentine's Day promotion." Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign -- all it did was alert some computer on the Internet that it was running -- but it could just as easily have been malicious. The researchers concluded that users don't care about security. That's simply not true. Users care about security -- they just don't understand it.

I don't see a failure of education; I see a failure of technology. It shouldn't have been possible for those users to run that CD, or for a random program stuffed into a banking computer to "phone home" across the Internet.

The real problem is that computers don't work well. The industry has convinced everyone that people need a computer to survive, and at the same time it's made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I'm likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there's no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn't something you do instead of education; it's a form of education -- a very primal form of education best suited to children and animals (and experts aren't so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus's essay here, if you are a subscriber. (Subscriptions are free to "qualified" people.)

EDITED TO ADD (9/11): Here's Marcus's half.

Posted on August 22, 2006 at 12:35 PM

Human/Bear Security Trade-Off

Interesting example:

Back in the 1980s, Yosemite National Park was having a serious problem with bears: They would wander into campgrounds and break into the garbage bins. This put both bears and people at risk. So the Park Service started installing armored garbage cans that were tricky to open -- you had to swing a latch, align two bits of handle, that sort of thing. But it turns out it's actually quite tricky to get the design of these cans just right. Make it too complex and people can't get them open to put away their garbage in the first place. Said one park ranger, "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

It's a tough balance to strike. People are smart, but they're impatient and unwilling to spend a lot of time solving the problem. Bears are dumb, but they're tenacious and are willing to spend hours solving the problem. Given those two constraints, creating a trash can that can both work for people and not work for bears is not easy.

Posted on August 18, 2006 at 7:02 AM

When "Off" Doesn't Mean Off

According to the specs of the Nintendo Wii (the company's new game machine), "Wii can communicate with the Internet even when the power is turned off." Nintendo accentuates the positive: "This WiiConnect24 service delivers a new surprise or game update, even if users do not play with Wii," while ignoring the possibility that Nintendo can deactivate a game if they choose to do so, or that someone else can deliver a different -- not so wanted -- surprise.

We all know that, but what's interesting here is that Nintendo is changing the meaning of the word "off." We are all conditioned to believe that "off" means off, and therefore safe. But in Nintendo's case, "off" really means something like "on standby." If users expect the Nintendo Wii to be truly off, they need to pull the power plug -- assuming there isn't a battery foiling that tactic. Maybe they need to pull both the power plug and the Ethernet cable. Unless they have a wireless network at home.

Maybe there is no way to turn the Nintendo Wii off.

There's a serious security problem here, made worse by a bad user interface. "Off" should mean off.

Posted on May 10, 2006 at 6:45 AM

Microsoft Vista's Endless Security Warnings

Paul Thurrott has posted an excellent essay on the problems with Windows Vista. Most interesting to me is how they implement UAP (User Account Protection):

Modern operating systems like Linux and Mac OS X operate under a security model where even administrative users don't get full access to certain features unless they provide an in-place logon before performing any task that might harm the system. This type of security model protects users from themselves, and it is something that Microsoft should have added to Windows years and years ago.

Here's the good news. In Windows Vista, Microsoft is indeed moving to this kind of security model. The feature is called User Account Protection (UAP) and, as you might expect, it prevents even administrative users from performing potentially dangerous tasks without first providing security credentials, thus ensuring that the user understands what they're doing before making a critical mistake. It sounds like a good system. But this is Microsoft we're talking about here. They completely botched UAP.

The bad news, then, is that UAP is a sad, sad joke. It's the most annoying feature that Microsoft has ever added to any software product, and yes, that includes that ridiculous Clippy character from older Office versions. The problem with UAP is that it throws up an unbelievable number of warning dialogs for even the simplest of tasks. That these dialogs pop up repeatedly for the same action would be comical if it weren't so amazingly frustrating. It would be hilarious if it weren't going to affect hundreds of millions of people in a few short months. It is, in fact, almost criminal in its insidiousness.

Let's look at a typical example. One of the first things I do whenever I install a new Windows version is download and install Mozilla Firefox. If we forget, for a moment, the number of warning dialogs we get during the download and install process (including a brazen security warning from Windows Firewall for which Microsoft should be chastised), let's just examine one crucial, often overlooked issue. Once Firefox is installed, there are two icons on my Desktop I'd like to remove: The Setup application itself and a shortcut to Firefox. So I select both icons and drag them to the Recycle Bin. Simple, right?

Wrong. Here's what you have to go through to actually delete those files in Windows Vista. First, you get a File Access Denied dialog explaining that you don't, in fact, have permission to delete a ... shortcut?? To an application you just installed??? Seriously?

OK, fine. You can click a Continue button to "complete this operation." But that doesn't complete anything. It just clears the desktop for the next dialog, which is a Windows Security window. Here, you need to give your permission to continue something opaquely called a "File Operation." Click Allow, and you're done. Hey, that's not too bad, right? Just two dialogs to read, understand, and then respond correctly to. What's the big deal?

What if you're doing something a bit more complicated? Well, lucky you, the dialogs stack right up, one after the other, in a seemingly never-ending display of stupidity. Indeed, sometimes you'll find yourself unable to do certain things for no good reason, and you click Allow buttons until you're blue in the face. It will never stop bothering you, unless you agree to stop your silliness and leave that file on the desktop where it belongs. Mark my words, this will happen to you. And you will hate it.

The problem with lots of warning dialog boxes is that they don't provide security. Users stop reading them. They think of them as annoyances, as an extra click required to get a feature to work. Clicking through gets embedded into muscle memory, and when it actually matters the user won't even realize it.

Jeff Atwood says the same thing:

The problem with the Security Through Endless Warning Dialogs school of thought is that it doesn't work. All those earnest warning dialogs eventually blend together into a giant "click here to get work done" button that nobody bothers to read any more. The operating system cries wolf so much that when a real wolf -- in the form of a virus or malware -- rolls around, you'll mindlessly allow it access to whatever it wants, just out of habit.

So does Rick Strahl:

Then there are the security dialogs. Ah yes, now we're making progress: Ask users on EVERY program you launch that isn't signed whether they want to elevate permissions. Uh huh, this is going to work REAL WELL. We know how well that worked with unsigned ActiveX controls in Internet Explorer -- so well that even Microsoft isn't signing most of its own ActiveX controls. Give too many warnings that are not quite reasonable and people will never read the dialogs and just click them anyway… I know I started doing that in the short use I've had on Vista.

These dialog boxes are not security for the user, they're CYA security from the user. When some piece of malware trashes your system, Microsoft can say: "You gave the program permission to do that; it's not our fault."

Warning dialog boxes are only effective if the user has the ability to make intelligent decisions about the warnings. If the user cannot do that, they're just annoyances. And they're annoyances that don't improve security.

EDITED TO ADD (5/8): Commentary.

Posted on April 24, 2006 at 1:43 PM

Why Phishing Works

Interesting paper.

Abstract:

To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.

Here's an article on the paper.

Posted on April 4, 2006 at 2:18 PM

The New Internet Explorer

I'm just starting to read about the new security features in Internet Explorer 7. So far, I like what I am reading.

IE 7 requires that all browser windows display an address bar. This helps foil attackers that operate by popping up new windows masquerading as pages on a legitimate site, when in fact the site is fraudulent. By requiring an address bar, users will immediately see the true URL of the displayed page, making these types of attacks more obvious. If you think you're looking at www.microsoft.com, but the browser address bar says www.illhackyou.net, you ought to be suspicious.

I use Opera, and have long used the address bar to "check" on URLs. This is an excellent idea. So is this:

In early November, a bunch of Web browser developers got together and started fleshing out standards for address bar coloring, which can cue users to secured connections. Under the proposal laid out by IE 7 team member Rob Franco, even sites that use a standard SSL certificate will display a standard white address bar. Sites that use a stronger, as yet undetermined level of protection will use a green bar.

I like easy visual indications about what's going on. And I really like that SSL is generic white, because it really doesn't prove that you're communicating with the site you think you're communicating with. This feature helps with that, though:

Franco also said that when navigating to an SSL-protected site, the IE 7 address bar will display the business name and certification authority's name in the address bar.
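
To make the scheme concrete, here's a minimal sketch of that coloring logic in Python. The white and green levels come straight from the quotes above; the Certificate type, the function names, and the red case for a certificate that doesn't verify are illustrative assumptions, not IE 7's actual interfaces.

    # A sketch of the address-bar scheme quoted above. Colors and
    # levels follow the proposal; types and names are hypothetical.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Certificate:
        valid: bool                # chain verifies to a trusted root
        extended_validation: bool  # the stronger, as-yet-undetermined level
        organization: str          # business name from the certificate
        issuer: str                # certification authority's name

    def address_bar(cert: Optional[Certificate]) -> Tuple[str, str]:
        """Return (bar color, identity label) for the current page."""
        if cert is None or not cert.valid:
            return ("red", "not secure")  # assumed behavior for a broken certificate
        if cert.extended_validation:
            # The stronger level earns the green bar, with the business
            # name and certification authority shown next to the URL.
            return ("green", "%s [%s]" % (cert.organization, cert.issuer))
        return ("white", "")              # standard SSL: encrypted, identity unproven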

Some of the security measures in IE 7 weaken the integration between the browser and the operating system:

People using Windows Vista beta 2 will find a new feature called Protected Mode, which renders IE 7 unable to modify system files and settings. This essentially breaks down part of the integration between IE and Windows itself.

Think of it as a wall between IE and the rest of the operating system. No, the code won't be perfect, and yes, there'll be ways found to circumvent this security, but this is an important and long-overdue feature.

The majority of IE's notorious security flaws stem from its pervasive integration with Windows. That is a feature no other Web browser offers -- and an ability that Vista's Protected Mode intends to mitigate. IE 7 obviously won't remove all of that tight integration. Lacking deep architectural changes, the effort has focused instead on hardening or eliminating potential vulnerabilities. Unfortunately, this approach requires Microsoft to anticipate everything that could go wrong and block it in advance -- hardly a surefire way to secure a browser.

That last sentence describes the general Internet attitude: allow everything that is not explicitly denied, rather than deny everything that is not explicitly allowed.
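
The difference is easy to see in a few lines of code; the operation names here are made up:

    # Default allow vs. default deny, with invented operation names.
    BLACKLIST = {"format_disk"}                # allow by default: block what you anticipated
    WHITELIST = {"render_page", "play_sound"}  # deny by default: allow what you vetted

    def allowed_default_allow(operation):
        return operation not in BLACKLIST

    def allowed_default_deny(operation):
        return operation in WHITELIST

    # A novel attack sails through the first policy and is stopped by
    # the second -- nobody thought to blacklist it in advance.
    assert allowed_default_allow("overwrite_bootloader")
    assert not allowed_default_deny("overwrite_bootloader")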

Also, you'll have to wait until Vista to use it:

...this capability will not be available in Windows XP because it's woven directly into Windows Vista itself.

There are also some good changes under the hood:

IE 7 does eliminate a great deal of legacy code that dates back to the IE 4 days, which is a welcome development.

And:

Microsoft has rewritten a good bit of IE 7's core code to help combat attacks that rely on malformed URLs (that typically cause a buffer overflow). It now funnels all URL processing through a single function (thus reducing the amount of code that "looks" at URLs).
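
That chokepoint approach is easy to sketch. Here's the general pattern in Python, with hypothetical checks and limits -- this is not IE 7's actual code:

    # One validating parser that every URL consumer calls; a fix here
    # fixes navigation, history, caching, etc. all at once.
    from urllib.parse import urlsplit

    MAX_URL_LENGTH = 2048   # refuse oversized input before interpreting it

    class MalformedURL(Exception):
        pass

    def canonical_url(raw):
        """The only function allowed to interpret a raw URL string."""
        if len(raw) > MAX_URL_LENGTH:
            raise MalformedURL("URL too long")
        parts = urlsplit(raw)
        if parts.scheme not in ("http", "https", "ftp"):
            raise MalformedURL("unsupported scheme")
        if not parts.hostname:
            raise MalformedURL("missing host")
        return parts.geturl()

    # canonical_url("http://www.example.com/login")  -> normalized URL
    # canonical_url("javascript:alert(1)")           -> raises MalformedURL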

All good stuff, but I agree with this conclusion:

IE 7 offers several new security features, but it's hardly a given that the situation will improve. There has already been a set of security updates for IE 7 beta 1 released for both Windows Vista and Windows XP computers. Security vulnerabilities in a beta product shouldn't be alarming (IE 7 is hardly what you'd consider "finished" at this point), but it may be a sign that the product's architecture and design still have fundamental security issues.

I'm not switching from Opera yet, and my second choice is still Firefox. But the masses still use IE, and our security depends in part on those masses keeping their computers worm-free and bot-free.

NOTE: Here's some info on how to get your own copy of Internet Explorer 7 beta 2.

Posted on February 9, 2006 at 3:37 PM

Petnames

Interesting paper:

Zooko's Triangle argues that names cannot be global, secure, and memorable, all at the same time. Domain names are an example: they are global, and memorable, but as the rapid rise of phishing demonstrates, they are not secure.

Though no single name can have all three properties, the petname system does indeed embody all three properties. Informal experiments with petname-like systems suggest that petnames can be both intuitive and effective. Experimental implementations already exist for simple extensions to existing browsers that could alleviate (possibly dramatically) the problems with phishing. As phishers gain sophistication, it seems compelling to experiment with petname systems as part of the solution.
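
The mechanism itself is tiny: the browser keeps a private table mapping a secure global identifier (a certificate fingerprint, say) to the memorable name the user chose. A sketch in Python, with invented fingerprints and names:

    # A petname table: secure global key -> private, memorable name.
    petnames = {}

    def assign(secure_key, petname):
        petnames[secure_key] = petname

    def lookup(secure_key):
        # Returns a name only if the user personally assigned one.
        return petnames.get(secure_key)

    # First visit: the user names the site after verifying it.
    assign("sha256:9f2a...e1", "my bank")

    # A phishing site presents a different key, so no petname appears;
    # the absence of "my bank" in the browser chrome is the warning.
    assert lookup("sha256:9f2a...e1") == "my bank"
    assert lookup("sha256:77c0...4b") is None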

Posted on February 8, 2006 at 11:25 AM

Passlogix Misquotes Me in Their PR Material

I recently received a PR e-mail from a company called Passlogix:

Password security is still a very prevalent threat, 2005 had security gurus like Bruce Schneier publicly suggest that you actually write them down on sticky-notes. A recent survey stated 78% of employees use passwords as their primary forms of security, 52% use the same password for their accounts -- yet 77% struggle to remember their passwords.

Actually, I don't. I recommend writing your passwords down and keeping them in your wallet.

I know nothing about this company, but I am unhappy at their misrepresentation of what I said.

Posted on February 7, 2006 at 7:23 AM

Forged Credentials and Security

In Beyond Fear, I wrote about the difficulty of verifying credentials. Here's a real story about that very problem:

When Frank Coco pulled over a 24-year-old carpenter for driving erratically on Interstate 55, Coco was furious. Coco was driving his white Chevy Caprice with flashing lights and had to race in front of the young man and slam on his brakes to force him to stop.

Coco flashed his badge and shouted at the driver, Joe Lilja: "I'm a cop and when I tell you to pull over, you pull over, you motherf-----!"

Coco punched Lilja in the face and tried to drag him out of his car.

But Lilja wasn't resisting arrest. He wasn't even sure what he'd done wrong.

"I thought, 'Oh my God, I can't believe he's hitting me,' " Lilja recalled.

It was only after Lilja sped off to escape -- leading Coco on a tire-squealing, 90-mph chase through the southwest suburbs -- that Lilja learned the truth.

Coco wasn't a cop at all.

He was a criminal.

There's no obvious way to solve this. This is some of what I wrote in Beyond Fear:

Authentication systems suffer when they are rarely used and when people aren't trained to use them.

[...]

Imagine you're on an airplane, and Man A starts attacking a flight attendant. Man B jumps out of his seat, announces that he's a sky marshal, and that he's taking control of the flight and the attacker. (Presumably, the rest of the plane has subdued Man A by now.) Man C then stands up and says: "Don't believe Man B. He's not a sky marshal. He's one of Man A's cohorts. I'm really the sky marshal."

What do you do? You could ask Man B for his sky marshal identification card, but how do you know what an authentic one looks like? If sky marshals travel completely incognito, perhaps neither the pilots nor the flight attendants know what a sky marshal identification card looks like. It doesn't matter if the identification card is hard to forge if the person authenticating the credential doesn't have any idea what a real card looks like.

[...]

Many authentication systems are even more informal. When someone knocks on your door wearing an electric company uniform, you assume she's there to read the meter. Similarly with deliverymen, service workers, and parking lot attendants. When I return my rental car, I don't think twice about giving the keys to someone wearing the correct color uniform. And how often do people inspect a police officer's badge? The potential for intimidation makes this security system even less effective.

Posted on January 13, 2006 at 7:00 AM

Metadata in MS Office

Hidden metadata is in the news again. The New York Times reported that an unsigned Microsoft Word document being circulated by the Democratic National Committee was actually written by, wait for it, the Democratic National Committee.

Okay, so that's not much of a revelation, but it does serve to remind us that there can be all sorts of unintended information hidden in Microsoft Office documents. The particular bit of unintended information that precipitated this news story is the metadata.

Metadata is information on who created the file, what it was originally called, etc. To see your metadata, open a file, go to the "File" menu, and choose "Properties."
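
Those fields can also be read programmatically. Here's a minimal sketch using the third-party python-docx library; note that it handles only the newer .docx format (older .doc files need other tools), and the filename is invented:

    # pip install python-docx
    from docx import Document

    doc = Document("report.docx")   # hypothetical file
    props = doc.core_properties

    # The same fields the Properties dialog shows.
    for field in ("title", "subject", "author", "last_modified_by",
                  "created", "modified", "revision"):
        print(field, ":", getattr(props, field))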

I'll bet at least some of you will be really surprised by what's in there. Not because it's secret, but because it has nothing to do with you or your document. That's because metadata follows the file, and not its contents.

Here's what I do when I want to create a MS Word document. Maybe it's a file I've written, and maybe it's a file I received from someone else. I find some other document that has basically the same style I want, open it up, delete all the contents, and save it under a new filename. MS Word doesn't change the metadata, so whatever was in the "Title," "Subject," "Author," "Company," and other fields of the original document remains in my new document. This means that occasionally those metadata fields are filled with information I've never seen before, from who knows where. I'm sure I'm not the only one who uses this trick to avoid dealing with MS Word stylesheets. So metadata is much less a smoking gun than many make it out to be.

I don't mean this to minimize the problem of hidden data in Microsoft Office documents. It's not just the metadata, but comments, deleted parts of the document, even parts of other documents (it's happened).

I have two recommendations regarding Microsoft Office and hidden data. The first is to realize that programs like Word and Excel are designed for authoring documents, not for publishing them. Get into the habit of saving your documents as PDFs before distributing them. (Although if you're going to redact a PDF document, be smart about it or you'll have similar problems.)

The second is to install Microsoft's tool for deleting hidden data. (Works for Office 2003; there are third-party tools for older versions.) Or at least read the page about deleting private data in MS Office files. And to follow through on deleting data.
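
If you must distribute a Word file rather than a PDF, a scrub pass along these lines helps. Again a sketch with python-docx and invented filenames; it clears only the core metadata fields -- comments in the text, tracked changes, and other hidden data still need a tool like Microsoft's:

    from docx import Document

    def scrub(path_in, path_out):
        # Blank the identifying core fields and save a fresh copy.
        doc = Document(path_in)
        props = doc.core_properties
        for field in ("title", "subject", "author",
                      "last_modified_by", "keywords", "comments"):
            setattr(props, field, "")
        doc.save(path_out)

    scrub("report.docx", "report-clean.docx")   # hypothetical filenames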

This probably won't work for many of us, though. The last sentence of the article explains why:

"The real scandal here," Mr. Max told The Los Angeles Times after Democrats expressed outrage over the White House's fingerprints on the testimony, "is that after 15 years of using Microsoft Word, I don't know how to turn off 'track changes.'"

Posted on November 14, 2005 at 12:07 PM
