
Schneier on Security: Blog Entries Tagged usability


Page 4 of 8

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.


EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Laissez-Faire Access Control

Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access control they need to do their jobs, and audit the results. This interesting paper, "Laissez-Faire File Sharing," tries to formalize this sort of access control.

Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system's security.

We observe that the failure modes of file systems that enforce centrally-imposed access control policies are similar to the failure modes of centrally-planned economies: individuals either learn to circumvent these restrictions as matters of necessity or desert the system entirely, subverting the goals behind the central policy.

We formalize requirements for laissez-faire sharing, which parallel the requirements of free market economies, to better address the file sharing needs of information workers. Because individuals are less likely to feel compelled to circumvent systems that meet these laissez-faire requirements, such systems have the potential to increase both productivity and security.

Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.

Posted on November 9, 2009 at 6:59 AM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: "We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don't think from a word-of-mouth perspective we ever recovered from that."

Commentary:

Vista's failure, and Ballmer's faulting of security for it, is a bit of a case of being careful what you wish for. Vista (codenamed "Longhorn" during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The longer-term goal, though, was to make Vista the most secure operating system in Microsoft's history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually a result of unused services that were active by default. Microsoft's own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box made it less palatable to users.

Now security obstacles aren't the only ills that Vista suffered. Huge memory footprint, incompatible graphics requirements, slow responsiveness and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; and it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista's endless security warnings. The problem is that they were almost always false alarms, and there were no adverse effects of ignoring them. So users did, which means they ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. "We don't know what to do here, so we'll put up a warning and ask the user." But unless the users have the information and the expertise to make the decision, they're not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I'm hoping Windows 7 is worth upgrading to. We'll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Unauthentication

In computer security, a lot of effort is spent on the authentication problem. Whether it's passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated—and hopefully more secure—ways for you to prove you are who you say you are over the Internet.

This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you're no longer there? How do you unauthenticate yourself?

My home computer requires me to log out or turn my computer off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.

The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity—five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and I regularly get annoyed at my corporate e-mail system.
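The tuning problem described above is easy to see in code. As an illustrative sketch (the `IdleTimer` name and the five-minute default are my own, not any particular system's API), an inactivity timer is just a timestamp and a threshold; the platform-specific call that actually locks the screen is omitted:

```python
import time

IDLE_TIMEOUT = 5 * 60  # five minutes, in seconds -- the value you'd fine-tune

class IdleTimer:
    """Tracks the last user activity and decides when to lock the session."""

    def __init__(self, timeout=IDLE_TIMEOUT):
        self.timeout = timeout
        self.last_activity = time.monotonic()

    def touch(self):
        """Call on every keystroke or mouse event to reset the idle clock."""
        self.last_activity = time.monotonic()

    def should_lock(self, now=None):
        """True once the inactivity window has elapsed."""
        if now is None:
            now = time.monotonic()
        return now - self.last_activity >= self.timeout
```

The entire usability trade-off lives in that one `timeout` constant: shrink it and users get logged out mid-thought; grow it and the unattended window widens.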

Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.

That's expensive, though. A research project used a Bluetooth device, like a cellphone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.
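The decision logic in such a proximity system can be sketched simply. This is a hypothetical illustration, not the research project's code: obtaining an actual RSSI reading is platform-specific, and the threshold value here is invented. Signal strength stands in for distance, with a missing reading treated as "out of range":

```python
# Lock the computer when a paired Bluetooth device's signal weakens past a
# threshold. Fetching the reading and locking the screen are platform-specific
# calls not shown here; this is only the decision rule.

RSSI_LOCK_THRESHOLD = -70  # dBm; more negative roughly means farther away

def should_lock(rssi_reading):
    """Decide whether the device has moved out of range.

    rssi_reading is the last signal-strength sample in dBm, or None if the
    device was not seen at all during the scan.
    """
    if rssi_reading is None:
        return True  # device gone entirely: definitely lock
    return rssi_reading < RSSI_LOCK_THRESHOLD
```

In practice you would also want hysteresis (several weak readings in a row before locking), since RSSI is noisy and a single bad sample shouldn't log anyone out.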

Some systems log people out after every transaction. This wouldn't work for computers, but it can work for ATMs. The machine spits my card out before it gives me my cash, or just requires a card swipe, and makes sure I take it out of the machine. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.

There's a physical analogue that everyone can explain: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first instance is a system that automatically logs you out, and the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.

Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.

This essay originally appeared on ThreatPost.

Posted on September 28, 2009 at 1:34 PM

Password Advice

Here's some complicated advice on securing passwords that -- I'll bet -- no one follows.

  • DO use a password manager such as those reviewed by Scott Dunn in his Sept. 18, 2008, Insider Tips column. Although Scott focused on free programs, I really like CallPod's Keeper, a $15 utility that comes in Windows, Mac, and iPhone versions and allows you to keep all your passwords in sync. Find more information about the program and a download link for the 15-day free-trial version on the vendor's site.

  • DO change passwords frequently. I change mine every six months or whenever I sign in to a site I haven't visited in a long time. Don't reuse old passwords. Password managers can assign expiration dates to your passwords and remind you when the passwords are about to expire.

  • DO keep your passwords secret. Putting them into a file on your computer, e-mailing them to others, or writing them on a piece of paper in your desk is tantamount to giving them away. If you must allow someone else access to an account, create a temporary password just for them and then change it back immediately afterward.

    No matter how much you may trust your friends or colleagues, you can't trust their computers. If they need ongoing access, consider creating a separate account with limited privileges for them to use.

  • DON'T use passwords made up of dictionary words, birthdays, family and pet names, addresses, or any other personal information. Don't use repeated characters such as 111 or sequences like abc, qwerty, or 123 in any part of your password.

  • DON'T use the same password for different sites. Otherwise, someone who culls your Facebook or Twitter password in a phishing exploit could, for example, access your bank account.

  • DON'T allow your computer to sign in automatically on boot-up and thus trigger automatic e-mail, chat, or browser sign-ins. Avoid using the same Windows sign-in password on two different computers.

  • DON'T use the "remember me" or automatic sign-in option available on many Web sites. Keep sign-ins under the control of your password manager instead.

  • DON'T enter passwords on a computer you don't control — such as a friend's computer — because you don't know what spyware or keyloggers might be on that machine.

  • DON'T access password-protected accounts over open Wi-Fi networks — or any other network you don't trust — unless the site is secured via https. Use a VPN if you travel a lot. (See Ian "Gizmo" Richards' Dec. 11, 2008, Best Software column, "Connect safely over open Wi-Fi networks," for Wi-Fi security tips.)

  • DON'T enter a password or even your account name in any Web page you access via an e-mail link. These are most likely phishing scams. Instead, enter the normal URL for that site directly into your browser, and proceed to the page in question from there.

I regularly break seven of those rules. How about you? (Here's my advice on choosing secure passwords.)
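Some of the DON'T rules are at least mechanically checkable, which is presumably what the recommended password managers do internally. As an illustrative sketch (not any real product's logic), a validator for the repeated-character, sequence, and personal-information rules might look like this:

```python
import re

# Any three consecutive characters of these count as a forbidden run,
# e.g. "abc", "qwe", "123".
SEQUENCES = ("abcdefghijklmnopqrstuvwxyz", "qwertyuiop", "0123456789")

def _has_sequence(low):
    return any(seq[i:i + 3] in low
               for seq in SEQUENCES
               for i in range(len(seq) - 2))

def violates_rules(password, personal_info=()):
    """Return a list of rule violations; an empty list means none were found."""
    problems = []
    low = password.lower()
    if re.search(r"(.)\1\1", password):        # e.g. "111" or "aaa"
        problems.append("repeated character run")
    if _has_sequence(low):                      # e.g. "abc", "qwerty", "123"
        problems.append("keyboard or alphabet sequence")
    if any(word.lower() in low for word in personal_info if word):
        problems.append("contains personal information")
    return problems
```

Note what this can't check: reuse across sites, entry on untrusted machines, or open Wi-Fi, which is why most of the list remains advice rather than enforcement.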

Posted on August 10, 2009 at 6:57 AM

Security vs. Usability

Good essay: "When Security Gets in the Way."

The numerous incidents of defeating security measures prompts my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don't? The confusion has unexpected negative side-effects. I once discovered a computer system that was missing essential security patches. When I queried the computer's user, I discovered that the continual warning against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from "out there" that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: It is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions occasionally reported in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM

Too Many Security Warnings Results in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

Posted on August 4, 2009 at 10:01 AM

Gaze Tracking Software Protecting Privacy

Interesting use of gaze tracking software to protect privacy:

Chameleon uses gaze-tracking software and camera equipment to track an authorized reader's eyes to show only that one person the correct text. After a 15-second calibration period in which the software essentially "learns" the viewer's gaze patterns, anyone looking over that user's shoulder sees dummy text that randomly and constantly changes.

To tap the broader consumer market, Anderson built a more consumer-friendly version called PrivateEye, which can work with a simple Webcam. The software blurs a user's monitor when he or she turns away. It also detects other faces in the background, and a small video screen pops up to alert the user that someone is looking at the screen.

How effective this is will mostly come down to usability, but I like the idea of a system detecting whether anyone else is looking at my screen.

Slashdot story.

EDITED TO ADD (7/14): A demo.

Posted on July 14, 2009 at 6:20 AM

The Pros and Cons of Password Masking

Usability guru Jakob Nielsen opened up a can of worms when he made the case for unmasking passwords in his blog. I chimed in that I agreed. Almost 165 comments on my blog (and several articles, essays, and many other blog posts) later, the consensus is that we were wrong.

I was certainly too glib. Like any security countermeasure, password masking has value. But like any countermeasure, password masking is not a panacea. And the costs of password masking need to be balanced with the benefits.

The cost is accuracy. When users don't get visual feedback from what they're typing, they're more prone to make mistakes. This is especially true with character strings that have non-standard characters and capitalization. This has several ancillary costs:

  • Users get pissed off.
  • Users are more likely to choose easy-to-type passwords, reducing both mistakes and security. Removing password masking will make people more comfortable with complicated passwords: they'll become easier to memorize and easier to use.

The benefits of password masking are more obvious:

  • Security from shoulder surfing. If people can't look over your shoulder and see what you're typing, they're much less likely to be able to steal your password. Yes, they can look at your fingers instead, but that's much harder than looking at the screen. Surveillance cameras are also an issue: it's easier to watch someone's fingers on recorded video, but reading a cleartext password off a screen is trivial.

    In some situations, there is a trust dynamic involved. Do you type your password while your boss is standing over your shoulder watching? How about your spouse or partner? Your parent or child? Your teacher or students? At ATMs, there's a social convention of standing away from someone using the machine, but that convention doesn't apply to computers. You might not trust the person standing next to you enough to let him see your password, but don't feel comfortable telling him to look away. Password masking solves that social awkwardness.

  • Security from screen scraping malware. This is less of an issue; keyboard loggers are more common and unaffected by password masking. And if you have that kind of malware on your computer, you've got all sorts of problems.

  • A security "signal." Password masking alerts users, and I'm thinking users who aren't particularly security savvy, that passwords are a secret.

I believe that shoulder surfing isn't nearly the problem it's made out to be. One, lots of people use their computers in private, with no one looking over their shoulders. Two, personal handheld devices are used very close to the body, making shoulder surfing all that much harder. Three, it's hard to quickly and accurately memorize a random non-alphanumeric string that flashes on the screen for a second or so.

This is not to say that shoulder surfing isn't a threat. It is. And, as many readers pointed out, password masking is one of the reasons it isn't more of a threat. And the threat is greater for those who are not fluent computer users: slow typists and people who are likely to choose bad passwords. But I believe that the risks are overstated.

Password masking is definitely important on public terminals with short PINs. (I'm thinking of ATMs.) The value of the PIN is large, shoulder surfing is more common, and a four-digit PIN is easy to remember in any case.

And lastly, this problem largely disappears on the Internet on your personal computer. Most browsers include the ability to save and then automatically populate password fields, making the usability problem go away at the expense of another security problem (the security of the password becomes the security of the computer). There's a Firefox plugin that gets rid of password masking. And programs like my own Password Safe allow passwords to be cut and pasted into applications, also eliminating the usability problem.

One approach is to make it a configurable option. High-risk banking applications could turn password masking on by default; other applications could turn it off by default. Browsers in public locations could turn it on by default. I like this, but it complicates the user interface.

A reader mentioned BlackBerry's solution, which is to display each character briefly before masking it; that seems like an excellent compromise.
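That compromise reduces to a simple rendering rule: mask every character except, briefly, the most recently typed one. A minimal sketch of the rule as a pure function (the function name and the timing of when `reveal_last` flips back to False are my own; a real widget would re-render with `reveal_last=False` after a short delay or on the next keystroke):

```python
def render_masked(typed, reveal_last=True, mask="•"):
    """Render a password field BlackBerry-style: every character masked
    except (optionally) the most recently typed one."""
    if not typed:
        return ""
    if reveal_last:
        return mask * (len(typed) - 1) + typed[-1]
    return mask * len(typed)
```

The user gets per-character feedback against typos while only one character is ever exposed, and only for a moment, which sharply limits what a shoulder surfer can collect.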

I, for one, would like the option. I cannot type complicated WEP keys into Windows -- twice! What's the deal with that? -- without making mistakes. I cannot type my rarely used and very complicated PGP keys without making a mistake unless I turn off password masking. That's what I was reacting to when I said "I agree."

So was I wrong? Maybe. Okay, probably. Password masking definitely improves security; many readers pointed out that they regularly use their computer in crowded environments, and rely on password masking to protect their passwords. On the other hand, password masking reduces accuracy and makes it less likely that users will choose secure and hard-to-remember passwords. I will concede that the password-masking trade-off is more beneficial than I thought in my snap reaction, but also that the answer is not nearly as obvious as we have historically assumed.

Posted on July 3, 2009 at 1:42 PM

The Problem with Password Masking

I agree with this:

It's time to show most passwords in clear text as users type them. Providing feedback and visualizing the system's status have always been among the most basic usability principles. Showing undifferentiated bullets while users enter complex codes definitely fails to comply.

Most websites (and many other applications) mask passwords as users type them, and thereby theoretically prevent miscreants from looking over users' shoulders. Of course, a truly skilled criminal can simply look at the keyboard and note which keys are being pressed. So, password masking doesn't even protect fully against snoopers.

More importantly, there's usually nobody looking over your shoulder when you log in to a website. It's just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.

Shoulder surfing isn't very common, and cleartext passwords greatly reduce errors. It has long annoyed me when I can't see what I type: in Windows logins, in PGP, and so on.

EDITED TO ADD (6/26): To be clear, I'm not talking about PIN masking on public terminals like ATMs. I'm talking about password masking on personal computers.

EDITED TO ADD (6/30): Two articles on the subject.

Posted on June 26, 2009 at 6:17 AM


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.