Logging in from a desktop will require a special USB key, while accessing your data from a mobile device will similarly require a Bluetooth dongle. All non-Google services and apps will be exiled from reaching into your Gmail or Google Drive. Google's malware scanners will use a more intensive process to quarantine and analyze incoming documents. And if you forget your password, or lose your hardware login keys, you'll have to jump through more hoops than ever to regain access, the better to foil any intruders who would abuse that process to circumvent all of Google's other safeguards.
PoisonTap is an impressive hacking tool that can compromise computers via the USB port, even when they are password-protected. What's interesting is the chain of vulnerabilities the tool exploits. No individual vulnerability is a problem, but together they create a big problem.
Kamkar's trick works by chaining together a long, complex series of seemingly innocuous software security oversights that only together add up to a full-blown threat. When PoisonTap -- a tiny $5 Raspberry Pi microcomputer loaded with Kamkar's code and attached to a USB adapter -- is plugged into a computer's USB port, it starts impersonating a new ethernet connection. Even if the computer is already connected to Wi-Fi, PoisonTap is programmed to tell the victim's computer that any IP address accessed through that connection is actually on the computer's local network rather than the internet, fooling the machine into prioritizing its network connection to PoisonTap over that of the Wi-Fi network.
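The routing trick works because operating systems choose routes by longest-prefix match: a route covering half the address space (a /1) is more specific than the default route (a /0), so it wins for every destination. A minimal sketch using Python's standard `ipaddress` module (the addresses and route list are illustrative, not PoisonTap's actual code):

```python
import ipaddress

# Candidate routes: the legitimate Wi-Fi default route (prefix length 0)
# versus two PoisonTap-style routes that together claim all of IPv4
# space as "local" (prefix length 1).
routes = [
    ipaddress.ip_network("0.0.0.0/0"),    # default route via Wi-Fi
    ipaddress.ip_network("0.0.0.0/1"),    # "local" route via the USB gadget
    ipaddress.ip_network("128.0.0.0/1"),  # "local" route via the USB gadget
]

def best_route(dest, routes):
    """Longest-prefix match: the most specific route containing dest wins."""
    addr = ipaddress.ip_address(dest)
    matching = [r for r in routes if addr in r]
    return max(matching, key=lambda r: r.prefixlen)

# Every destination now matches a /1 before the /0 default route,
# so traffic is steered to the rogue "LAN" interface.
print(best_route("93.184.216.34", routes))  # 0.0.0.0/1
```

The point is that no single routing rule is broken; the OS is doing exactly what longest-prefix matching specifies.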
With that interception point established, the malicious USB device waits for any request from the user's browser for new web content; if you leave your browser open when you walk away from your machine, chances are there's at least one tab in your browser that's still periodically loading new bits of HTTP data like ads or news updates. When PoisonTap sees that request, it spoofs a response and feeds your browser its own payload: a page that contains a collection of iframes -- a technique for invisibly loading content from one website inside another -- that consist of carefully crafted versions of virtually every popular website address on the internet. (Kamkar pulled his list from web-popularity ranking service Alexa's top one million sites.)
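Generating such a payload is mechanical: one hidden iframe per target domain. A toy sketch (the domain list and markup are placeholders, not Kamkar's actual payload, which draws on a million domains):

```python
# Toy generator for a page of hidden iframes, one per target site.
# Three placeholder domains stand in for a real top-sites list.
targets = ["example.com", "example.net", "example.org"]

def build_payload(domains):
    """Return an HTML page that invisibly loads each domain in an iframe."""
    frames = "\n".join(
        f'<iframe src="http://{d}/" style="display:none"></iframe>'
        for d in domains
    )
    return f"<html><body>\n{frames}\n</body></html>"

page = build_payload(targets)
```

Because the iframes load over plain HTTP through PoisonTap's spoofed connection, the device can answer each request itself and plant content for every domain at once.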
There's more. Here's another article with more details. Also note that HTTPS is a protection.
Yesterday, I testified about this at a joint hearing of the Subcommittee on Communications and Technology, and the Subcommittee on Commerce, Manufacturing, and Trade -- both part of the Committee on Energy and Commerce of the US House of Representatives. Here's the video; my testimony starts around 1:10:10.
The topic was the Dyn attacks and the Internet of Things. I talked about different market failures that will affect security on the Internet of Things. One of them was this problem of emergent vulnerabilities. I worry that as we continue to connect things to the Internet, we're going to be seeing a lot of these sorts of attacks: chains of tiny vulnerabilities that combine into a massive security risk. It'll be hard to defend against these types of attacks. If no one product or process is to blame, no one has responsibility to fix the problem. So I gave a mostly Republican audience a pro-regulation message. They were surprisingly polite and receptive.
Every few years, a researcher replicates a security study by littering USB sticks around an organization's grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as "teachable moments" for others. "If only everyone was more security aware and had more security training," they say, "the Internet would be a much safer place."
Enough of that. The problem isn't the users: it's that we've designed our computer systems' security so badly that we demand the user do all of these counterintuitive things. Why can't users choose easy-to-remember passwords? Why can't they click on links in emails with wild abandon? Why can't they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?
Traditionally, we've thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This "either/or" thinking results in systems that are neither usable nor secure.
Our industry is littered with examples. First: security warnings. Despite researchers' good intentions, these warnings just inure people to them. I've read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don't see "the certificate has expired; are you sure you want to go to this webpage?" They see, "I'm an annoying message preventing you from reading a webpage. Click here to get rid of me."
Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the "I forgot my password" link as a way to bypass the system completely -- effectively falling back on the security of their e-mail account.
And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can't train users not to click on links when you've spent the past two decades teaching them that links are there to be clicked.
We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users' security goals without -- as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it -- "stress of mind, or knowledge of a long series of rules."
I've been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People -- and developers -- are finally starting to listen. Many security updates happen automatically so users don't have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user's system so they don't have to worry about embedded malware. And programs can run in sandboxes that don't compromise the entire computer. We've come a long way, but we have a lot further to go.
"Blame the victim" thinking is older than the Internet, of course. But that doesn't make it right. We owe it to our users to make the Information Age a safe place for everyone -- not just those with "security awareness."
This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.
For just a few bucks, you can pick up a USB stick that destroys almost anything that it's plugged into. Laptops, PCs, televisions, photo booths -- you name it.
Once a proof-of-concept, the pocket-sized USB stick now fits in any security tester's repertoire of tools and hacks, says the Hong Kong-based company that developed it. It works like this: when the USB Kill stick is plugged in, it rapidly charges its capacitors from the USB power supply, and then discharges -- all in a matter of seconds.
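The destructive mechanism is just stored capacitor energy dumped back into the data lines. As a back-of-the-envelope calculation (the component values here are hypothetical, chosen for scale; they are not the USB Kill's published specs), the energy in a charged capacitor is E = ½CV²:

```python
# Back-of-the-envelope: energy stored in a charged capacitor, E = 1/2 * C * V^2.
# Values are hypothetical, chosen only to illustrate the scale of the discharge.
capacitance = 10e-6   # 10 microfarads
voltage = 200.0       # volts, after boost-converting the 5 V USB supply

energy_joules = 0.5 * capacitance * voltage ** 2
print(f"{energy_joules:.3f} J per discharge")  # 0.200 J
```

A fraction of a joule delivered in microseconds onto data pins designed for a few volts is more than enough to destroy unprotected interface circuitry, and the stick repeats the charge/discharge cycle as long as it has power.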
On unprotected equipment, the device's makers say it will "instantly and permanently disable unprotected hardware."
You might be forgiven for thinking, "Well, why exactly?" The lesson here is simple enough: if a device has an exposed USB port -- such as a copy machine or even an airline entertainment system -- it can be used and abused, not just by hackers delivering malicious software, but also by electrical attacks like this one.
This device is clever: it's a three-digit combination lock that prevents a USB drive from being read. It's not going to keep out anyone serious, but is a great solution for the sort of casual security that most people need.
Most of us learned long ago not to run executable files from sketchy USB sticks. But old-fashioned USB hygiene can't stop this newer flavor of infection: Even if users are aware of the potential for attacks, ensuring that their USB's firmware hasn't been tampered with is nearly impossible. The devices don't have a restriction known as "code-signing," a countermeasure that would make sure any new code added to the device has the unforgeable cryptographic signature of its manufacturer. There's not even any trusted USB firmware to compare the code against.
The element of Nohl and Lell's research that elevates it above the average theoretical threat is the notion that the infection can travel both from computer to USB and vice versa. Any time a USB stick is plugged into a computer, its firmware could be reprogrammed by malware on that PC, with no easy way for the USB device's owner to detect it. And likewise, any USB device could silently infect a user's computer.
These are exactly the sorts of attacks the NSA favors.
Nice article on the Tails stateless operating system. I use it. Initially I would boot my regular computer with Tails on a USB stick, but I went out and bought a remaindered computer from Best Buy for $250 and now use that.