
Schneier on Security: Blog Entries Tagged security engineering


Entries Tagged “security engineering”


Security Vulnerabilities in Cell Phone Systems

Good essay on the inherent vulnerabilities in the cell phone standards and the market barriers to fixing them.

So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to "be forthright with federal courts about the disruptive nature of cell-site simulators." No response has ever been published.

The lack of action could be because it is a big task -- there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.

As it stands, there is no government agency that has the power, funding and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.

Posted on January 10, 2019 at 5:52 AM

New IoT Security Regulations

Due to ever-evolving technological advances, manufacturers are connecting consumer goods -- from toys to light bulbs to major appliances -- to the Internet at breakneck speeds. This is the Internet of Things, and it's a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon's Alexa, which not only answers questions and plays music but allows you to control your home's lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But like nearly all innovation, there are risks involved. And for products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner -- cars, pacemakers, thermostats -- the risks include loss of life and property.

Manufacturers could avoid many of these hacks by developing more advanced security features and building them into their products. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or "IoT," devices sold in the state -- and the effects will soon be felt worldwide.

California's new SB 327 law, which will take effect in January 2020, requires all "connected devices" to have a "reasonable security feature." The good news is that the term "connected devices" is broadly defined to include just about everything connected to the Internet. The not-so-good news is that "reasonable security" remains so vaguely defined that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California's attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There's just one specific in the law that's not subject to the attorney general's interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it's just one of dozens of awful "security" measures commonly found in IoT devices.
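The alternative the law points toward -- a unique credential for every unit, rather than one shared default -- takes only a few lines to sketch. This is a hypothetical provisioning step, not any vendor's actual process; the function name is illustrative:

```python
import secrets
import string

def provision_device_password(length: int = 16) -> str:
    """Generate a unique random password for each device at manufacture,
    instead of shipping every unit with the same default credential."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Every device leaving the factory gets its own credential,
# so there is no shared default for an attacker to look up:
pw_a = provision_device_password()
pw_b = provision_device_password()
```

The marginal cost is printing the generated credential on each unit's label, which is why banning defaults is such an easy minimum standard.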

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us -- in the United States or elsewhere -- are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won't make sense to have two versions: one for California and another for everywhere else. It's much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you've read and agreed to the website's privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR -- people physically in the European Union, and EU citizens wherever they are -- and those who are not. It's easier to extend the protection to everyone.
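Why the sorting is tricky can be shown in a toy classifier. This is a hypothetical function, not anyone's real geolocation code; the country set is abbreviated for illustration:

```python
def subject_to_gdpr(ip_country, citizenship=None):
    """GDPR covers people physically in the EU and EU citizens anywhere.
    A website can usually geolocate an IP address, but it almost never
    knows a visitor's citizenship, so this check is unreliable by design."""
    EU = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}  # abbreviated
    return ip_country in EU or citizenship in EU

# An EU citizen browsing from the US is covered, but looks uncovered:
print(subject_to_gdpr("DE"))                    # True: IP is in the EU
print(subject_to_gdpr("US"))                    # False: misclassified if a citizen
print(subject_to_gdpr("US", citizenship="FR"))  # True, but sites rarely know this
```

Since the citizenship input is almost always missing, returning True for every visitor is the cheaper engineering choice -- which is why the pop-ups appear worldwide.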

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can't, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don't have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices out on the market quickly and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there's a market for new ideas and new products. We've seen this pattern again and again in safety and security engineering, and we'll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it's heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on

Posted on November 13, 2018 at 7:04 AM

Consumer Reports Reviews Wireless Home-Security Cameras

Consumer Reports is starting to evaluate the security of IoT devices. As part of that, it's reviewing wireless home-security cameras.

It found significant security vulnerabilities in D-Link cameras:

In contrast, D-Link doesn't store video from the DCS-2630L in the cloud. Instead, the camera has its own, onboard web server, which can deliver video to the user in different ways.

Users can view the video using an app, mydlink Lite. The video is encrypted, and it travels from the camera through D-Link's corporate servers, and ultimately to the user's phone. Users can also access the same encrypted video feed through a company web page. Those are both secure methods of accessing the video.

But the D-Link camera also lets you bypass the D-Link corporate servers and access the video directly through a web browser on a laptop or other device. If you do this, the web server on the camera doesn't encrypt the video.

If you set up this kind of remote access, the camera and its unencrypted video are open to the web. They could be discovered by anyone who finds or guesses the camera's IP address -- and if you haven't set a strong password, a hacker might find it easy to gain access.
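The distinction the reviewers describe comes down to the URL scheme the viewer uses. A minimal sketch of that check, with hypothetical example URLs:

```python
from urllib.parse import urlparse

def endpoint_is_encrypted(url):
    """Return True only when the video endpoint uses an encrypted scheme."""
    return urlparse(url).scheme.lower() in {"https", "rtsps"}

# The app and portal paths travel encrypted; direct browser access does not:
print(endpoint_is_encrypted("https://portal.example.com/camera/feed"))  # True
print(endpoint_is_encrypted("http://203.0.113.7/video.cgi"))            # False
```

A camera that refused to serve video over an unencrypted scheme in the first place would close this hole regardless of how the user configures remote access.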

The real news is that Consumer Reports is able to put pressure on device manufacturers:

In response to a Consumer Reports query, D-Link said that security would be tightened through updates this fall. Consumer Reports will evaluate those updates once they are available.

This is the sort of sustained pressure we need on IoT device manufacturers.

Boing Boing link.

EDITED TO ADD (11/13): In related news, the US Federal Trade Commission is suing D-Link because their routers are so insecure. The lawsuit was filed in January 2017.

Posted on November 7, 2018 at 6:39 AM

Security of Solid-State-Drive Encryption

Interesting research: "Self-encrypting deception: weaknesses in the encryption of solid state drives (SSDs)":

Abstract: We have analyzed the hardware full-disk encryption of several SSDs by reverse engineering their firmware. In theory, the security guarantees offered by hardware encryption are similar to or better than software implementations. In reality, we found that many hardware implementations have critical security weaknesses, for many models allowing for complete recovery of the data without knowledge of any secret. BitLocker, the encryption software built into Microsoft Windows, will rely exclusively on hardware full-disk encryption if the SSD advertises support for it. Thus, for these drives, data protected by BitLocker is also compromised. This challenges the view that hardware encryption is preferable over software encryption. We conclude that one should not rely solely on hardware encryption offered by SSDs.
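The delegation the abstract describes can be reduced to a single branch. This is a toy model of the trust decision, not Microsoft's actual code:

```python
def effective_encryption(drive_advertises_hw_support):
    """Toy model of the BitLocker delegation described above: if the SSD
    claims hardware encryption support, BitLocker relies on it exclusively,
    so the data's security rests entirely on the drive's firmware."""
    if drive_advertises_hw_support:
        return "hardware"  # flawed firmware => data recoverable without any secret
    return "software"      # OS-side AES, independent of the drive

print(effective_encryption(True))   # hardware
print(effective_encryption(False))  # software
```

The researchers' point is that the left branch is taken on the drive's say-so, with no way for the OS to audit whether the firmware's encryption is sound.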

EDITED TO ADD: The NSA is known to attack firmware of SSDs.

EDITED TO ADD (11/13): CERT advisory. And older research.

Posted on November 6, 2018 at 6:51 AM

Buying Used Voting Machines on eBay

This is not surprising:

This year, I bought two more machines to see if security had improved. To my dismay, I discovered that the newer model machines -- those that were used in the 2016 election -- are running Windows CE and have USB ports, along with other components, that make them even easier to exploit than the older ones. Our voting machines, billed as "next generation," and still in use today, are worse than they were before -- dispersed, disorganized, and susceptible to manipulation.

Cory Doctorow's comment is correct:

Voting machines are terrible in every way: the companies that make them lie like crazy about their security, insist on insecure designs, and produce machines that are so insecure that it's easier to hack a voting machine than it is to use it to vote.

I blame both the secrecy of the industry and the ignorance of most voting officials. And it's not getting better.

Posted on November 1, 2018 at 6:18 AM

Chinese Supply Chain Hardware Attack

Bloomberg is reporting about a Chinese espionage operation involving inserting a tiny chip into computer products made in China.

I've written about this threat more generally. Supply-chain security is an insurmountably hard problem. Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product. No one wants to even think about a US-only anything; prices would multiply many times over.

We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.

EDITED TO ADD: Apple, Amazon, and others are denying that this attack is real. Stay tuned for more information.

EDITED TO ADD (9/6): TheGrugq comments. Bottom line is that we still don't know. I think that precisely exemplifies the greater problem.

EDITED TO ADD (10/7): Both the US Department of Homeland Security and the UK National Cyber Security Centre claim to believe the tech companies. Bloomberg is standing by its story. Nicholas Weaver writes that the story is plausible.

Posted on October 4, 2018 at 11:30 AM

Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer

Of course the ESS ExpressVote voting computer will have lots of security vulnerabilities. It's a computer, and computers have lots of vulnerabilities. This particular vulnerability is particularly interesting because it's the result of a security mistake in the design process. Someone didn't think the security through, and the result is a voter-verifiable paper audit trail that doesn't provide the security it promises.

Here are the details:

Now there's an even worse option than "DRE with paper trail"; I call it the "press this button if it's OK for the machine to cheat" option. The country's biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are "combination" machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas. The voter brings a blank ballot to the machine, inserts it into a slot, chooses candidates. Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect. If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself). It still suffers from the problems I describe above: voter may not carefully review all the choices, especially in down-ballot races; counties need to buy a lot more voting machines, because voters occupy the machine for a long time (in contrast to op-scan ballots, where they occupy a cheap cardboard privacy screen).

But here's the amazingly bad feature: "The version that we have has an option for both ways," [Johnson County Election Commissioner Ronnie] Metsker said. "We instruct the voters to print their ballots so that they can review their paper ballots, but they're not required to do so. If they want to press the button 'cast ballot,' it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine." [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it's easy for a hacked machine to cheat undetectably! All the fraudulent vote-counting program has to do is wait until the voter chooses between "cast ballot without inspecting" and "inspect ballot before casting." If the latter, then don't cheat on this ballot. If the former, then change votes however it likes, and print those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.
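The cheating logic really is that simple. A sketch of the attack in Python -- hypothetical names, illustrating the flaw, not any real firmware:

```python
def machine_records(voter_choices, voter_will_inspect, attacker_choices):
    """Sketch of why optional verification defeats the paper trail:
    a compromised machine only cheats on ballots the voter has
    already declined to inspect, so the fraud is never observable."""
    if voter_will_inspect:
        return voter_choices    # print honestly; the voter might check the paper
    return attacker_choices     # no one will ever compare paper to intent

print(machine_records(["Alice"], True,  ["Bob"]))   # ['Alice']
print(machine_records(["Alice"], False, ["Bob"]))   # ['Bob']
```

Because the machine learns the voter's decision before printing, even a post-election audit of the paper ballots would confirm the fraudulent totals.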

A voter-verifiable paper audit trail does not require every voter to verify the paper ballot. But it does require that every voter be able to verify the paper ballot. I am continuously amazed by how bad electronic voting machines are. Yes, they're computers. But they also seem to be designed by people who don't understand computer (or any) security.

Posted on September 20, 2018 at 6:45 AM



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.