
Schneier on Security: Blog Entries Tagged laws



Entries Tagged “laws”


Cabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, which is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."

[...]

Many groups have raised concerns, including media organisations, which say the proposed laws unfairly target journalists trying to do their jobs.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That could happen if, for instance, you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets its funding from the government, and was very restrained in what it published. It waited months before publishing as it coordinated with the Australian government. It allowed the government to secure the files, and then returned them. From the government's perspective, it was the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time the documents will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories, with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

Posted on February 7, 2018 at 6:19 AM

A Framework for Cyber Security Insurance

New paper: "Policy measures and cyber insurance: a framework," by Daniel Woods and Andrew Simpson, Journal of Cyber Policy, 2017.

Abstract: The role of the insurance industry in driving improvements in cyber security has been identified as mutually beneficial for both insurers and policy-makers. To date, there has been no consideration of the roles governments and the insurance industry should pursue in support of this public-private partnership. This paper rectifies this omission and presents a framework to help underpin such a partnership, giving particular consideration to possible government interventions that might affect the cyber insurance market. We have undertaken a qualitative analysis of reports published by policy-making institutions and organisations working in the cyber insurance domain; we have also conducted interviews with cyber insurance professionals. Together, these constitute a stakeholder analysis upon which we build our framework. In addition, we present a research roadmap to demonstrate how the ideas described might be taken forward.

Posted on August 30, 2017 at 1:22 PM

Surveillance Intermediaries

Interesting law-journal article: "Surveillance Intermediaries," by Alan Z. Rozenshtein.

Abstract: Apple's 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by "surveillance intermediaries," the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.

Surveillance intermediaries have the financial and ideological incentives to resist government requests for user data. Their techniques of resistance are: proceduralism and litigiousness that reject voluntary cooperation in favor of minimal compliance and aggressive litigation; technological unilateralism that designs products and services to make surveillance harder; and policy mobilization that rallies legislative and public opinion to limit surveillance. Surveillance intermediaries also enhance the "surveillance separation of powers"; they make the surveillance executive more subject to inter-branch constraints from Congress and the courts, and to intra-branch constraints from foreign-relations and economics agencies as well as the surveillance executive's own surveillance-limiting components.

The normative implications of this descriptive account are important and cross-cutting. Surveillance intermediaries can both improve and worsen the "surveillance frontier": the set of tradeoffs between public safety, privacy, and economic growth from which we choose surveillance policy. And while intermediaries enhance surveillance self-government when they mobilize public opinion and strengthen the surveillance separation of powers, they undermine it when their unilateral technological changes prevent the government from exercising its lawful surveillance authorities.

Posted on June 7, 2017 at 6:19 AM

Election Security

It's over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can't ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter's choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc system in states for collecting and tabulating individual voting results is vulnerable as well. While the difference between theoretical, if demonstrable, vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. Not just presidential elections are at risk, but state and local elections, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and "solutions" to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here's my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it's a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There's no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and postelection security audits to increase confidence in the system.
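The audit half of that recommendation is easy to sketch in code. The toy example below is a minimal sketch with made-up data structures, not how any state actually runs its audits (real risk-limiting audits choose sample sizes statistically); it simply hand-recounts a random sample of precincts and flags any whose paper ballots disagree with the machine-reported totals.

```python
import random

def audit_precincts(machine_counts, paper_ballots, sample_size, seed=0):
    """Toy post-election audit: hand-recount a random sample of precincts
    and flag any whose paper ballots disagree with the machine totals.

    machine_counts: {precinct: {candidate: reported_total}}
    paper_ballots:  {precinct: [candidate marked on each paper ballot]}
    """
    rng = random.Random(seed)
    sampled = rng.sample(sorted(machine_counts), sample_size)
    discrepancies = {}
    for precinct in sampled:
        hand_count = {}
        for ballot in paper_ballots[precinct]:
            hand_count[ballot] = hand_count.get(ballot, 0) + 1
        if hand_count != machine_counts[precinct]:
            discrepancies[precinct] = (machine_counts[precinct], hand_count)
    return discrepancies  # empty dict: the sample agrees with the reported totals
```

An empty result doesn't prove the election was clean; it only means the sampled paper record matches what the machines reported, which is exactly the kind of public confidence the essay argues audits should provide.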

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow--both technical procedures to figure out what happened, and legal procedures to figure out what to do--that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states -- Indiana, in particular -- set up a "war room" of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser--and all the supporters--that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be shown to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.

Posted on November 15, 2016 at 7:09 AM

Regulation of the Internet of Things

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as they were of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the "Internet of Things" and increased regulation of what are now critical and life-threatening technologies. It's no longer a question of if, it's a question of when.

First, the facts. Those websites went down because their domain name provider — a company named Dyn — was forced offline. We don't know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers — possibly millions — of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.
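The dependency chain is worth spelling out: the affected sites' own servers stayed up, but once Dyn could no longer answer DNS queries, browsers had no way to learn which addresses to connect to. A minimal sketch of that first, fragile step (standard library only; the hostname is just a placeholder):

```python
import socket

def can_reach(hostname, port=443, timeout=3):
    """Return True if the name resolves and a TCP connection succeeds."""
    try:
        # Step 1: DNS resolution. This is what failed for sites served by Dyn
        # during the attack, even though their web servers were healthy.
        infos = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # the name cannot be resolved: the site is effectively down
    # Step 2: only after resolution can we even attempt a connection.
    for family, socktype, proto, _canon, addr in infos:
        try:
            with socket.create_connection(addr[:2], timeout=timeout):
                return True
        except OSError:
            continue
    return False

if __name__ == "__main__":
    print(can_reach("example.com"))  # placeholder hostname
```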

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you've never heard of to consumers who don't care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they're things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don't have the security expertise we've come to expect from the major computer and smartphone manufacturers, simply because the market won't stand for the additional costs that would require. These devices don't get security updates like our more expensive computers, and many don't even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don't care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after being recruited into this botnet, the devices still work fine — you can't even tell they were used in the attack. The sellers of those devices don't care: They've already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It's a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They've demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They've hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers — that's every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we've been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it's unlikely to be well-considered and thoughtful action. Our choice isn't between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they're coming. We can't afford to ignore these issues until it's too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn't matter — it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won't get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Posted on November 10, 2016 at 6:06 AM

Arresting People for Walking Away from Airport Security

A proposed law in Albany, NY, would make it a crime to walk away from airport screening.

Aside from wondering why county lawmakers are getting involved with what should be national policy, you have to ask: what are these people thinking?

They're thinking in stories, of course. They have a movie plot in their heads, and they are imagining how this measure solves it.

The law is intended to cover what Albany County Sheriff Craig Apple described as a soft spot in the current system that allows passengers to walk away without boarding their flights if security staff flags them for additional scrutiny.

That could include would-be terrorists probing for weaknesses, Apple said, adding that his deputies currently have no legal grounds to question such a person.

Does anyone have any idea what stories these people have in their heads? What sorts of security weaknesses are exposed by walking up to airport security and then walking away?

Posted on May 31, 2016 at 6:35 AM

Julian Sanchez on the Feinstein-Burr Bill

Two excellent posts.

It's such a badly written bill that I wonder if it's just there to anchor us to an extreme, so we're relieved when the actual bill comes along. Me:

"This is the most braindead piece of legislation I've ever seen," Schneier -- who has just been appointed a Fellow of the Kennedy School of Government at Harvard -- told The Reg. "The person who wrote this either has no idea how technology works or just doesn't care."

Posted on May 3, 2016 at 1:10 PM

Data Is a Toxic Asset

Thefts of personal information aren't unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it's information that's needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it's stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it's personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that's been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including £60 million (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I'm not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase "big data" refers to the idea that large databases of seemingly random data about people are valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because saving all this data is so cheap, there's no reason not to save as much as possible, and save it all forever. Figuring out what isn't worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it's highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with -- including who we spend the night with.
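To see how little analysis that correlation takes, here is a toy sketch. The data format and thresholds are hypothetical, and real analyses are far more careful, but the idea is just this: count how often two phones report being in the same place at the same time.

```python
import math
from datetime import timedelta

def rough_km(lat1, lon1, lat2, lon2):
    """Approximate distance in kilometers; good enough at city scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371 * math.hypot(x, y)

def co_locations(trace_a, trace_b, max_minutes=30, max_km=0.2):
    """Count co-occurrences between two location traces.

    Each trace is a list of (datetime, latitude, longitude) tuples,
    the kind of record a phone or app routinely stores.
    """
    window = timedelta(minutes=max_minutes)
    hits = 0
    for t1, lat1, lon1 in trace_a:
        for t2, lat2, lon2 in trace_b:
            if abs(t1 - t2) <= window and rough_km(lat1, lon1, lat2, lon2) <= max_km:
                hits += 1
                break  # count each point in trace_a at most once
    return hits
```

Given two people's traces, a count concentrated in the late-night hours answers the "who we spend the night with" question with no cleverness required.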

Our Internet search data reveals what's important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that's why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it's one of the assets that gets sold.

Saving it is dangerous because it's hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company's profits, reduce its market share, hurt its stock price, cause it public embarrassment, and -- in some cases -- result in expensive lawsuits and, occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company's computers and networks. The data is vulnerable, and the company is vulnerable. It's vulnerable to hackers and governments. It's vulnerable to employee error. And when there's a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we're in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn't necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don't realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there's certainly a lot that companies can do technically to better secure the data they hold about all of us, there's no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there's a downside to taking chances. We can make the business models that involve massively surveilling people the less compelling ones, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers' real names and credit card numbers. It didn't have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.

To be sure, it would have been a different company. It would have had less revenue, because it couldn't charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.
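The "charge the card, grant access, keep nothing identifying" design is simple enough to sketch. This is a hypothetical illustration of the idea, not a description of any real company's payment flow; the random account ID doubles as the only credential, which is exactly the lost-password tradeoff noted above.

```python
import hashlib
import secrets

def register(card_number, charge_card, accounts):
    """Toy data-minimization flow: charge the card, then keep only an
    unlinkable credential. charge_card() stands in for a payment processor.
    """
    if not charge_card(card_number):
        raise ValueError("payment declined")
    # High-entropy random ID handed back to the user; it is the only credential.
    account_id = secrets.token_urlsafe(16)
    # Store only a hash of the ID, so the database on its own identifies no one
    # and cannot be used to impersonate anyone.
    accounts[hashlib.sha256(account_id.encode()).hexdigest()] = {"paid": True}
    return account_id  # the card number is never written to storage

def login(account_id, accounts):
    return hashlib.sha256(account_id.encode()).hexdigest() in accounts
```

A breach of the accounts store would leak nothing but anonymous hashes, which is the point of treating identifying data as toxic and declining to keep it.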

Similarly, the Office of Personnel Management didn't have to store everyone's information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn't be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on CNN.com.

Posted on March 4, 2016 at 5:32 AM

Using Law against Technology

On Thursday, a Brazilian judge ordered the text messaging service WhatsApp shut down for 48 hours. It was a monumental action.

WhatsApp is the most popular app in Brazil, used by about 100 million people. The Brazilian telecoms hate the service because it entices people away from more expensive text messaging services, and they have been lobbying for months to convince the government that it's unregulated and illegal. A judge finally agreed.

In Brazil's case, WhatsApp was blocked for allegedly failing to respond to a court order. Another judge reversed the ban 12 hours later, but there is a pattern forming here. In Egypt, Vodafone has complained about the legality of WhatsApp's free voice-calls, while India's telecoms firms have been lobbying hard to curb messaging apps such as WhatsApp and Viber. Earlier this year, the United Arab Emirates blocked WhatsApp's free voice call feature.

All this is part of a massive power struggle going on right now between traditional companies and new Internet companies, and we're all in the blast radius.

It's one aspect of a tech policy problem that has been plaguing us for at least 25 years: technologists and policymakers don't understand each other, and they inflict damage on society because of that. But it's worse today. The speed of technological progress makes it worse. And the types of technology -- especially the current Internet of mobile devices everywhere, cloud computing, always-on connections and the Internet of Things -- make it worse.

The Internet has been disrupting and destroying long-standing business models since its popularization in the mid-1990s. And traditional industries have long fought back with every tool at their disposal. The movie and music industries have tried for decades to hamstring computers in an effort to prevent illegal copying of their products. Publishers have battled with Google over whether their books could be indexed for online searching.

More recently, municipal taxi companies and large hotel chains are fighting with ride-sharing companies such as Uber and apartment-sharing companies such as Airbnb. Both the old companies and the new upstarts have tried to bend laws to their will in an effort to outmaneuver each other.

Sometimes the actions of these companies harm the users of these systems and services. And the results can seem crazy. Why would the Brazilian telecoms want to provoke the ire of almost everyone in the country? They're trying to protect their monopoly. If they succeed in shutting down not just WhatsApp but also Telegram and all the other text-message services, their customers will have no choice. That's how high the stakes in these battles can be.

This isn't just companies competing in the marketplace. These are battles between competing visions of how technology should apply to business, and between traditional businesses and "disruptive" new ones. The fundamental problem is that technology and law are in conflict, and what's worked in the past is increasingly failing today.

First, the speeds of technology and law have reversed. Traditionally, new technologies were adopted slowly over decades. There was time for people to figure them out, and for their social repercussions to percolate through society. Legislatures and courts had time to figure out rules for these technologies and how they should integrate into the existing legal structures.

They don't always get it right -- the sad history of copyright law in the United States is an example of how they can get it badly wrong again and again -- but at least they had a chance before the technologies became widely adopted.

That's just not true anymore. A new technology can go from zero to a hundred million users in a year or less. That's just too fast for the political or legal process. By the time they're asked to make rules, these technologies are well-entrenched in society.

Second, the technologies have become more complicated and specialized. This means that the normal system of legislators passing laws, regulators making rules based on those laws and courts providing a second check on those rules fails. None of these people has the expertise necessary to understand these technologies, let alone the subtle and potentially pernicious ramifications of any rules they make.

We see the same dynamic between governments on one side and their law-enforcement agencies and militaries on the other. In the United States, we're expecting policymakers to understand the debate between the FBI's desire to read the encrypted e-mails and computers of crime suspects and the security researchers who maintain that giving them that capability will render everyone insecure. We're expecting legislators to provide meaningful oversight over the National Security Agency, when they can only read highly technical documents about the agency's activities in special rooms and without any aides who might be conversant in the issues.

The result is that we end up in situations such as the one Brazil finds itself in. WhatsApp went from zero to 100 million users in five years. The telecoms are advancing all sorts of weird legal arguments to get the service banned, and judges are ill-equipped to separate fact from fiction.

This isn't a simple matter of needing government to get out of the way and let companies battle in the marketplace. These companies are for-profit entities, and their business models are so complicated that they regularly don't do what's best for their users. (For example, remember that you're not really Facebook's customer. You're their product.)

The fact that people's resumes are effectively the first 10 hits on a Google search of their name is a problem -- something that the European "right to be forgotten" tried ham-fistedly to address. There's a lot of smart writing that says that Uber's disruption of traditional taxis will be worse for the people who regularly use the services. And many people worry about Amazon's increasing dominance of the publishing industry.

We need a better way of regulating new technologies.

That's going to require bridging the gap between technologists and policymakers. Each needs to understand the other -- not enough to be experts in each other's fields, but enough to engage in meaningful conversations and debates. That's also going to require laws that are agile and written to be as technologically invariant as possible.

It's a tall order, I know, and one that has been on the wish list of every tech policymaker for decades. But today, the stakes are higher and the issues come faster. Failing to bridge that gap will become increasingly harmful for all of us.

This essay originally appeared on CNN.com.

EDITED TO ADD (12/23): Slashdot thread.

Posted on December 23, 2015 at 6:48 AM

