

Schneier on Security


Entries Tagged “national security policy”


Cybersecurity for the Public Interest

The Crypto Wars have been raging off and on for a quarter-century. On one side is law enforcement, which wants to be able to break encryption in order to access the devices and communications of terrorists and criminals. On the other side is almost every cryptographer and computer security expert, repeatedly explaining that there's no way to provide this capability without also weakening the security of every user of those devices and communications systems.

It's an impassioned debate, acrimonious at times, but there are real technologies that can be brought to bear on the problem: key-escrow technologies, code obfuscation technologies, and backdoors with different properties. Pervasive surveillance capitalism -- as practiced by the Internet companies that are already spying on everyone -- matters. So do society's underlying security needs. There is a security benefit to giving law enforcement access, even though it would inevitably give that access to others as well. However, there is also a security benefit to having these systems protected from all attackers, including law enforcement. These benefits are mutually exclusive. Which is more important, and to what degree?
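
To make the trade-off concrete, here is a minimal sketch of the kind of key escrow being discussed, written in Python against the cryptography library; the key holders and message are illustrative, not any real proposal. The message key is wrapped twice, once to the recipient and once to an escrow agent, so anyone who obtains the escrow private key (law enforcement with a warrant, or an attacker who steals it) can recover the plaintext. That is exactly why the two security benefits above cannot coexist.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Hypothetical key holders: the intended recipient and an escrow agent.
    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow_agent = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def encrypt_with_escrow(plaintext: bytes):
        session_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
        # The same session key is now recoverable with EITHER private key.
        return (nonce, ciphertext,
                recipient.public_key().encrypt(session_key, OAEP),
                escrow_agent.public_key().encrypt(session_key, OAEP))

    nonce, ct, wrapped_user, wrapped_escrow = encrypt_with_escrow(b"hello")
    # The escrow agent -- or whoever compromises its key -- decrypts without the user:
    key = escrow_agent.decrypt(wrapped_escrow, OAEP)
    assert AESGCM(key).decrypt(nonce, ct, None) == b"hello"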

The problem is that almost no policymakers are discussing this policy issue from a technologically informed perspective, and very few technologists truly understand the policy contours of the debate. The result is both sides consistently talking past each other, and policy proposals -- that occasionally become law -- that are technological disasters.

This isn't sustainable, either for this issue or any of the other policy issues surrounding Internet security. We need policymakers who understand technology, but we also need cybersecurity technologists who understand -- and are involved in -- policy. We need public-interest technologists.

Let's pause at that term. The Ford Foundation defines public-interest technologists as "technology practitioners who focus on social justice, the common good, and/or the public interest." A group of academics recently wrote that public-interest technologists are people who "study the application of technology expertise to advance the public interest, generate public benefits, or promote the public good." Tim Berners-Lee has called them "philosophical engineers." I think of public-interest technologists as people who combine their technological expertise with a public-interest focus: by working on tech policy, by working on a tech project with a public benefit, or by working as a traditional technologist for an organization with a public benefit. Maybe it's not the best term -- and I know not everyone likes it -- but it's a decent umbrella term that can encompass all these roles.

We need public-interest technologists in policy discussions. We need them on congressional staffs, in federal agencies, at non-governmental organizations (NGOs), in academia, inside companies, and as part of the press. In our field, we need them to get involved not only in the Crypto Wars, but everywhere cybersecurity and policy touch each other: the vulnerability equities debate, election security, cryptocurrency policy, Internet of Things safety and security, big data, algorithmic fairness, adversarial machine learning, critical infrastructure, and national security. When you broaden the definition of Internet security, many additional areas fall within the intersection of cybersecurity and policy. Our particular expertise and way of looking at the world are critical for understanding a great many technological issues, such as net neutrality and the regulation of critical infrastructure. I wouldn't want to formulate public policy about artificial intelligence and robotics without a security technologist involved.

Public-interest technology isn't new. Many organizations are working in this area, from older organizations like EFF and EPIC to newer ones like Verified Voting and Access Now. Many academic classes and programs combine technology and public policy. My cybersecurity policy class at the Harvard Kennedy School is just one example. Media startups like The Markup are doing technology-driven journalism. There are even programs and initiatives related to public-interest technology inside for-profit corporations.

This might all seem like a lot, but it's really not. There aren't enough people doing it, there aren't enough people who know it needs to be done, and there aren't enough places to do it. We need to build a world where there is a viable career path for public-interest technologists.

There are many barriers. There's a report titled A Pivotal Moment that includes this quote: "While we cite individual instances of visionary leadership and successful deployment of technology skill for the public interest, there was a consensus that a stubborn cycle of inadequate supply, misarticulated demand, and an inefficient marketplace stymie progress."

That quote speaks to the three places for intervention. One: the supply side. There just isn't enough talent to meet the eventual demand. This is especially acute in cybersecurity, which has a talent problem across the field. Public-interest technologists are a diverse and multidisciplinary group of people. Their backgrounds come from technology, policy, and law. We also need to foster diversity within public-interest technology; the populations using the technology must be represented in the groups that shape the technology. We need a variety of ways for people to engage in this sphere: ways people can do it on the side, for a couple of years between more traditional technology jobs, or as a full-time rewarding career. We need public-interest technology to be part of every core computer-science curriculum, with "clinics" at universities where students can get a taste of public-interest work. We need technology companies to give people sabbaticals to do this work, and then value what they've learned and done.

Two: the demand side. This is our biggest problem right now; not enough organizations understand that they need technologists doing public-interest work. We need jobs to be funded across a wide variety of NGOs. We need staff positions throughout the government: executive, legislative, and judiciary branches. President Obama's US Digital Service should be expanded and replicated; so should Code for America. We need more press organizations that perform this kind of work.

Three: the marketplace. We need job boards, conferences, and skills exchanges -- places where people on the supply side can learn about the demand.

Major foundations are starting to provide funding in this space: the Ford and MacArthur Foundations in particular, but others as well.

This problem in our field has an interesting parallel with the field of public-interest law. In the 1960s, there was no such thing as public-interest law. The field was deliberately created, funded by organizations like the Ford Foundation. They financed legal aid clinics at universities, so students could learn housing, discrimination, or immigration law. They funded fellowships at organizations like the ACLU and the NAACP. They created a world where public-interest law is valued, where all the partners at major law firms are expected to have done some public-interest work. Today, when the ACLU advertises for a staff attorney, paying one-third to one-tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School graduates go into public-interest law, and the school has soul-searching seminars because that percentage is so low. Meanwhile, the percentage of computer-science graduates going into public-interest work is basically zero.

This is bigger than computer security. Technology now permeates society in a way it didn't just a couple of decades ago, and governments move too slowly to take this into account. That means technologists are now relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, the future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There's a pervasive myth in Silicon Valley that technology is politically neutral. It's not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn't matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: "What should be governed by the state, and what should be governed by the market?" This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: "How much of our lives should be governed by technology, and under what terms?" In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

This essay previously appeared in the January/February issue of IEEE Security & Privacy.

Together with the Ford Foundation, I am hosting a one-day mini-track on public-interest technologists at the RSA Conference this week on Thursday. We've had some press coverage.

EDITED TO ADD (3/7): More news articles.

Posted on March 5, 2019 at 6:31 AM

Gen. Nakasone on US Cyber Command

Really interesting article by and interview with Paul M. Nakasone (Commander of US Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) in the current issue of Joint Forces Quarterly. He talks about the evolving role of US Cyber Command, and its new posture of "persistent engagement" using a "cyber-persistent force."

From the article:

We must "defend forward" in cyberspace, as we do in the physical domains. Our naval forces do not defend by staying in port, and our airpower does not remain at airfields. They patrol the seas and skies to ensure they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace. Persistent engagement of our adversaries in cyberspace cannot be successful if our actions are limited to DOD networks. To defend critical military and national interests, our forces must operate against our enemies on their virtual territory as well. Shifting from a response outlook to a persistence force that defends forward moves our cyber capabilities out of their virtual garrisons, adopting a posture that matches the cyberspace operational environment.

From the interview:

As we think about cyberspace, we should agree on a few foundational concepts. First, our nation is in constant contact with its adversaries; we're not waiting for adversaries to come to us. Our adversaries understand this, and they are always working to improve that contact. Second, our security is challenged in cyberspace. We have to actively defend; we have to conduct reconnaissance; we have to understand where our adversary is and his capabilities; and we have to understand their intent. Third, superiority in cyberspace is temporary; we may achieve it for a period of time, but it's ephemeral. That's why we must operate continuously to seize and maintain the initiative in the face of persistent threats. Why do the threats persist in cyberspace? They persist because the barriers to entry are low and the capabilities are rapidly available and can be easily repurposed. Fourth, in this domain, the advantage favors those who have initiative. If we want to have an advantage in cyberspace, we have to actively work to either improve our defenses, create new accesses, or upgrade our capabilities. This is a domain that requires constant action because we're going to get reactions from our adversary.

[...]

Persistent engagement is the concept that states we are in constant contact with our adversaries in cyberspace, and success is determined by how we enable and act. In persistent engagement, we enable other interagency partners. Whether it's the FBI or DHS, we enable them with information or intelligence to share with elements of the CIKR [critical infrastructure and key resources] or with select private-sector companies. The recent midterm elections is an example of how we enabled our partners. As part of the Russia Small Group, USCYBERCOM and the National Security Agency [NSA] enabled the FBI and DHS to prevent interference and influence operations aimed at our political processes. Enabling our partners is two-thirds of persistent engagement. The other third rests with our ability to act -- that is, how we act against our adversaries in cyberspace. Acting includes defending forward. How do we warn, how do we influence our adversaries, how do we position ourselves in case we have to achieve outcomes in the future? Acting is the concept of operating outside our borders, being outside our networks, to ensure that we understand what our adversaries are doing. If we find ourselves defending inside our own networks, we have lost the initiative and the advantage.

[...]

The concept of persistent engagement has to be teamed with "persistent presence" and "persistent innovation." Persistent presence is what the Intelligence Community is able to provide us to better understand and track our adversaries in cyberspace. The other piece is persistent innovation. In the last couple of years, we have learned that capabilities rapidly change; accesses are tenuous; and tools, techniques, and tradecraft must evolve to keep pace with our adversaries. We rely on operational structures that are enabled with the rapid development of capabilities. Let me offer an example regarding the need for rapid change in technologies. Compare the air and cyberspace domains. Weapons like JDAMs [Joint Direct Attack Munitions] are an important armament for air operations. How long are those JDAMs good for? Perhaps 5, 10, or 15 years, sometimes longer given the adversary. When we buy a capability or tool for cyberspace...we rarely get a prolonged use we can measure in years. Our capabilities rarely last 6 months, let alone 6 years. This is a big difference in two important domains of future conflict. Thus, we will need formations that have ready access to developers.

Solely from a military perspective, these are obviously the right things to be doing. From a societal perspective -- from the perspective of a potential arms race -- I'm much less sure. I'm also worried about the singular focus on nation-state actors in an environment where capabilities diffuse so quickly. But Cyber Command's job is not cybersecurity and resilience.

The whole thing is worth reading, regardless of whether you agree or disagree.

EDITED TO ADD (2/26): As an example, US Cyber Command disrupted a Russian troll farm during the 2018 midterm elections.

Posted on February 22, 2019 at 5:35 AM

Public-Interest Tech at the RSA Conference

Our work in cybersecurity is inexorably intertwined with public policy and -- more generally -- the public interest. It's obvious in the debates on encryption and vulnerability disclosure, but it's also part of the policy discussions about the Internet of Things, cryptocurrencies, artificial intelligence, social media platforms, and pretty much everything else related to IT.

This societal dimension to our traditionally technical area is bringing with it a need for public-interest technologists.

Defining this term is difficult. One blog post described public-interest technologists as "technology practitioners who focus on social justice, the common good, and/or the public interest." A group of academics in this field wrote that "public-interest technology refers to the study and application of technology expertise to advance the public interest/generate public benefits/promote the public good."

I think of public-interest technologists as people who combine their technological expertise with a public-interest focus, either by working on tech policy (for the EFF or as a congressional staffer, as examples), working on a technology project with a public benefit (such as Tor or Signal), or working as a more traditional technologist for an organization with a public-interest focus (providing IT security for Human Rights Watch, as an example). Public-interest technology isn't one thing; it's many things. And not everyone likes the term. Maybe it's not the most accurate term for what different people do, but it's the best umbrella term that covers everyone.

It's a growing field -- one far broader than cybersecurity -- and one that I am increasingly focusing my time on. I maintain a resources page for public-interest technology. (This is the single best document to read about the current state of public-interest technology, and what is still to be done.)

This year, I am bringing some of these ideas to the RSA Conference. In partnership with the Ford Foundation, I am hosting a mini-track on public-interest technology. Six sessions throughout the day on Thursday will highlight different aspects of this important work. We'll look at public-interest technologists inside governments, as part of civil society, at universities, and in corporate environments.

  1. How Public-Interest Technologists Are Changing the World. This introductory panel lays the groundwork for the day to come. I'll be joined on stage by Matt Mitchell of Tactical Tech, and we'll discuss how public-interest technologists are already changing the world.
  2. Public-Interest Tech in Silicon Valley. Most of us work for technology companies, and this panel discusses public-interest technology work within companies. Mitchell Baker of Mozilla Corp. and Cindy Cohn of the EFF will lead the discussion, looking at both public-interest projects within corporations and employee activism initiatives.
  3. Working in Civil Society. Bringing a technological perspective into civil society can transform how organizations do their work. Through a series of lightning talks, this session examines how this transformation can happen from a variety of perspectives: exposing government surveillance, protecting journalists worldwide, preserving a free and open Internet, bringing a security focus to artificial intelligence research, protecting NGO networks, and more. For those of us in security, bringing tech tools to those who need them is core to what we do.
  4. Government Needs You. Government needs technologists at all levels. We're needed on legislative staffs and at regulatory agencies in order to make effective tech policy, but we're also needed elsewhere to implement policy more broadly. We're needed to advise courts, testify at hearings, and serve on advisory committees. At this session, you'll hear from public-interest technologists who have had a major impact on government from a variety of positions, and learn about ways you can get involved.
  5. Changing Academia. Higher education needs to incorporate a public-interest perspective in technology departments, and a technology perspective in public-policy departments. This could look like ethics courses for computer science majors, programming for law students, or joint degrees that combine technology and social science. Danny Weitzner of MIT and Latanya Sweeney of Harvard will discuss efforts to build these sorts of interdisciplinary classes, programs, and institutes.
  6. The Future of Public-Interest Tech. Creating an environment where public-interest technology can flourish will require a robust pipeline: more people wanting to go into this field, more places for them to go, and an improved market that matches supply with demand. In this closing session, Jenny Toomey of the Ford Foundation and I will sum up the day and discuss future directions: growing the field, funding trajectories, outstanding needs and gaps, and ways you can get involved.

Check here for times and locations, and be sure to reserve your seat.

We all need to help. I don't mean that we all need to quit our jobs and go work on legislative staffs; there's a lot we can do while still maintaining our existing careers. We can advise governments and other public-interest organizations. We can agitate for the public interest inside the corporations we work for. We can speak at conferences and write opinion pieces for publication. We can teach part-time at all levels. But some of us will need to do this full-time.

There's an interesting parallel to public-interest law, which covers everything from human-rights lawyers to public defenders. In the 1960s, that field didn't exist. The field was deliberately created, funded by organizations like the Ford Foundation. They created a world where public-interest law is valued. Today, when the ACLU advertises for a staff attorney, paying a third to a tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School grads go into public-interest law, while the percentage of computer science grads doing public-interest work is basically zero. This is what we need to fix.

Please stop in at my mini-track. Come for a panel that interests you, or stay for the whole day. Bring your ideas. Find me to talk about this further. Pretty much all the major policy debates of this century will have a strong technological component -- and an important cybersecurity angle -- and we all need to get involved.

This essay originally appeared on the RSA Conference blog.

Michael Brennan of the Ford Foundation also wrote an essay on the event.

Posted on February 1, 2019 at 9:48 AM

Security Vulnerabilities in Cell Phone Systems

Good essay on the inherent vulnerabilities in the cell phone standards and the market barriers to fixing them.

So far, industry and policymakers have largely dragged their feet when it comes to blocking cell-site simulators and SS7 attacks. Senator Ron Wyden, one of the few lawmakers vocal about this issue, sent a letter in August encouraging the Department of Justice to "be forthright with federal courts about the disruptive nature of cell-site simulators." No response has ever been published.

The lack of action could be because it is a big task -- there are hundreds of companies and international bodies involved in the cellular network. The other reason could be that intelligence and law enforcement agencies have a vested interest in exploiting these same vulnerabilities. But law enforcement has other effective tools that are unavailable to criminals and spies. For example, the police can work directly with phone companies, serving warrants and Title III wiretap orders. In the end, eliminating these vulnerabilities is just as valuable for law enforcement as it is for everyone else.

As it stands, there is no government agency that has the power, funding, and mission to fix the problems. Large companies such as AT&T, Verizon, Google and Apple have not been public about their efforts, if any exist.

Posted on January 10, 2019 at 5:52 AM

Congressional Report on the 2017 Equifax Data Breach

The US House of Representatives Committee on Oversight and Government Reform has just released a comprehensive report on the 2017 Equifax hack. It's a great piece of writing, with a detailed timeline, root cause analysis, and lessons learned. Lance Spitzner also commented on this.

Here is my testimony before the House Subcommittee on Digital Commerce and Consumer Protection last November.

Posted on December 19, 2018 at 6:00 AM

New IoT Security Regulations

Due to ever-evolving technological advances, manufacturers are connecting consumer goods -- from toys to light bulbs to major appliances -- to the Internet at breakneck speeds. This is the Internet of Things, and it's a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon's Alexa, which not only answers questions and plays music but allows you to control your home's lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But as with nearly all innovation, there are risks involved. And for products born of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner -- cars, pacemakers, thermostats -- the risks include loss of life and property.

Developing more advanced security features and building them into these products would prevent many of these hacks. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or "IoT," devices sold in the state -- and the effects will soon be felt worldwide.

California's new SB 327 law, which will take effect in January 2020, requires all "connected devices" to have a "reasonable security feature." The good news is that the term "connected devices" is broadly defined to include just about everything connected to the Internet. The not-so-good news is that "reasonable security" remains so vaguely defined that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California's attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There's just one specific in the law that's not subject to the attorney general's interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it's just one of dozens of awful "security" measures commonly found in IoT devices.
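
For illustration, here's a sketch in Python of one way a manufacturer could comply; the provisioning-record layout and names are hypothetical. Broadly speaking, the no-default-passwords rule can be met either with a preprogrammed password unique to each device or by forcing the user to set new credentials before first use. This sketch takes the first route: generate a random per-device password at manufacture, print it on the device label, and ship only a salted hash in firmware.

    import hashlib
    import secrets

    def provision_device(serial_number: str) -> dict:
        # Unique per-device password: printed on the device label,
        # never hard-coded into the firmware image.
        password = secrets.token_urlsafe(12)
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return {
            "serial": serial_number,
            "label_password": password,      # goes only to the label printer
            "salt": salt.hex(),
            "password_hash": digest.hex(),   # this is what ships in flash
        }

    print(provision_device("SN-000123")["serial"])

The point is structural: no two devices share a secret, so a leaked password compromises one device rather than the entire product line.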

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us -- in the United States or elsewhere -- are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won't make sense to have two versions: one for California and another for everywhere else. It's much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you've read and agreed to the website's privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR -- people physically in the European Union, and EU citizens wherever they are -- and those who are not. It's easier to extend the protection to everyone.
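
A toy sketch of why that sorting is hard, assuming the site has already resolved each visitor's IP address to a country code (the country set below is abbreviated for illustration):

    # An IP-based gate catches people physically in the EU, but misses EU
    # citizens abroad and is defeated by any VPN.
    EU = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}  # abbreviated

    def needs_gdpr_flow(ip_country: str) -> bool:
        return ip_country in EU

    print(needs_gdpr_flow("FR"))  # True: a visitor connecting from France
    print(needs_gdpr_flow("US"))  # False: an EU citizen visiting the US slips through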

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can't, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don't have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices to market quickly, and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there's a market for new ideas and new products. We've seen this pattern again and again in safety and security engineering, and we'll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it's heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

Posted on November 13, 2018 at 7:04 AM

The Pentagon Is Publishing Foreign Nation-State Malware

This is a new thing:

The Pentagon has suddenly started uploading malware samples from APTs and other nation-state sources to the website VirusTotal, which is essentially a malware zoo that's used by security pros and antivirus/malware detection engines to gain a better understanding of the threat landscape.

This feels like an example of the US's new strategy of actively harassing foreign government actors. By making their malware public, the US is forcing them to continually find and use new vulnerabilities.
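
As a rough sketch of how researchers consume these published samples, here's a lookup of a file hash against VirusTotal's v3 REST API; the hash is a placeholder, you need your own API key, and the exact response fields are subject to VirusTotal's current documentation.

    import os
    import requests

    sample_sha256 = "0" * 64  # placeholder -- substitute a real sample hash

    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sample_sha256}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("flagged malicious by", stats.get("malicious", 0), "engines")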

EDITED TO ADD (11/13): This is another good article. And here is some background on the malware.

Posted on November 9, 2018 at 1:52 PM


