This article claims that the Chinese People's Liberation Army was behind, among other things, the August 2003 blackout:
Computer hackers in China, including those working on behalf of the Chinese government and military, have penetrated deeply into the information systems of U.S. companies and government agencies, stolen proprietary information from American executives in advance of their business meetings in China, and, in a few cases, gained access to electric power plants in the United States, possibly triggering two recent and widespread blackouts in Florida and the Northeast, according to U.S. government officials and computer-security experts.
One prominent expert told National Journal he believes that China's People's Liberation Army played a role in the power outages. Tim Bennett, the former president of the Cyber Security Industry Alliance, a leading trade group, said that U.S. intelligence officials have told him that the PLA in 2003 gained access to a network that controlled electric power systems serving the northeastern United States. The intelligence officials said that forensic analysis had confirmed the source, Bennett said. "They said that, with confidence, it had been traced back to the PLA." These officials believe that the intrusion may have precipitated the largest blackout in North American history, which occurred in August of that year. A 9,300-square-mile area, touching Michigan, Ohio, New York, and parts of Canada, lost power; an estimated 50 million people were affected.
This is all so much nonsense I don't even know where to begin.
I wrote about this blackout already: the computer failures were caused by Blaster.
The "Interim Report: Causes of the August 14th Blackout in the United States and Canada," published in November and based on detailed research by a panel of government and industry officials, blames the blackout on an unlucky series of failures that allowed a small problem to cascade into an enormous failure.
The Blaster worm affected more than a million computers running Windows during the days after Aug. 11. The computers controlling power generation and delivery were insulated from the Internet, and they were unaffected by Blaster. But critical to the blackout were a series of alarm failures at FirstEnergy, a power company in Ohio. The report explains that the computer hosting the control room's "alarm and logging software" failed, along with the backup computer and several remote-control consoles. Because of these failures, FirstEnergy operators did not realize what was happening and were unable to contain the problem in time.
Simultaneously, another status computer, this one at the Midwest Independent Transmission System Operator, a regional agency that oversees power distribution, failed. According to the report, a technician tried to repair it and forgot to turn it back on when he went to lunch.
To be fair, the report does not blame Blaster for the blackout. I'm less convinced. The failure of computer after computer within the FirstEnergy network certainly could be a coincidence, but it looks to me like a malicious worm.
The rest of the National Journal article is filled with hysterics and hyperbole about Chinese hackers. I have already written an essay about this -- it'll be the next point/counterpoint between Marcus Ranum and me for Information Security -- and I'll publish it here after they publish it.
EDITED TO ADD (6/2): Wired debunked this claim pretty thoroughly:
This time, though, they've attached their tale to the most thoroughly investigated power incident in U.S. history. [...] It traced the root cause of the outage to the utility company FirstEnergy's failure to trim back trees encroaching on high-voltage power lines in Ohio. When the power lines were ensnared by the trees, they tripped.
So China...using the most devious malware ever devised, arranged for trees to grow up into exactly the right power lines at precisely the right time to trigger the cascade.
Large-scale power outages are never one thing. They're a small problem that cascades into a series of ever-bigger problems. But the triggering problem was those power lines.
"There is a stealth encryption chip called a TPM that is going on the motherboards of most of the computers that are coming out now," he pointed out
"What that says is that in the games business we will be able to encrypt with an absolutely verifiable private key in the encryption world -- which is uncrackable by people on the internet and by giving away passwords -- which will allow for a huge market to develop in some of the areas where piracy has been a real problem."
"TPM" stands for "Trusted Platform Module." It's a chip that is probably already in your computer and may someday be used to enforce security: both your security, and the security of software and media companies against you. The system is complicated, and while it will prevent some attacks, there are lots of ways to hack it. (I've written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)
On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following lines of code from md_rand.c:

    MD_Update(&m,buf,j);
    [ .. ]
    MD_Update(&m,buf,j);
These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only "random" value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.
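The consequence is easy to demonstrate. Here's a toy model in Python (illustrative only, not OpenSSL's actual code): if the process ID is the only entropy that reaches the seed, an attacker can enumerate the entire keyspace in seconds.

```python
import random

# Toy model of the Debian flaw (illustrative, not OpenSSL's actual code):
# if the process ID is the only entropy mixed into the PRNG seed, every
# key ever generated falls into a space of at most 32,768 possibilities.
MAX_PID = 32768

def toy_keygen(pid):
    """Stand-in for key generation: seed a PRNG with nothing but the PID."""
    rng = random.Random(pid)
    return rng.getrandbits(128)

# An attacker who sees one generated key can recover the seed by brute force.
victim_key = toy_keygen(12345)
recovered = next(pid for pid in range(1, MAX_PID + 1)
                 if toy_keygen(pid) == victim_key)
print(recovered)  # 12345
```

The brute-force loop is the whole point: 32,768 candidate seeds is nothing, which is why every key generated by the broken package was effectively public.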
More info, from Debian, here. And from the hacker community here. Seems that the bug was introduced in September 2006.
Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we've seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.
The standard way to take control of someone else's computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it's still how most modern malware works.
Vulnerabilities are software mistakes--mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don't get patched, so the Internet is filled with known, exploitable vulnerabilities.
New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?
Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.
Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent--or protect against--those failures. Most software vulnerabilities don't ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.
People without the mindset sometimes think they can design security products, but they can't. And you see the results all over society--in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of "security" on their teams, but it wasn't someone who thought like an attacker.
This mindset is difficult to teach, and may be something you're born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities--again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others' algorithms and protocols. Good software security experts find vulnerabilities in others' code. Good airport security designers figure out new ways to subvert airport security. And so on.
This is so important that when someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good.
Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible--and more and less legal--ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn't whether it's ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.
This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus's half here.
The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB "thumb drive" that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.
The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer's Internet activity, as well as data stored in the computer.
It also eliminates the need to seize a computer itself, which typically involves disconnecting from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.
COFEE, according to forensic folk who have used it, is simply a suite of 150 bundled off-the-shelf forensic tools that run from a script. None of the tools are new or were created by Microsoft. Microsoft simply combined existing programs into a portable tool that can be used in the field before agents bring a computer back to their forensic lab.
Microsoft wouldn't disclose which tools are in the suite other than that they're all publicly available, but a forensic expert told me that when he tested the product last year it included standard forensic products like Windows Forensic Toolchest (WFT) and RootkitRevealer.
With COFEE, a forensic agent can select, through the interface, which of the 150 investigative tools he wants to run on a targeted machine. COFEE creates a script and copies it to the USB device which is then plugged into the targeted machine. The advantage is that instead of having to run each tool separately, a forensic investigator can run them all through the script much more quickly and can also grab information (such as data temporarily stored in RAM or network connection information) that might otherwise be lost if he had to disconnect a machine and drag it to a forensics lab before he could examine it.
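The architecture is nothing exotic, and the general pattern can be sketched in a few lines. This is a hedged illustration of that pattern, not Microsoft's actual code; the tool list is a placeholder.

```python
import pathlib
import subprocess

# Sketch of the COFEE pattern (illustrative, not Microsoft's code): a driver
# script that runs a pre-selected list of off-the-shelf commands on the live
# machine and saves each tool's output for later analysis back at the lab.
def collect(tools, outdir="evidence"):
    out = pathlib.Path(outdir)
    out.mkdir(exist_ok=True)
    for cmd in tools:
        # Capture volatile state (running processes, network connections)
        # before the machine is powered down and that data is lost.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        (out / (cmd.split()[0] + ".txt")).write_text(result.stdout)
    return sorted(p.name for p in out.iterdir())

# A hypothetical tool list an investigator might select through the interface.
print(collect(["hostname", "date", "uname -a"]))
```

That's all "running them through the script" amounts to: sequencing existing tools and bundling their output.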
And it's certainly not a back door, as TechDirt claims.
But given that a Federal court has ruled that border guards can search laptop computers without cause, this tool might see wider use than Microsoft anticipated.
Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.
We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.
At issue is a growing trend in which ISPs subvert the Domain Name System, or DNS, which translates website names into numeric addresses.
When users visit a website like Wired.com, the DNS system maps the domain name into an IP address such as 18.104.22.168. But if a particular site does not exist, the DNS server tells the browser that there's no such listing and a simple error message should be displayed.
But starting in August 2006, Earthlink instead intercepts that Non-Existent Domain (NXDOMAIN) response and sends the IP address of ad-partner Barefruit's server as the answer. When the browser visits that page, the user sees a list of suggestions for what site the user might have actually wanted, along with a search box and Yahoo ads.
The rub comes when a user is asking for a nonexistent subdomain of a real website, such as http://webmale.google.com, where the subdomain webmale doesn't exist (unlike, say, mail in mail.google.com). In this case, the Earthlink/Barefruit ads appear in the browser, while the title bar suggests that it's the official Google site.
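The rewriting is easy to detect from the client side. A minimal sketch of the idea (the base domain is just an example): resolve a random label that cannot possibly exist and see whether you get an answer anyway.

```python
import socket
import uuid

# Sketch of an NXDOMAIN-rewriting probe: ask the resolver for a random,
# guaranteed-nonexistent name. An honest resolver returns an error; an
# ad-injecting resolver returns the address of its ad server instead.
def nxdomain_rewritten(base_domain="example.com"):
    probe = f"{uuid.uuid4().hex}.{base_domain}"  # random nonexistent label
    try:
        addr = socket.gethostbyname(probe)
        return True, addr   # got an answer for a name that should not resolve
    except socket.gaierror:
        return False, None  # proper NXDOMAIN behavior

print(nxdomain_rewritten())
```

A resolver that hands back an ad server's IP for the random probe is exactly the behavior that makes the subdomain attack described above possible.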
The hacker could, for example, send spam e-mails to Earthlink subscribers with a link to a webpage on money.paypal.com. Visiting that link would take the victim to the hacker's site, and it would look as though they were on a real PayPal page.
Kaminsky demonstrated the vulnerability by finding a way to insert a YouTube video from 80s pop star Rick Astley into Facebook and PayPal domains. But a black hat hacker could instead embed a password-stealing Trojan. The attack might also allow hackers to pretend to be a logged-in user, or to send e-mails and add friends to a Facebook account.
Earthlink isn't alone in substituting ad pages for error messages, according to Kaminsky, who has seen similar behavior from other major ISPs including Verizon, Time Warner, Comcast and Qwest.
Usually I don't bother blogging about these, but this one is particularly bad. Anyone with basic SQL knowledge could have registered anyone he wanted as a sex offender.
One of the cardinal rules of computer programming is to never trust your input. This holds especially true when your input comes from users, and even more so when it comes from the anonymous, general public. Apparently, the developers at Oklahoma’s Department of Corrections slept through that day in computer science class, and even managed to skip all of Common Sense 101. You see, not only did they trust anonymous user input on their public-facing website, but they blindly executed it and displayed whatever came back.
The result of this negligently bad coding has some rather serious consequences: the names, addresses, and social security numbers of tens of thousands of Oklahoma residents were made available to the general public for a period of at least three years. Up until yesterday, April 13, 2008, anyone with a web browser and the knowledge from Chapter One of SQL For Dummies could have easily accessed -- and possibly, changed -- any data within the DOC's databases. It took me all of a minute to figure out how to download 10,597 records -- SSNs and all -- from their website.
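For readers who did sleep through that day of class: the mistake is pasting user input directly into the query string. A minimal illustration using Python and SQLite (a reconstruction of the class of bug, not the DOC's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offenders (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO offenders VALUES ('John Doe', '000-00-0000')")

# The vulnerable pattern: user input spliced straight into the SQL string.
user_input = "x' OR '1'='1"
unsafe = f"SELECT * FROM offenders WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # injection dumps every row

# Never trust your input: a parameterized query treats it as data, not SQL.
safe = "SELECT * FROM offenders WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows returned
```

The parameterized version is a one-line change, which is what makes this kind of breach so inexcusable.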
Note that this is the same card -- maybe a different version -- that was used in the Dutch transit system, and was hacked back in January. There's another hack of that system (press release here, and a video demo), and many companies -- and government agencies -- are scrambling in the wake of all these revelations.
Seems like the Mifare system (especially the version called Mifare Classic -- and there are billions out there) was really badly designed, in all sorts of ways. I'm sure there are many more serious security vulnerabilities waiting to be discovered.