French police hacked EncroChat secure phones, which are widely used by criminals:
Encrochat's phones are essentially modified Android devices, with some models using the "BQ Aquaris X2," an Android handset released in 2018 by a Spanish electronics company, according to the leaked documents. Encrochat took the base unit, installed its own encrypted messaging programs which route messages through the firm's own servers, and even physically removed the GPS, camera, and microphone functionality from the phone. Encrochat's phones also had a feature that would quickly wipe the device if the user entered a PIN, and ran two operating systems side-by-side. If a user wanted the device to appear innocuous, they booted into normal Android. If they wanted to return to their sensitive chats, they switched over to the Encrochat system. The company sold the phones on a subscription based model, costing thousands of dollars a year per device.
This allowed them and others to investigate and arrest many:
Unbeknownst to Mark, or the tens of thousands of other alleged Encrochat users, their messages weren't really secure. French authorities had penetrated the Encrochat network, leveraged that access to install a technical tool in what appears to be a mass hacking operation, and had been quietly reading the users' communications for months. Investigators then shared those messages with agencies around Europe.
Only now is the astonishing scale of the operation coming into focus: It represents one of the largest law enforcement infiltrations of a communications network predominantly used by criminals ever, with Encrochat users spreading beyond Europe to the Middle East and elsewhere. French, Dutch, and other European agencies monitored and investigated "more than a hundred million encrypted messages" sent between Encrochat users in real time, leading to arrests in the UK, Norway, Sweden, France, and the Netherlands, a team of international law enforcement agencies announced Thursday.
EncroChat learned about the hack, but didn't know who was behind it.
Going into full-on emergency mode, Encrochat sent a message to its users informing them of the ongoing attack. The company also informed its SIM provider, Dutch telecommunications firm KPN, which then blocked connections to the malicious servers, the associate claimed. Encrochat cut its own SIM service; it had an update scheduled to push to the phones, but it couldn't guarantee whether that update itself wouldn't be carrying malware too. That, and maybe KPN was working with the authorities, Encrochat's statement suggested (KPN declined to comment). Shortly after Encrochat restored SIM service, KPN removed the firewall, allowing the hackers' servers to communicate with the phones once again. Encrochat was trapped.
Encrochat decided to shut itself down entirely.
Lots of details about the hack in the article. Well worth reading in full.
The UK National Crime Agency called it Operation Venetic: "46 arrests, and £54m criminal cash, 77 firearms and over two tonnes of drugs seized so far."
Masks that have made criminals stand apart long before bandanna-wearing robbers knocked over stagecoaches in the Old West and ski-masked bandits held up banks now allow them to blend in like concerned accountants, nurses and store clerks trying to avoid a deadly virus.
"Criminals, they're smart and this is a perfect opportunity for them to conceal themselves and blend right in," said Richard Bell, police chief in the tiny Pennsylvania community of Frackville. He said he knows of seven recent armed robberies in the region where every suspect wore a mask.
Just how many criminals are taking advantage of the pandemic to commit crimes is impossible to estimate, but law enforcement officials have no doubt the numbers are climbing. Reports are starting to pop up across the United States and in other parts of the world of crimes pulled off in no small part because so many of us are now wearing masks.
In March, two men walked into Aqueduct Racetrack in New York wearing the same kind of surgical masks as many racing fans there and, at gunpoint, robbed three workers of a quarter-million dollars they were moving from gaming machines to a safe. Other robberies involving suspects wearing surgical masks have occurred in North Carolina, and Washington, D.C, and elsewhere in recent weeks.
The article is all anecdote and no real data. But this is probably a trend.
Facebook Inc. in 2018 beat back federal prosecutors seeking to wiretap its encrypted Messenger app. Now the American Civil Liberties Union is seeking to find out how.
The entire proceeding was confidential, with only the result leaking to the press. Lawyers for the ACLU and the Washington Post on Tuesday asked a San Francisco-based federal court of appeals to unseal the judge's decision, arguing the public has a right to know how the law is being applied, particularly in the area of privacy.
The Facebook case stems from a federal investigation of members of the violent MS-13 criminal gang. Prosecutors tried to hold Facebook in contempt after the company refused to help investigators wiretap its Messenger app, but the judge ruled against them. If the decision is unsealed, other tech companies will likely try to use its reasoning to ward off similar government requests in the future.
A federal court has ruled that violating a website's terms of service is not "hacking" under the Computer Fraud and Abuse Act.
The plaintiffs wanted to investigate possible racial discrimination in online job markets by creating accounts for fake employers and job seekers. Leading job sites have terms of service prohibiting users from supplying fake information, and the researchers worried that their research could expose them to criminal liability under the CFAA, which makes it a crime to "access a computer without authorization or exceed authorized access."
So in 2016 they sued the federal government, seeking a declaration that this part of the CFAA violated the First Amendment.
But rather than addressing that constitutional issue, Judge John Bates ruled on Friday that the plaintiffs' proposed research wouldn't violate the CFAA's criminal provisions at all. Someone violates the CFAA when they bypass an access restriction like a password. But someone who logs into a website with a valid password doesn't become a hacker simply by doing something prohibited by a website's terms of service, the judge concluded.
"Criminalizing terms-of-service violations risks turning each website into its own criminal jurisdiction and each webmaster into his own legislature," Bates wrote.
Bates noted that website terms of service are often long, complex, and change frequently. While some websites require a user to read through the terms and explicitly agree to them, others merely include a link to the terms somewhere on the page. As a result, most users aren't even aware of the contractual terms that supposedly govern the site. Under those circumstances, it's not reasonable to make violation of such terms a criminal offense, Bates concluded.
This is not the first time a court has issued a ruling in this direction. It's also not the only way the courts have interpreted the frustratingly vague Computer Fraud and Abuse Act.
The GIF set off a highly unusual court battle that is expected to equip those in similar circumstances with a new tool for battling threatening trolls and cyberbullies. On Monday, the man who sent Eichenwald the moving image, John Rayne Rivello, was set to appear in a Dallas County district court. A last-minute rescheduling delayed the proceeding until Jan. 31, but Rivello is still expected to plead guilty to aggravated assault. And he may be the first of many.
The Epilepsy Foundation announced on Monday it lodged a sweeping slate of criminal complaints against a legion of copycats who targeted people with epilepsy and sent them an onslaught of strobe GIFs -- a frightening phenomenon that unfolded in a short period of time during the organization's marking of National Epilepsy Awareness Month in November.
Rivello's supporters -- among them, neo-Nazis and white nationalists, including Richard Spencer -- have also argued that the issue is about freedom of speech. But in an amicus brief to the criminal case, the First Amendment Clinic at Duke University School of Law argued Rivello's actions were not constitutionally protected.
"A brawler who tattoos a message onto his knuckles does not throw every punch with the weight of First Amendment protection behind him," the brief stated. "Conduct like this does not constitute speech, nor should it. A deliberate attempt to cause physical injury to someone does not come close to the expression which the First Amendment is designed to protect."
Back in 1998, Tim May warned us of the "Four Horsemen of the Infocalypse": "terrorists, pedophiles, drug dealers, and money launderers." I tended to cast it slightly differently. This is me from 2005:
Beware the Four Horsemen of the Information Apocalypse: terrorists, drug dealers, kidnappers, and child pornographers. Seems like you can scare any public into allowing the government to do anything with those four.
Which particular horseman is in vogue depends on time and circumstance. Since the terrorist attacks of 9/11, the US government has been pushing the terrorist scare story. Recently, it seems to have switched to pedophiles and child exploitation. It began in September, with a long New York Times story on child sex abuse, which included this dig at encryption:
And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed a record 45 million photos and videos, according to the National Center for Missing and Exploited Children.
(That's wrong, by the way. Facebook Messenger already has an encrypted option. It's just not turned on by default, like it is in WhatsApp.)
That was followed up by a conference by the US Department of Justice: "Lawless Spaces: Warrant Proof Encryption and its Impact on Child Exploitation Cases." US Attorney General William Barr gave a speech on the subject. Then came an open letter to Facebook from Barr and others from the UK and Australia, using "protecting children" as the basis for their demand that the company not implement strong end-to-end encryption. (I signed on to another open letter in response.) Then, the FBI tried to get Interpol to publish a statement denouncing end-to-end encryption.
This week, the Senate Judiciary Committee held a hearing on backdoors: "Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy." Video, and written testimonies, are available at the link. Erik Neuenschwander from Apple was there to support strong encryption, but the other witnesses were all against it. New York District Attorney Cyrus Vance was true to form:
In fact, we were never able to view the contents of his phone because of this gift to sex traffickers that came, not from God, but from Apple.
It was a disturbing hearing. The Senators asked technical questions to people who couldn't answer them. The result was that an adjunct law professor was able to frame the issue of strong encryption as an externality caused by corporate liability dumping, and another example of Silicon Valley's anti-regulation stance.
Let me be clear. None of us who favor strong encryption is saying that child exploitation isn't a serious crime, or a worldwide problem. We're not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, law enforcement has other avenues for criminal investigation than eavesdropping on communications and stored devices. This is one example, where people unraveled a dark-web website and arrested hundreds by analyzing Bitcoin transactions. This is another, where police arrested members of a WhatsApp group.
So let's have reasoned policy debates about encryption -- debates that are informed by technology. And let's stop it with the scare stories.
EDITED TO ADD (12/13): The DoD just said that strong encryption is essential for national security.
All DoD issued unclassified mobile devices are required to be password protected using strong passwords. The Department also requires that data-in-transit, on DoD issued mobile devices, be encrypted (e.g. VPN) to protect DoD information and resources. The importance of strong encryption and VPNs for our mobile workforce is imperative. Last October, the Department outlined its layered cybersecurity approach to protect DoD information and resources, including service men and women, when using mobile communications capabilities.
As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources. The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security.
In a BGP hijack, a malicious actor convinces nearby networks that the best path to reach a specific IP address is through their network. That's unfortunately not very hard to do, since BGP itself doesn't have any security procedures for validating that a message is actually coming from the place it says it's coming from.
To better pinpoint serial attacks, the group first pulled data from several years' worth of network operator mailing lists, as well as historical BGP data taken every five minutes from the global routing table. From that, they observed particular qualities of malicious actors and then trained a machine-learning model to automatically identify such behaviors.
The system flagged networks that had several key characteristics, particularly with respect to the nature of the specific blocks of IP addresses they use:
Volatile changes in activity: Hijackers' address blocks seem to disappear much faster than those of legitimate networks. The average duration of a flagged network's prefix was under 50 days, compared to almost two years for legitimate networks.
Multiple address blocks: Serial hijackers tend to advertise many more blocks of IP addresses, also known as "network prefixes."
IP addresses in multiple countries: Most networks don't have foreign IP addresses. By contrast, the address blocks that serial hijackers advertised were far more likely to be registered in multiple countries and continents.
Note that this is much more likely to detect criminal attacks than nation-state activities. But it's still good work.
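The three indicators above can be sketched as simple features computed over per-AS advertisement records. This is a minimal illustration of the idea, not the researchers' actual pipeline; all names, dates, and ASNs here are made up:

```python
from dataclasses import dataclass
from datetime import date

# Toy BGP advertisement record: which AS announced which prefix, when it
# appeared and disappeared from the routing table, and where the prefix
# is registered. (Illustrative only; the real study used years of
# global routing-table snapshots taken every five minutes.)
@dataclass
class Advertisement:
    asn: int
    prefix: str
    country: str
    first_seen: date
    last_seen: date

def suspicion_features(ads):
    """Compute the three indicators described above for one AS."""
    durations = [(a.last_seen - a.first_seen).days for a in ads]
    return {
        # Volatile activity: hijackers' prefixes vanish quickly
        "avg_prefix_lifetime_days": sum(durations) / len(durations),
        # Many distinct address blocks advertised
        "num_prefixes": len({a.prefix for a in ads}),
        # Prefixes registered across multiple countries
        "num_countries": len({a.country for a in ads}),
    }

# A hypothetical serial hijacker: short-lived prefixes in several countries.
hijacker_ads = [
    Advertisement(64500, "203.0.113.0/24", "US", date(2019, 1, 1), date(2019, 2, 10)),
    Advertisement(64500, "198.51.100.0/24", "RO", date(2019, 3, 1), date(2019, 4, 5)),
    Advertisement(64500, "192.0.2.0/24", "BR", date(2019, 5, 1), date(2019, 6, 1)),
]
features = suspicion_features(hijacker_ads)
print(features)
```

In the actual work, features like these were fed to a machine-learning classifier trained on networks already known to be malicious from operator mailing lists.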
German investigators said Friday they have shut down a data processing center installed in a former NATO bunker that hosted sites dealing in drugs and other illegal activities. Seven people were arrested.
Thirteen people aged 20 to 59 are under investigation in all, including three German and seven Dutch citizens, Brauer said.
Authorities arrested seven of them, citing the danger of flight and collusion. They are suspected of membership in a criminal organization because of a tax offense, as well as being accessories to hundreds of thousands of offenses involving drugs, counterfeit money and forged documents, and accessories to the distribution of child pornography. Authorities didn't name any of the suspects.
The data center was set up as what investigators described as a "bulletproof hoster," meant to conceal illicit activities from authorities' eyes.
Investigators say the platforms it hosted included "Cannabis Road," a drug-dealing portal; the "Wall Street Market," which was one of the world's largest online criminal marketplaces for drugs, hacking tools and financial-theft wares until it was taken down earlier this year; and sites such as "Orange Chemicals" that dealt in synthetic drugs. A botnet attack on German telecommunications company Deutsche Telekom in late 2016 that knocked out about 1 million customers' routers also appears to have come from the data center in Traben-Trarbach, Brauer said.
Criminals in 2017 managed to get an advanced backdoor preinstalled on Android devices before they left the factories of manufacturers, Google researchers confirmed on Thursday.
Triada first came to light in 2016 in articles published by Kaspersky here and here, the first of which said the malware was "one of the most advanced mobile Trojans" the security firm's analysts had ever encountered. Once installed, Triada's chief purpose was to install apps that could be used to send spam and display ads. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and the means to modify the Android OS' all-powerful Zygote process. That meant the malware could directly tamper with every installed app. Triada also connected to no fewer than 17 command and control servers.
In July 2017, security firm Dr. Web reported that its researchers had found Triada built into the firmware of several Android devices, including the Leagoo M5 Plus, Leagoo M8, Nomu S10, and Nomu S20. The attackers used the backdoor to surreptitiously download and install modules. Because the backdoor was embedded into one of the OS libraries and located in the system section, it couldn't be deleted using standard methods, the report said.
On Thursday, Google confirmed the Dr. Web report, although it stopped short of naming the manufacturers. Thursday's report also said the supply chain attack was pulled off by one or more partners the manufacturers used in preparing the final firmware image used in the affected devices.
This is a supply chain attack. It seems to be the work of criminals, but it could just as easily have been a nation-state.
The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI -- and some of their peer agencies in the UK, Australia, and elsewhere -- argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it's about end-user devices. On the other side of this debate is pretty much all technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.
A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK's GCHQ (the British signals-intelligence agency -- basically, its NSA). It's actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:
In a world of encrypted services, a potential solution could be to go back a few decades. It's relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who's who and which devices are involved -- they're usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there's an extra 'end' on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn't give any government power they shouldn't have.
On the surface, this isn't a big ask. It doesn't affect the encryption that protects the communications. It only affects the authentication that assures people of whom they are talking to. But it's no less dangerous a backdoor than any others that have been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.
In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications, but it also means changing the client program on everyone's phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would only be targeted against specific individuals and their communications, but it's still a general backdoor that could be used against anybody.
The basic problem is that a backdoor is a technical capability -- a vulnerability -- that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.
That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit -- not just the police -- but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).
The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA -- and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop on, mirroring that old physical system with computers. It is not the same thing, though. It doesn't have those same physical limitations that make it more secure. It can be administered remotely. And it's implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.
This isn't a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the eavesdropping feature mandated by CALEA. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and over 100 other high-ranking dignitaries.
There's nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would have to add a feature that silently added additional parties to a chat from somewhere in the system -- and not by the people at the endpoints. It would have to suppress any messages alerting users that another party had been added to that chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the "tell me who is in this chat conversation" feature, which amounts to the same thing.
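The mechanics can be illustrated with a toy model of server-controlled group membership. This is a deliberately simplified sketch, not any real messenger's API; all class and method names are hypothetical:

```python
# Toy model of the "ghost user" idea: because the service provider
# controls the identity system, it decides who is in a group. Honest
# membership changes generate a notice that clients show to users; the
# ghost path makes the identical membership change without one.
class GroupChat:
    def __init__(self, members):
        self.members = set(members)
        self.notices = []  # system messages that clients display

    def add_member(self, who):
        """The honest path: members are told about the new party."""
        self.members.add(who)
        self.notices.append(f"{who} was added to the chat")

    def add_ghost(self, who):
        """The backdoor path: same change, but no notice is emitted.
        From now on, every message is also encrypted to this key --
        end-to-end encryption is intact, but there's an extra 'end'."""
        self.members.add(who)

chat = GroupChat({"alice", "bob"})
chat.add_member("carol")   # everyone sees "carol was added"
chat.add_ghost("agency")   # full member, never announced
print(sorted(chat.members))
print(chat.notices)
```

The point of the sketch is that the only difference between the two paths is a suppressed notification: the cryptography is untouched, and the vulnerability lives entirely in the identity and notification layer that users must trust.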
And once that's in place, every government will try to hack it for its own purposes -- just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone -- we don't know who -- hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven't been made public.
Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country's government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?
Of course not.
Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate -- but the resultant confusion caused some people to abandon the encrypted messaging service.
Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that "any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users," this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.
In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They reduce our reliance on security technologies we know how to do well -- cryptography -- to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it's a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.
The foregoing discussion is a specific example of a broader discussion that we need to have, and it's about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement -- and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and -- unavoidably -- law enforcement as well?
This discussion is larger than the FBI's ability to solve crimes or the NSA's ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece's?
I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We'd be much better off increasing law enforcement's technical ability to investigate crimes in the modern digital world than we would be to weaken security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it's one that we would be better served by closing than exploiting.