Juniper Routers: M320 is currently being worked on and we would expect to have full support by the end of 2010.
No other models are currently supported.
Juniper technology sharing with NSA improved dramatically during CY2010 to exploit several target networks where GCHQ had access primacy.
Yes, the document said "end of 2010" even though the document is dated February 3, 2011.
This doesn't have much to do with the Juniper backdoor currently in the news, but the document does provide even more evidence that (despite what the government says) the NSA hoards vulnerabilities in commonly used software for attack purposes instead of improving security for everyone by disclosing them.
EDITED TO ADD: In thinking about the equities process, it's worth differentiating among three different things: bugs, vulnerabilities, and exploits. Bugs are plentiful in code, but not all bugs can be turned into vulnerabilities. And not all vulnerabilities can be turned into exploits. Exploits are what matter; they're what everyone uses to compromise our security. Fixing bugs and vulnerabilities is important because they could potentially be turned into exploits.
I think the US government deliberately clouds the issue when they say that they disclose almost all bugs they discover, ignoring the much more important question of how often they disclose exploits they discover. What this document shows is that -- despite their insistence that they prioritize security over surveillance -- they like to hoard exploits against commonly used network equipment.
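The bug/vulnerability/exploit distinction can be made concrete with a toy sketch (all names here are hypothetical, not from any real product): an unchecked length field is the bug, the out-of-bounds read it permits is the vulnerability, and a request crafted to trigger that read is the exploit.

```python
# Toy illustration of the bug -> vulnerability -> exploit chain.
# Hypothetical code, not taken from any real product.

SECRET = b"private-key-material"

def handle_echo(request: bytes) -> bytes:
    """Echo service: the first byte declares the payload length."""
    claimed_len = request[0]   # BUG: the declared length is never
    payload = request[1:]      # checked against the actual payload.
    memory = payload + SECRET  # adjacent "process memory", simulated
    # VULNERABILITY: an over-large claimed_len reads past the payload.
    return memory[:claimed_len]

# Normal use: the bug is invisible.
assert handle_echo(bytes([5]) + b"hello") == b"hello"

# EXPLOIT: a crafted request turns the over-read into a secret leak.
leaked = handle_echo(bytes([40]) + b"hi")
assert SECRET in leaked
```

Fixing the bug (rejecting any request whose declared length exceeds the payload actually received) closes the vulnerability before anyone can build the exploit, which is why disclosure of bugs and vulnerabilities matters even when no exploit is yet known.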
As lots of clever people have noted on this blog for years, the only way to achieve true security is to use open-source products. Anything closed source cannot be trusted, let alone commercial products developed by members of the FVEY.
Let me answer the White House Cybersecurity Coordinator Michael Daniel's questions for him, based on what we have seen so far of the massive stockpiling of zero-days for years at a time, with nil regard given to the wider public, commerce, and the security of US and global networks:
How much is the vulnerable system used in the core internet infrastructure, in other critical infrastructure systems, in the U.S. economy, and/or in national security systems?
Ans: Usually critical systems are left vulnerable for years at a time, e.g. SSL.
Does the vulnerability, if left unpatched, impose significant risk?
Yes. The more systems that are affected, the hornier the NSA and its 5-Eyes partners get.
How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
A lot. This is the reason OPM and other critical network infrastructure have already been compromised. The NSA would rather play cyber Astro Boy than get serious about protecting themselves and others from criminals. Indeed they are themselves criminals: not subject to codified laws, nor ever held to account for any breach, no matter how egregious.
How likely is it that we would know if someone else was exploiting it?
Very likely, but deemed irrelevant. NSA toy boys would rather attack and infiltrate systems than care about any other group using the known weakness, likely sold on black markets for a quick buck by insiders.
How badly do we need the intelligence we think we can get from exploiting the vulnerability?
Not very. The 3-4 letter agencies have already shown themselves to be inept, and drowning in data, based on the disclosures of insiders and veterans of the system, e.g. William Binney.
Are there other ways we can get it?
Yes. By now everybody knows that the NSA can hack everything from Barbie in your kid's bedroom to offline nuclear reactors e.g. Stuxnet, with targeted approaches.
Could we utilize the vulnerability for a short period of time before we disclose it?
No. The intelligence agencies get wet playing cyber wizard and would rather be water-boarded than share their 'precious'.
How likely is it that someone else will discover the vulnerability?
Extremely likely. Based on insider sales, intelligence of agents in innumerable other sovereign nations, and the findings of researchers over the years, including the resident genius Schneier.
Can the vulnerability be patched or otherwise mitigated?
Of course, but cyber wizards with delusions of grandeur who have embedded Hollywood fantasies as likely scenarios in their paranoid minds will never give up anything willingly.
From the "Assessment of Intelligence Opportunity — Juniper" document:
Juniper as a Threat
Juniper’s leadership in core IP routing and the Enterprise Network Firewall and SSL VPN markets means that the SIGINT community should keep up with Juniper technology to be positioned to maintain CNE access over time. The threat comes from Juniper’s investment and emphasis on being a security leader. If the SIGINT community falls behind, it might take years to regain a Juniper firewall or router access capability if Juniper continues to rapidly increase their security.
(as seen on p. 5 of 7), followed by this bullet in the Target Usage of Juniper section on the same page:
Pakistan — Juniper firewalls are central to the very high priority HEADRESS NU project targeting a Pakistan government/military secure network. While the core Internet routers in Pakistan are all Cisco, Juniper is often seen as an edge router on networks connected to the core. Juniper routers are deployed in the Mobilink network and possibly Telenor.
Anything coming out of the US is considered bulletproof by criminals in Russia and other countries. No one should have any doubt that there would ever be a backdoor in Cisco, EMC, Intel, Microsoft, Oracle, Dell, HDS, or HP products; I am sure no one in the USA would intentionally put a backdoor in them. Absolutely no one would do that, lol!
There are so many unsubstantiated allegations against the US government and his four other partners, these are totally baseless, unpatriotic and dangerous. There is absolutely no reason to believe NSA would plant holes into US commercial products, because that would hurt their long-term reputation. Of course, NSA and IT firms don't cooperate with each other on security flaws.
Undoubtedly, scientia est potentia, but it is not our business to feed them with knowledge. Our activities are legal, as it is our constitutional right to run computing systems that protect our privacy and increase our security.
"There are so many unsubstantiated allegations against the US government and his four other partners, these are totally baseless, unpatriotic and dangerous."
I totally agree. There are too many baseless claims distracting from the real issues in online forums. Aliens controlling Obama, hiding the bodies of certain celebrities, conspiracies seeing Putin as a Western ally... so much bullshit. It's kind of ridiculous.
"There is absolutely no reason to believe NSA would plant holes into US commercial products, because that would hurt their long-term reputation."
Truly, no rational person would've believed this without reading years of memoirs and declassified documents from the CIA, NSA, etc. *Those* showed many examples of internal and external subversion to achieve political or intelligence goals. These attempts sometimes succeeded, and sometimes the shit hit the fan. Yet prosecutions for such things were rare, with the intelligence community largely having immunity. A rational person, seeing their prior activities and knowing no punishment happens, would conclude they'll try it again without strong accountability.
"Of course, NSA and IT firms don't cooperate with each other on security flaws."
A rational person would agree with this, too. It's clear most IT firms don't cooperate with the NSA. A select few, like those in the Snowden slides, either inform them about existing flaws or introduce new ones. Others merely respond to the FBI's and secret courts' methods of "compelling SIGINT-enabling" (source: ECI leaks), whatever that involves. Most IT providers certainly don't work with the NSA, though, probably only selling out their customers in response to warrants, national security letters, and whatever ECI referred to.
So, all good points in your effort to focus people's minds on how irrational the subversion is, that the lack of independent audits has given us little data, that a minority of U.S. companies sell customers out to the NSA, and that the rest were coerced by secret means. Bravo! ;)
@Who?: seriously? OSS is "better"? Hmm IPSEC, OpenSSL, OpenBSD. That should do for a start...
Basically OSS is no better than "closed source" (aka commercial software), and in fact has been demonstrated to be really quite exceptionally poorly written. Please, the myth that OSS is somehow "superior" in terms of quality is long dead.
The case of OpenBSD (and its IPSEC stack) shows that OSS is very much open to subversion, and further proves that little to no code review takes place in OSS, despite the fact that all the source code is available (OpenBSD only caught it because they did an explicit code review following an apparent tip-off).
The debacle involving the TLS heartbeat extension (Heartbleed) is yet further evidence that changes to OSS are not adequately checked at the commit stage, so catching them after the fact is even more remote.
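That TLS debacle (Heartbleed, in OpenSSL's heartbeat handling) ultimately came down to trusting an attacker-supplied length field, and the fix was essentially a one-line bounds check. A minimal sketch of the pattern, with hypothetical names rather than the actual OpenSSL code:

```python
# Sketch of the Heartbleed class of flaw and its fix. Hypothetical
# names and a simulated memory layout, not the real OpenSSL code.
# A heartbeat record declares a payload length that the peer is
# asked to echo back.

SECRETS = b"\x00session-keys\x00"  # data adjacent to the request buffer

def heartbeat_vulnerable(declared_len, payload):
    memory = payload + SECRETS     # adjacent process memory, simulated
    return memory[:declared_len]   # trusts the attacker's length field

def heartbeat_fixed(declared_len, payload):
    # The fix: silently discard records whose declared length
    # exceeds the payload actually received.
    if declared_len > len(payload):
        return None
    return payload[:declared_len]

assert SECRETS in heartbeat_vulnerable(64, b"ping")  # leaks memory
assert heartbeat_fixed(64, b"ping") is None          # record dropped
assert heartbeat_fixed(4, b"ping") == b"ping"        # normal echo
```

The point about commit-stage review stands either way: a check this small is easy to omit and easy to overlook in a diff, whether the omission is accidental or deliberate.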
...against the US government and his four other partners...
The gender convention for sovereign nations in English is "her" not "his", the same as it is for machines.
@ No Such Agency,
Basically OSS is no better than "closed source" (aka commercial software), and in fact has been demonstrated to be really quite exceptionally poorly written.
Please do not conflate two entirely separate issues; it makes your argument weak at best.
Open Source Software has the advantage that "the source is available if you should wish to use it", which is generally not the case for Closed Source Software.
The fact that code can be written well or badly under either Open or Closed Source is an entirely separate issue that should fall under the QA policy of the developing entity.
The level of QA is affected by a number of things, not least of which is the "resources made available" by the developing entity.
As a rough rule of thumb, small entities lack QA resources irrespective of whether they are Open or Closed Source.
However there is an argument that says that the more open a system is to view the more likely it is to have better quality processes (though I've yet to see sufficient comparative evidence). But this is tempered by the desire to "get to market" quickly.
I've actually worked at organisations where they claim high QA standards, but exempt the design process from any QA requirements...
I've also worked at other places where "code review" was at best a "box check" and at worst a completely ineffectual bad joke.
Thus I would argue that the quality of the source code is down to both organisational resources and ethics that get put into the system processes, not the availability of the source.
In times past Microsoft were quite rightly vilified for the quality of their code. However they have improved their system processes and things appear to have improved (in that they are no longer "the worst kid on the block").
Speaking of "myths", it always amazes me why people put so much faith in code signing... perhaps they feel the need to believe in something, despite the reality.
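After all, signing only proves who produced the bits, not that the bits are benign; a signature from a coerced or compromised vendor verifies just as cleanly. A toy sketch of this limitation (using an HMAC as a stand-in for a real public-key signature scheme; names are hypothetical):

```python
import hashlib
import hmac

# Stand-in for a vendor's private signing key. In reality this would
# be an asymmetric key pair, but the limitation shown is the same.
VENDOR_KEY = b"vendor-signing-key"

def sign(blob: bytes) -> bytes:
    """Produce a 'signature' over an update blob."""
    return hmac.new(VENDOR_KEY, blob, hashlib.sha256).digest()

def verify(blob: bytes, sig: bytes) -> bool:
    """Check a signature with a constant-time comparison."""
    return hmac.compare_digest(sign(blob), sig)

clean = b"legitimate update"
backdoored = b"legitimate update + hidden backdoor"

# Signing asserts origin and integrity, nothing about intent or
# quality: whatever the key holder signs, verifies.
assert verify(clean, sign(clean))
assert verify(backdoored, sign(backdoored))
# It does catch tampering by anyone *without* the key:
assert not verify(clean, sign(backdoored))
```

So code signing is useful against third-party tampering in transit, but it offers no protection at all against the signer themselves, which is precisely the threat model under discussion here.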
The reason why Cisco and Ericsson are not named is because they have National Security Agreements with the NSA. If you want to provide kit to the US Government, you must sign an NSA. When you sign an NSA with the NSA, then you have an obligation to hand over your source code... You can guess the rest.
“Thanks to a 1970’s-era student privacy law, school districts don’t need parents’ written consent to share information about students with the companies they contract with, privacy advocates said. Many school districts simply leave parents “in the dark” about who is tracking their children’s activities in school, said Barnes. A 2013 study examining a national sample of public school districts found that 95 percent of them relied on some sort of online cloud service. But only a quarter of those districts informed parents about their use of such systems and one in five lacked policies governing the use of online services. Fewer than a quarter of service agreements spelled out why the school district was disclosing student information to a vendor and less than 7 percent restricted vendors from selling or marketing data about students, according to the study. “I think officials in most school districts in the U.S. are COMPLETELY IGNORANT about the information they are giving out about their kids,” said Joel Reidenberg, a Fordham University law professor and one of the authors of the study.” [www.washingtonpost.com]
Frankly speaking, the best and brightest university students rarely become educators. Many young women educators are themselves hooked on social media. These addicts have no defenses against social media's invasive tracking and building of dossiers. Since many teachers do NOT look out for their young, innocent, vulnerable students, they are unfit to be educators in this high-tech age.
Neither are virtually all young mothers who are too busy posting their children's lives on Facebook. If you speak to them they aren't interested, or even become angry when asked to think critically: what the hell have I been doing, publishing my child's most sensitive personal info? The same ones sign away Big-Data personal health releases at the doctor's office without even reading them. Since they too do NOT look out for their child's best interest, they are really unfit to be mothers in this high-tech age. Russia published a report stating the majority of Americans are dumbed down by anti-depressants. Politicians are incompetent too, being totally corrupted by big-data re-election money and the dirt in their dossier. Law enforcement are totally dependent on every aspect of Big-Data for their next promotion. Big-data cheapens and degrades every aspect of our lives. But it sure is convenient for the incompetent and lazy. 'I Surrender All' used to have a Good meaning!
I know the history behind the OpenBSD backdoor. It all started with an email sent by Gregory Perry to Theo de Raadt announcing that the FBI had implemented a number of backdoors and side-channel key-leaking mechanisms in the OCF, for the express purpose of monitoring the site-to-site VPN encryption system implemented by EOUSA, the parent organization to the FBI. Here, OCF stands for "OpenBSD Cryptographic Framework". Mr. de Raadt immediately made the email public on the [email protected] mailing list so that an audit could be done by those OpenBSD users who use the OCF, so that people could take whatever actions they felt appropriate, and so that, in case the announcement was not true, those being accused could defend themselves.
In the end it looks like revenge by a former developer who tried to damage the reputation of the OpenBSD project.
It is sad this story is reborn again and again. A lot of people ask me about the backdoors from time to time; they know something happened but miss the important point: OpenBSD has not been backdoored.
The careful review of all commits to the source tree by other hackers of the OpenBSD project makes it nearly impossible to implant a backdoor in the base system. The OpenBSD source code is permanently and carefully audited.
The problem with the "alleged" NETSEC backdoor in IPSEC is threefold:
1, Firstly, some people want it to be true.
2, Secondly, not finding a backdoor is logically not proof that one does not exist.
3, Thirdly, all IPSEC code is a mess due to the specifications, so bugs are going to happen.
The first point is not an attack on OpenBSD, but in some cases the notion of high-quality Free Open Source Software gets right up some people's noses as somehow being "Cancerous" / "communism" or "Anti-American" behaviour (a point Comey and Co are going to get around to at some point, no doubt). Others want IPSEC dead and buried for various reasons and thus will try to attack and discredit anything that has it in.
The second point is a fact of life that is almost always going to be true to some extent. The Dual EC DRBG and NSA/NIST issue was a warning, and the recent Juniper Networks kit is just one possible example of that class of attack. It demonstrates that sometimes we just do not know enough, so any code review, no matter how good, will fail at that point in time, and the flaw may not get caught again.
As for the third point, IPSEC is a mess for a whole host of reasons, not least that it was trying to solve the wrong problem, too late in the game, and thus tried to cover too many bases.
I for one would urge people to consider IPSEC like WEP; that is, it only provides a level of privacy sufficient to keep out those not really trying. If you want real secrecy then there is one heck of a lot more you need to do, in nearly all areas, not just networking, to stop it being stepped around in some way.
The problem being that OpSec is hard, and it gets in the way of what people want to do now. A point the IC, amongst many others, have come to rely on, which is one reason they are getting "uppity". As various tools make implementing better OpSec easier with time, the IC et al are finding their current lazy approach starting to fail. And the bleating they are making is in part because they don't want to go back to the old days when they had to get up from the comfort of their desks and go and do some real work via legal methods with oversight and other safeguards.
I'm a strong disliker of IPSEC, but I make sure I make it clear it's IPSEC as an idea/standard that is the problem, not those who have the misfortune of trying to make it work. Further, I point out the need to mitigate in other areas. That said, I cannot know how people want to hear the warnings, and thus how they pass them on to others; such is the politics of life and sales / specmanship.
Thanks a lot for your comments. I completely agree with you. (1) and (2) are clear enough; (1) should not be a problem, however I have learnt that people do not really like to think for themselves. (2) is obviously true. Even if the OCF is open to public review, and lots of knowledgeable developers and users look at it, there is a risk of currently unknown bugs lying in the code, either intentional or not. It would not be the first time the OpenBSD team discovers a previously unknown class of vulnerabilities in the code and a full audit is required. We know that unknown to the public does not necessarily mean unknown to powerful adversaries.
May I ask you about (3)?
As you note, you're a strong disliker of IPSEC, but you make it clear it's IPSEC as an idea/standard that is the problem, not those who have the misfortune of trying to make it work. I completely agree; overly complicated standards are not a good basis. Standards should be simple, especially the ones related to security.
However, what do you see as the weak points of IPSEC? Is the problem the use of weak pre-shared keys, or even the possible existence of keys shared by all nodes, so that a compromise of one node means a compromise of the entire virtual private network? Is PSK itself the problem?
Is the fact that, even if encryption algorithms are (supposedly) strong, there are other weak points, like the key exchange mechanism? What combination of algorithms would you suggest? Performance is not a problem to me, security is.
Perhaps IPSEC falsely makes people feel secure, so they care less about using additional encryption tools or setting up appropriate firewall rules to protect their IPSEC nodes, or even ignore basic OPSEC principles?
I understand IPSEC as just another layer of security. I think that IPSEC is better than nothing, but it obviously does not replace other tools or fundamental OPSEC.
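The shared-key concern above can be made concrete with a deliberately simplified sketch. The derivation below is hypothetical: real IKE adds a Diffie-Hellman exchange, so a stolen PSK typically enables impersonation and active man-in-the-middle rather than purely passive decryption. But if every node derived traffic keys only from the group PSK plus cleartext nonces, compromising any one member would expose every pair's keys:

```python
import hashlib
import hmac
import os

# Sketch of why a group-wide pre-shared key is fragile. Hypothetical
# derivation, NOT the real IKE/IPSEC exchange: real IKE mixes in a
# Diffie-Hellman shared secret, which this sketch omits on purpose.

GROUP_PSK = os.urandom(32)  # the one secret shared by all VPN nodes

def session_key(psk: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    """Derive a traffic key from the PSK and two public nonces."""
    return hmac.new(psk, nonce_a + nonce_b, hashlib.sha256).digest()

# Nodes A and B negotiate; their nonces cross the wire in the clear.
nonce_a, nonce_b = os.urandom(16), os.urandom(16)
key_ab = session_key(GROUP_PSK, nonce_a, nonce_b)

# An attacker who compromised node C (any group member) holds
# GROUP_PSK, and has passively captured the nonces, so the A<->B
# traffic key falls too.
key_recovered = session_key(GROUP_PSK, nonce_a, nonce_b)
assert key_recovered == key_ab
```

This is one reason per-pair keys, or better, certificate-based authentication with an ephemeral key exchange, are preferred over a single group PSK: the blast radius of one compromised node stays local.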
I see via Twitter that Cisco has a recent blogpost about how they want to be perceived as trustworthy, and they elaborate by pointing out how they have a "no backdoor policy". 0 comments though, as comments are disabled.
I wish companies would say simply that they don't allow backdoors, instead of doing what Cisco does: seeming precise while maintaining an ironic distance from their own work by referring to "our development practices". To my mind it is like referencing a reference, or explaining an explanation, ultimately vague and possibly pointless and meaningless imo. Same thing on their 'Trust and Transparency Center' page, in the part about wanting to help customers "manage risk" with the information Cisco provides: the notion of "helping customers manage risk" becomes more of a performative statement (an expression meant to sound poignant rather than to explain anything), not a reference to 'risk management' as something important in itself. So the following sentence seems equivocal (having more than one meaning): "The Cisco Trust and Transparency Center is dedicated to providing you with the information, resources and answers to your cybersecurity questions that help you manage risk." 'Helping someone manage risk' and 'managing risk' become two wildly different things. Philosophically, I guess it is difficult to know where they want to be poignant and where they want to explain something to you.