Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:
Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.
It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.
This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?
The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
EDITED TO ADD (3/1): Another article, with more information. It looks like there's an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth -- which they may or may not be.
The ARM TrustZone (TZ) is only capable of stopping moderate software-level attacks, since TZ, to put it simply, is a second kernel (typically a microkernel) with much more privilege than the normal CPU and userspace kernel.
The Secure Enclave is derived from the TZ so if the Enclave is broken, it is due to the weaknesses in the designs of the TZ as well as the Enclave.
It can be an architectural vulnerability or an implementation vulnerability, but whichever the case, a break in the Enclave may spread to other ARM Cortex-A series chipsets that inherently carry the TZ technology. That would effectively be one of the biggest breaks, as most modern smartphones use some form of ARM Cortex-A series chipset with TZ technology built in, whether you like it or not.
Even if you are not using Qualcomm, Apple or Samsung chipsets, many other ARM Cortex-A series chips, such as the NXP i.MX 6 or 8 series, also carry TZ.
In fact, I have included the i.MX 6 and 8 brochures in the link below; both brochures clearly state that their chipsets have TZ embedded in the chip.
So if you believe the claims that the Librem 5 is going to be all free and open, think again. TZ requires NDAs.
Even if the Librem 5 does not want to enable TZ on the chip, can you say with 110% assurance that the TZ module in the chipset is not doing things on its own without your permission, knowing that the TZ module has its own highly privileged ring -1 (and lower) CPU access, where a typical userspace kernel only has ring 0?
Also, we should note that Intel SGX (part of Intel AMT/ME) and the AMD PSP (similar to Intel's) are all descendants of a common ancestor, the ARM TZ. The core architecture of TZ is to invest huge privileges and powers in the TZ kernel, which is not in the user's control, runs of its own accord, has the power to access all interfaces and peripherals, and can inspect, halt and modify all CPU execution (instructions and data) at will.
This isn't going to sit well with many who want to support the Librem 5, because the i.MX 6/8 has a TZ in it which is beyond anyone's control, even Librem's, and the chipmaker (NXP) is the only one that can set the hardware ROM and OTP bits (which typically contain an RSA-2048 key or an AES key).
I have tried to reach out to the Purism team to offer my help with the Librem 5, but no reply came back, despite my offering free aid and service, pointing out the above facts via multiple emails, and warning them of the perils of TZ.
I have mentioned many problems with the TZ in these comments (use the search), and like me, @Clive Robinson has found TZ to be problematic. There are many real-world attacks on the TZ too.
I have also mentioned in the past that Qualcomm decided to step up its game in the Snapdragon 845 chipset by embedding what is effectively a smart card chip into its chipset (linked below), using the SC300 smart card architecture sold by ARM, and calls it the Security Processing Unit (SPU).
This is old news, but the significance is that a smart-card-enabled chipset, if done properly, following the standards set out by GlobalPlatform (GP) and using the latest recommendations from GP, would have much more effective security.
If the SPU follows the standards and designs of a smart card, it would be moderately tamper resistant. That means trying to glitch the PIN code mechanism or biometric authentication mechanism hosted in the SPU would almost certainly be a no-go, as a power-line or logic glitch would certainly trip the tamper sensors in a typical smart card chip. If that is applied to the SPU (assuming it applies all the security of a smart card, including the tamper-resistance mechanisms), it could react in a variety of ways, ranging from locking up the chip to erasing all Security Parameters stored in it.
That being said, whatever I mentioned above is from the assumption that they actually embed all the security of a smart card into the SPU without taking shortcuts and leaving out critical security features.
The huge downside is that if the SPU truly is an embedded smart card chip in the 845 chipset, it becomes a much stronger black box, which is very dangerous to users' privacy, as who knows what it might be doing.
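The tamper reactions described above can be sketched as a small model. To be clear, this is illustrative only: the class name, thresholds and key names below are invented, and real GlobalPlatform-compliant parts implement this behaviour in hardware sensors and state machines, not application code.

```python
# Sketch of smart-card-style tamper response: on a detected power-line or
# logic glitch, the chip either locks up or zeroises its stored Security
# Parameters. All names and thresholds here are invented for illustration.
class SecureElement:
    def __init__(self):
        self.locked = False
        self.tamper_events = 0
        # Stand-ins for the Security Parameters a real chip would hold.
        self.keys = {"pin_key": b"\x01" * 16, "bio_key": b"\x02" * 16}

    def on_tamper(self, severity: str) -> None:
        """React to a glitch reported by a tamper sensor."""
        self.tamper_events += 1
        if severity == "severe" or self.tamper_events >= 3:
            self.keys = {}       # zeroise all Security Parameters
            self.locked = True   # and brick the chip
        else:
            self.locked = True   # lock up until a trusted reset

se = SecureElement()
se.on_tamper("severe")
assert se.locked and se.keys == {}
```

The range of reactions (lock vs. erase) mirrors the two outcomes the comment describes; which one a real part chooses is a vendor policy decision.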
To put it simply, if you trust your smartphone, you have to rethink your security model. In fact, almost all chips we use have some form of backdoor whether you like it or not, hence projects like TFC and the ideas behind the C-v-P debates.
Also, the beloved Raspberry Pi uses a closed-source Broadcom chipset with an ARM Cortex-A series core, and I would not be surprised if TZ is inside.
I would advise @maqp to move the entire Transmitter and Receiver modules to an STM32 or PIC32 chipset with a dedicated OS and firmware, because those are unlikely to be tainted by TZ, though who knows what other backdoors are inside.
For now, for the sake of convenience, RPis should be used only for experiments and demonstration.
hires ex-Apple engineers and moves them to countries where Apple can't prosecute them
Reason #307 no smart company hires people who aren't governed by local laws. And I have to imagine a company as large and rich as Apple has a very long reach no matter what country you run off to hide in. Remember the whole fuss in the aftermath of that lost iPhone prototype? Cellebrite had better be offering one hell of a paycheck to make up for the nightmare those engineers could be facing.
There is a UK common law theoretical legal argument that the persons behind goods and services produced overseas, where the UK is an intended receiver of those goods and services, could be held accountable in a UK court. This was also the judgement in a court case a few years ago relating to libellous and/or harassing comments hosted on a foreign website. This kind of action may be effective against the owners of websites deliberately hosted abroad or hiding behind foreign shell companies.
There is also forum non conveniens. Here is a paper outlining this principle across a number of jurisdictions worldwide.
Comparative Forum Non Conveniens and the Hague Convention on Jurisdiction and Judgments
>The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents
You know, the DMCA seems like it is used to keep security holes secret & unpatched, to bully researchers, enforce vendor lock-in, block competition, and now it allows the NSA to hack phones illegally... Copyright, eh?
There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts.
Maybe I'm just belaboring the obvious, but they have to bypass not just the total attempt limit, but also the variable delay of up to one hour that is supposed to be enforced by the Secure Enclave. That leaves only the 80 ms running time per attempt.
So the U.S. government's law enforcement authorities on the one hand enforce DMCA prohibitions against "unlocking" or "rooting" upon U.S. consumers but on the other hand pay foreign hacker groups (whether they be Italian, Israeli, etc.) to do precisely what U.S. "consumers" are forbidden from doing on their own.
There is really no "nice" way to describe this state of affairs in the light of the high-tech cartel's refusal to hire U.S. workers.
The United States' freedom of speech is the real monkey wrench in the tech giant cartel's inner workings.
It reminds me of the tech giants' mass "pink sheet" layoffs that began in the 1990s. Laid-off employees received diagnoses of cancer "coincidentally" with notices of termination of their health care coverage. Those with a "diagnosis" were unable to obtain new jobs at all: they were pinched between government-mandated employee health insurance coverage and the refusal of health insurance companies to cover "pre-existing conditions."
I have become so cynical about the whole game that I'd bet even money that it works like this.
Apple gives crypto keys to foreign government and then foreign government gives data to FBI. Apple then makes up a bunch of lies and spreads FUD to disguise what is really going on.
I just don't believe the story that this is all down to "ex-Apple" employees. Apple is smart enough to compartmentalize information so that ex-employees can't do much damage either individually or in small groups. If they don't, Apple's incompetence is indistinguishable from malice.
In fact, I am in a very worried state about these devices, but we live in a time where these decisions are made by those with resources and power, hence the power imbalance.
There are ways to get around these problems but the chances are getting slimmer.
Many of us have different ways of displaying our despair, and I have done so here too: I used to draw greeting cards mocking technology and post them online here. You can still find them with a search, but it wouldn't be easy.
And so yes, I have shown despair on this forum before, multiple times.
Hmm, the difference is that the total-count-erase-phone limit is optional, set in preferences, while the 1-hour limit is supposed to be hard-wired in the Secure Enclave, so it wouldn't be surprising if they could bypass the first and not the second.
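The two mechanisms the comments above distinguish can be sketched as a small policy model. The delay values below are the escalation schedule Apple has published for iOS (no delay for the first four attempts, rising to one hour from the ninth attempt onward); the ten-attempt erase is the optional, user-configurable part. This sketches the policy only, not the Secure Enclave's actual implementation.

```python
# Escalating passcode-attempt delays, per Apple's published iOS schedule
# (seconds of forced delay after the Nth failed attempt; illustrative).
DELAYS = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}

def delay_after(failed_attempts: int, erase_enabled: bool = False):
    """Return ('erase', None) or ('wait', seconds) after a failed attempt."""
    if erase_enabled and failed_attempts >= 10:
        return ("erase", None)          # the optional 10-strike wipe
    if failed_attempts < 5:
        return ("wait", 0)              # attempts 1-4 incur no delay
    # From the ninth attempt onward the delay is capped at one hour.
    return ("wait", DELAYS[min(failed_attempts, 9)])
```

Defeating only the erase counter still leaves an attacker facing one guess per hour; defeating the delay escalation too is what makes brute force practical.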
So what did they find when they unlocked it? Anything they did not know already? If you ask me, this is just an ad for the company, and is worthless until and unless they ever tell us how it was done. Also, from a business point of view, would it not be more profitable for Cellebrite to ask Apple for as much money as they want to show them how the phone can be hacked, so Apple can fix the flaw?
There is no app that does that. Seem like a silly question.
Whilst there might not be an app to do some or all of it, security wise it's a subject that has been seriously considered for well over half a century now.
If you have had the misfortune to work with certain levels of classified equipment then you will know about the "Destruction Kits" or inbuilt destruction devices. Some are actually explosive devices, others such as thermite or bulk acid break glass containers whilst sounding not as dangerous, are probably actually more so.
Back in the 1980s, AMD among others developed chips that would "zeroize" on command, and put them in their ordinary data books.
Such chips are still very much in demand by the military and communications security entities, but you need restricted data books to see what they are. However from time to time aspects of such designs make it into the open press.
Though with the old idea of "Poke to Explode" and the issues some phone manufacturers have had with batteries in the recent past, it might not take much to produce one. After all, Apple have claimed that is the technical reason for their "slow the old phone" update, which might or might not be true...
It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
That is a big "if".
Worse even if it's true today, it's likely not to be in the near future.
For their own sake, people need to start investing in other ways of doing things than using technology they can neither effectively audit nor control.
As has been pointed out in the past, whilst you may buy a mobile phone, you neither buy nor own the service provider's SIM, which means they control, and thus effectively "own", much of the phone's functionality.
Further the segregation between the phone and any smart device it hosts is becoming more ill defined and porous all the time. Which invariably means more vectors opening up.
It's a game the ordinary user can not enter into, thus the only option is not to play.
It's also becoming easier to see that various dystopian books and their authors' predictions are likely to come true (if they have not already). Thus the question: how far can things go?
Take it one step further, whatever they “found”, what would it be worth in court? I’m not talking of the 'fruit of the poisoned tree' issue, but would the defense, and more importantly the judge, buy it as “the truth”?
Is it evidence just because they say it is? Is it fit for vetting by the defense? Is it justice, or is it "because we can"?
After Bruce created this new topic I tried looking up the official documentation and commentary to discover what was possible. This rapidly turned into a byzantine timesink.
One question I have, after reading a stray comment on CPU design, is whether a custom architecture using currently non-mainstream designs can produce much higher throughput for analysis/decryption. I'm sure it can; I just don't know enough about the engineering to have an opinion. This raises the question of what is theoretically and achievably possible.
If you ask me this is just an ad for the company and is worthless until and unless they ever tell us how it was done.
Whilst it may well be "promotional" for the company, it is far from worthless to other people in the more general population. It will, for instance, give quite a few people "a word to the wise", so they may well change their habits.
Whilst I would not want certain types of criminal to "get wise", I suspect that some already have. That is, previous events would already have caused that; this would just be further confirmation.
However, the people that most concern me are those we won't get to hear about. I suspect it's quite likely that Cellebrite have other customers for their services.
Whilst we probably have no real evidence of who Cellebrite's customers might be, we know from past revelations that other organisations in the same or similar line of business were, shall we say, not as ethical as we might have hoped.
Thus hopefully the targets of certain countries authorities will get a heads up and act on it if they have not already.
To get from a password to an AES key you need some mechanism.
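The standard mechanism is a slow, salted key-derivation function. The sketch below uses generic PBKDF2 from Python's standard library; it illustrates the principle only, not Apple's scheme, which additionally tangles the passcode with a per-device hardware UID inside the Secure Enclave so that guesses must run on the device itself.

```python
# Generic password-to-AES-key derivation via PBKDF2 (illustrative; the
# salt and iteration count here are placeholder values, not Apple's).
import hashlib

def derive_aes_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Every guess must repeat all the iterations, so a high iteration
    # count makes offline brute force proportionally more expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

key = derive_aes_key("correct horse battery staple", b"per-device-salt")
assert len(key) == 32  # 256-bit key, suitable for AES-256
```

This is also why getting the ciphertext off the phone matters so much: with the hardware UID mixed in, the derivation cannot be replayed on a cracking rig.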
The method Apple used in the past, which came out during their argument with the DoJ, was "supposedly" not correctly implemented and thus became vulnerable to a hardware attack. As the DoJ case was not the slam dunk they expected, and started fizzling like an old-style cartoon bomb fuse in the DoJ's direction, the FBI then, as if by magic, allegedly found a company at the last moment so the DoJ could vacate before an adverse judgment, and thus precedent, was set by the magistrate.
As what actually happened never became public we do not know if it worked for real. Thus there are people who have their doubts.
But on the assumption there was a hardware attack that could be fixed in some way it is likely that Apple would have investigated fixing it, even though they might not have rolled it out.
One aspect that came out of the court case was the question of how secure the way Apple generates and protects the bulk of the AES key actually is.
There is a large list of things that Apple could have got wrong, and that list grows almost daily. Quite a few of those could give rise to a way to make a brute-force attack viable.
Thanks. Yes I remember this topic. It bothers me a little that current mainstream estimates of cracking encrypted data tend to align themselves with mainstream CPU capabilities rather than what is theoretically and reasonably possible to build with custom processors. I'm also not wholly convinced passphrase schemes, such as Diceware, have as high an effective entropy as claimed in practice. With a differently calculated baseline real world capabilities might be estimated more properly?
So the Mexican police in rural Chiapas, the Zimbabwean police, the Azerbaijani police, the Saudi Arabian religious police, the U.S. CIA pretty much anywhere in the world, as well as GCHQ and the NSA, and a Scottish bobby who just wants to look up information on his daughter's new boyfriend.
Past examples are easier to come by because the current ones haven't been quite so well publicized: the Tsar's Okhrana, the Iranian SAVAK, J. E. Hoover's FBI, the East German Stasi...
I continue to be staggered by the apparent assumption that the cops are always the good guys.
This applies to Cellebrite's sales. This applies to the "law enforcement backdoor" various governments keep asking for. This applies over and over again, and yet they keep trotting out the sheriff of Mayberry as if that's the only policeman that ever was. As any journalist in any war-torn country will tell you, working out exactly which side is "the law" is a highly dubious exercise.
I'm also not wholly convinced passphrase schemes, such as Diceware, have as high an effective entropy as claimed in practice.
They don't for several reasons.
The first as always is the human factor and the average human's inability to remember random items in random orders.
Thus if you get five words,
Horse, Battery, Staple, Journey, House.
You will try to find a way to remember them... thus people will:

1. rearrange the order,
2. remove duplicate words,
3. keep hitting the button,

until they get something they can more easily remember. With a moment's thought you can see how each of those behaviours reduces the size of the total passphrase pool.
What is open to debate is whether making the passphrase a natural language sentence, by the user adding connectives, increases or decreases the entropy. That is, when you take five words from, say, a set of one thousand words and rearrange them into sentences, is the size of the resulting sentence set larger or smaller? There is evidence to suggest that with humans at the wheel it gets smaller...
There are also other issues to do with what happens when you use predictive modelling. That is, if you create a model of common human behaviours and apply it to guess passphrases, what happens to the odds of the model guessing the supposedly random passphrase when other factors are known, like the actual passphrase length, or the typing cadence that often gets "given away" by poor coding choices, or your attacker happening to know which wordlist you are using? It's not difficult to realise that seeing the typing cadence and knowing the word list can make matching not trivial, but reasonably possible. Even without the wordlist certain things can be deduced.
Then there is the issue of what is and is not "truly random". Whilst people like the idea of Diceware, they don't like the effort of throwing dice... Think of it this way: a throw of one die gives as little as two bits of entropy. That is five throws for each word selection using a thousand-word list. You thus have to throw the die twenty-five times to get five words, which in all probability will only give you a little over forty bits of entropy after human behaviour, and a lot less if cadence information leaks.
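The arithmetic above is easy to check. A fair d6 actually yields log2(6) ≈ 2.58 bits, so "two bits" is a conservative round-down; the figures below assume ideal selection, before any of the human behaviours discussed here shrink the pool.

```python
# Back-of-envelope entropy figures for dice-based passphrase selection.
import math

def bits_per_die() -> float:
    # A fair six-sided die: log2(6) ≈ 2.585 bits per throw.
    return math.log2(6)

def passphrase_bits(words: int, wordlist_size: int) -> float:
    # Ideal entropy: each word drawn uniformly and independently.
    return words * math.log2(wordlist_size)

full = passphrase_bits(5, 7776)   # standard 6^5 Diceware list: ~64.6 bits
small = passphrase_bits(5, 1000)  # thousand-word list as above: ~49.8 bits
```

Note how the thousand-word figure of ~50 bits, once reduced by reordering, de-duplication and re-rolling, lands in the "little over forty bits" range the comment estimates.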
How many people do you think will really throw a die twenty-five times every month without a little cheating?
Thus a software version, using say AES in CTR mode, with an initial throw of twenty-five dice to set the secret counter value and a die thrown once or twice every month to give a little random walk to the counter value, might well give a higher equivalent entropy for an attacker to deal with, whilst giving the user less incentive to cheat.
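One possible reading of that scheme can be sketched as follows. The comment proposes AES-CTR; this stand-in uses HMAC-SHA256 in counter mode (the same keyed-PRF-plus-counter idea, but pure standard library), and the details of how the dice throws are folded in are my own illustrative choice, not a specification.

```python
# Sketch: seed a keyed PRF from an initial batch of dice throws, then run
# it in counter mode to pick passphrase words. Illustrative only; a real
# design would use AES-CTR and a carefully protected key/counter.
import hashlib
import hmac

def dice_to_seed(throws: list[int]) -> bytes:
    # e.g. twenty-five throws of a d6 become the secret seed.
    return bytes(throws)

def pick_words(key: bytes, counter: int, wordlist: list[str], n: int = 5) -> list[str]:
    words = []
    for i in range(n):
        block = hmac.new(key, (counter + i).to_bytes(8, "big"), hashlib.sha256).digest()
        # Slight modulo bias; acceptable for a sketch, not for production.
        idx = int.from_bytes(block[:4], "big") % len(wordlist)
        words.append(wordlist[idx])
    return words

seed = dice_to_seed([3, 1, 4, 1, 5] * 5)  # 25 throws set the secret
words = pick_words(seed, counter=0,
                   wordlist=["horse", "battery", "staple", "journey", "house"])
assert len(words) == 5
```

The monthly "random walk" would then just be bumping `counter` by one or two dice throws, so the user only rolls a couple of times a month instead of twenty-five.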
When left to my own devices I know my passwords have less entropy than they could and due to psychological weighting etcetera they have predictable elements. In theory I have, say, X bits of entropy while in reality I know I have Y or less. I suspect the predictable elements reduce entropy by a huge amount.
My endpoints are Swiss cheese so it's all a moot point.
@Clive, re throwing dice: I bought a kit of small coloured dice, picked out five of each colour, and put them in a small transparent box. To use it, shake vigorously for 10 seconds, then tap gently on the side just so the dice all lie flat. Then write down the numbers of all the dice of one colour, in order, as one would read a text. Quite a fast method.
Which marks you out as a person who both thinks and takes security more seriously than most (I actually use a 1pint "straight" beer glass with either two or a half dozen dice depending on what I'm doing).
Most people would not do that, as it's easier to cheat the system, because in their mind "What's the harm". And as @Bruce has pointed out their priorities are not aligned with security unless it makes their day to day life quicker or easier.
I once worked at a company where the marketing head honcho handed down an edict that you had to change your voice mail greeting every day, and that he would check the log to see it had been done... Let's just say the results were not good, as the phone system had never been designed to have a hundred or so people try to change their voice mail at more or less the same time.
My solution, knowing that the system only logged the number you called, was to check for messages rather than change my greeting. Because I did not use voice mail, I just used call divert to my mobile when I was away from my desk, and only changed my greeting when going to meetings or out to customers. I don't know what it did to the company phone bill, but it made me more productive ;-)
The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents.
If this is true, and Cellebrite is circumventing US DMCA law, then isn't it illegal for the FBI, or any other US government entity -- including state and local -- to do business (e.g. contract or procurement) with them?
A competitor to Cellebrite has emerged, offering similar services.
Given the attitude of these companies, I wonder about rights to privacy, and whether state agencies could be sued for negligence for knowingly allowing citizens' data to be insecure, or for facilitating companies and ecosystems which place that privacy at risk. I also note that legislators are much slower with regard to enabling citizens' safety online in a civil and criminal context, with many enforcement agencies either "no criming" complaints or putting inadequate resources in place.
Mysterious $15,000 'GrayKey' Promises To Unlock iPhone X For The Feds
As the EFF said of the Cellebrite iPhone revelations, when vulnerabilities are kept secret, everyone is walking around with weaknesses in their devices that could be exploited by anyone, whether governments or criminals.
Cellebrite doesn't agree with that line of thinking. In a recent interview with Forbes, chief marketing officer Jeremy Nazarian said it was necessary for the flaws to stay secret so the tools remained effective and law enforcement could gather evidence from devices.
In a recent interview with Forbes, [Cellebrite] chief marketing officer Jeremy Nazarian said it was necessary for the flaws to stay secret...
There is a saying about the difficulty of making a man see a different point of view when his income depends on his not doing so... Basically he is saying "Don't break my rice bowl"...
The fact that he ignores the truth of the other point of view, and tries to excuse it, shows just how dishonest Mr Nazarian can be. Just like the FBI, he has a felonious view of NOBUS. The simple fact is that now it's known there are flaws in the Apple software, many others will be encouraged to go searching for them, with a high likelihood they will be found; thus NOBUS goes out the window...
But as already noted by others, Cellebrite is in breach of the DMCA, and if the other stories about transporting ex-Apple engineers to do the same are true, that becomes conspiracy; it just gets worse and worse... But hey, "they are golden" because they are helping those who think they are "The Good Guys", which of course leaves the question of what other undesirables they are helping.
Yes, I have noticed the kind of broad phenomenon you describe in UK local government and healthcare, not just in private business and whatnot. The same claims of secrecy and "exceptionalism", and the same muddying of the grace-and-favour waters, exist there.
10,000 preventable deaths a year soon adds up. After one decade this is the equivalent of a major war.
It's funny you should say that; speaking of synchronicity, I was today discussing those who formerly resided in Richmond House and some of the nonsense the man in charge of it has been up to.
Sometimes it's difficult to Hunt out the truth, even what Leeds to it, but from what I'm hearing it looks like we can expect another major ICT incident again. I know the problems have a long history which crosses party divides, and both the major players appear to have been after kickbacks into party coffers and the like that a more sensibly devolved system would not have given them.
Unfortunately though, it appears that the rush to go "Digital" has meant patient records have been "off-shored" without some necessary procedures in place (no surprise there). The most immediately visible effect is that there are errors and omissions that are endangering lives. But it appears that these "damaged" records are considered a "goose that will lay golden eggs" if marketed to interested parties...
It's been pointed out to me what some of the secondary effects of those errors and omissions may mean in terms of pharmacology and insurance, their development in the UK, and the attendant mortality rate...
The problem, of course, as I said, is that it's difficult to Hunt out the truth of it. Past scandals involving those who have to take blood products to survive indicate the scale of the difficulties we might end up facing within a few decades at most. No doubt at some point the need to vacate Richmond House so a "mini alcohol-free Parliament" can be built will be used as cover for misplaced records etc., to avoid making them available via courts etc.
I try to avoid dogmatism but it's difficult to push a matchstick through the roadcrash of fingerpointing gatekeeping ricebowls. My sense is the whole system is essentially badly designed. We need solutions.