Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.
Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:
Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.
Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here -- which is among the most common methods currently used -- the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.
This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user's phone accesses the bank's legitimate online banking service.
This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.
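To make the distinction concrete, here is a minimal sketch of the kind of extraction heuristic such a feature might use. The regex, keywords, and message formats are my assumptions for illustration, not Apple's actual implementation:

```python
import re

# Hypothetical heuristic -- not Apple's implementation: treat any 4-8 digit
# group in a message mentioning a code-like keyword as an autofillable code.
CODE_PATTERN = re.compile(r"\b(\d{4,8})\b")
KEYWORDS = ("code", "passcode", "otp", "tan", "verification")

def extract_security_code(sms_text):
    """Return the first 4-8 digit group if the message looks code-like."""
    if any(k in sms_text.lower() for k in KEYWORDS):
        match = CODE_PATTERN.search(sms_text)
        if match:
            return match.group(1)
    return None

# A login OTP and a transaction TAN look identical to the heuristic:
print(extract_security_code("Your verification code is 482019"))
# prints 482019
print(extract_security_code(
    "Transfer EUR 9,500 to IBAN DE89370400440532013000? TAN: 771254"))
# prints 771254
```

Both messages yield a code, and the TAN in the second is offered to the user stripped of the amount and account number it was supposed to bind them to -- which is exactly the context Gutmann argues must be verified.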
Someone changed the address of UPS corporate headquarters to his own apartment in Chicago. The company discovered it three months later.
The problem, of course, is that in the US there isn't any authentication of change-of-address submissions:
According to the Postal Service, nearly 37 million change-of-address requests known as PS Form 3575 were submitted in 2017. The form, which can be filled out in person or online, includes a warning below the signature line that "anyone submitting false or inaccurate information" could be subject to fines and imprisonment.
To cut down on possible fraud, post offices send a validation letter to both an old and new address when a change is filed. The letter includes a toll-free number to call to report anything suspicious.
Each year, only a tiny fraction of the requests are ever referred to postal inspectors for investigation. A spokeswoman for the U.S. Postal Inspection Service could not provide a specific number to the Tribune, but officials have previously said that the number of change-of-address investigations in a given year typically totals 1,000 or fewer.
While fraud involving change-of-address forms has long been linked to identity thieves, the targets are usually unsuspecting individuals, not massive corporations.
Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They've already used the tool to study a bunch of different IoT devices. From their blog post:
Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties
In many cases, consumers expect that their devices contact manufacturers' servers, but communication with other third-party destinations may not be a behavior that consumers expect.
We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:
Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook -- even though we did not sign in or create accounts with any of them.
Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest's website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.
Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.
Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.
We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.
Their first two findings are that "Many IoT devices lack basic encryption and authentication" and that "User behavior can be inferred from encrypted IoT device traffic." No surprises there.
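The destination-enumeration part of this analysis is easy to approximate at home. Here is a rough sketch using scapy, assuming you can already see the device's packets (say, by running it on the router); the device IP and interface name are placeholders, and this is not IoT Inspector's actual code:

```python
# Watch DNS queries from a single IoT device and collect the domains it
# resolves. Requires root; run with: sudo python3 dns_watch.py
from scapy.all import sniff, DNSQR, IP  # pip install scapy

DEVICE_IP = "192.168.1.42"  # placeholder: your IoT device's LAN address
seen = set()

def log_query(pkt):
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP) and pkt[IP].src == DEVICE_IP:
        name = pkt[DNSQR].qname.decode().rstrip(".")
        if name not in seen:
            seen.add(name)
            print(name)

# "eth0" is a placeholder; use the interface the device's traffic crosses.
sniff(filter="udp port 53", prn=log_query, iface="eth0", store=False)
```

Watching even a few minutes of DNS traffic from a single device is usually enough to reproduce findings like the ones above.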
Princeton's Karen Levy has a good article on computer security and the intimate partner threat:
When you learn that your privacy has been compromised, the common advice is to prevent additional access -- delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it's almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages -- but if you don't preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).
Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials -- like passwords and security questions -- are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you're in the same physical space. In many cases, the abuser might even own your phone -- or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life -- where you were born, what your first job was, the name of your pet.
On Friday, Vietnamese security firm Bkav released a blog post and video showing that -- by all appearances -- they'd cracked FaceID with a composite mask of 3-D-printed plastic, silicone, makeup, and simple paper cutouts, which in combination tricked an iPhone X into unlocking.
The article points out that the hack hasn't been independently confirmed, but I have no doubt it's true.
I don't think this is cause for alarm, though. Authentication will always be a trade-off between security and convenience. FaceID is another biometric option, and a good one. I wouldn't be less likely to use it because of this.
This is a good interview with Apple's SVP of Software Engineering about FaceID.
Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:
I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.
"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."
That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.
It's worth noting a few additional details here:
If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.
If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)
Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.
You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.
Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.
Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.
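Pulling those fallback rules together, here is a minimal sketch of the policy as described in the interview. The function signature and names are illustrative assumptions; Apple hasn't published the actual logic:

```python
from datetime import timedelta

def requires_passcode(just_rebooted,
                      failed_attempts,
                      since_face_id_unlock,
                      since_passcode_unlock):
    """Encode the fallback rules quoted above (names are assumptions)."""
    if just_rebooted:
        return True
    if failed_attempts >= 5:
        return True
    if since_face_id_unlock > timedelta(hours=48):
        return True
    # The combined rule: 6.5 days without a passcode unlock AND
    # 4 hours without a Face ID unlock.
    if (since_passcode_unlock > timedelta(days=6.5)
            and since_face_id_unlock > timedelta(hours=4)):
        return True
    return False
```

Those rules govern when Face ID is allowed to run at all; whether it can be fooled when it does run is a separate question.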
Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.
Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.
It'll be harder, but I have no doubt that it will be done.
"Traffic shaping" -- the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveiled -- is an NSA technique that has gotten much less attention than it deserves. It's a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.
This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term "traffic shaping" to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen's international communications from inside Yemen itself, the agency might try to "shape" the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.
The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.
Could the NSA use traffic shaping to redirect domestic Internet traffic -- emails and chat messages sent between Americans, say -- to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.
LyreBird is a system that can accurately reproduce someone's voice, given a large number of sample recordings. It's pretty good -- listen to the demo here -- and will only get better over time.
The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.
I don't think we're ready for this. We use people's voices to authenticate them all the time, in all sorts of different ways.
EDITED TO ADD (5/11): This is from 2003 on the topic.
ProofMode is an app for your smartphone that adds data to the photos you take to prove that they are real and unaltered:
On the technical front, what the app is doing is automatically generating an OpenPGP key for this installed instance of the app itself, and using that to automatically sign all photos and videos at time of capture. A sha256 hash is also generated, and combined with a snapshot of all available device sensor data, such as GPS location, wifi and mobile networks, altitude, device language, hardware type, and more. This is also signed, and stored with the media. All of this happens with no noticeable impact on battery life or performance, every time the user takes a photo or video.
This doesn't solve all the problems with fake photos, but it's a good step in the right direction.
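Here's a minimal sketch of that capture-time flow. I've substituted an Ed25519 key for the app's per-install OpenPGP key and stubbed out the sensor snapshot, so this is a simplification of what ProofMode actually records:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stands in for the per-install OpenPGP key ProofMode generates.
device_key = Ed25519PrivateKey.generate()

def sign_capture(photo_bytes, sensor_snapshot):
    """Hash the photo, bundle the hash with sensor data, sign the bundle."""
    bundle = {
        "sha256": hashlib.sha256(photo_bytes).hexdigest(),
        "sensors": sensor_snapshot,  # GPS, networks, altitude, etc.
    }
    payload = json.dumps(bundle, sort_keys=True).encode()
    return bundle, device_key.sign(payload)

bundle, sig = sign_capture(b"...jpeg bytes...",
                           {"gps": [40.35, -74.65], "network": "wifi"})

# Anyone holding the matching public key can verify the bundle is intact;
# verify() raises InvalidSignature if anything was altered.
device_key.public_key().verify(
    sig, json.dumps(bundle, sort_keys=True).encode())
```

A verifier with the public key can confirm the photo and metadata haven't changed since signing; as noted above, that's not the same as proving the scene itself was real.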
Mike Specter has an interesting idea on how to make biometric access-control systems more secure: add a duress code. For example, you might configure your iPhone so that either thumb or forefinger unlocks the device, but your left middle finger disables the fingerprint mechanism (useful in the US where being compelled to divulge your password is a 5th Amendment violation but being forced to place your finger on the fingerprint reader is not) and the right middle finger permanently wipes the phone (useful in other countries where coercion techniques are much more severe).
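A minimal sketch of the dispatch logic this scheme implies; the finger identifiers and actions are illustrative, and no shipping biometric system currently exposes such a hook:

```python
from enum import Enum, auto

class Action(Enum):
    UNLOCK = auto()
    DISABLE_BIOMETRICS = auto()  # fall back to passcode-only, as in the US case
    WIPE = auto()                # irreversible, for harsher coercion threats
    REJECT = auto()              # ordinary failed match

# Per-user policy, mirroring Specter's example.
FINGER_POLICY = {
    "right_thumb":  Action.UNLOCK,
    "right_index":  Action.UNLOCK,
    "left_middle":  Action.DISABLE_BIOMETRICS,
    "right_middle": Action.WIPE,
}

def on_fingerprint_match(finger_id):
    """Map a matched finger to its configured action."""
    return FINGER_POLICY.get(finger_id, Action.REJECT)
```

Defaulting unrecognized fingers to an ordinary rejection keeps a duress finger indistinguishable from a simple misread until its action actually fires.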