Apple has announced that it plans to replace previous iPhone login credentials with facial recognition technology, both to unlock the iPhone and to access Apple Pay.
This should prompt some privacy and security concerns, but probably not the ones you’re thinking of. It’s not the TSA or the Deep State who are most likely to abuse it.
The depth and portability of current facial-recognition authentication may become a new bridge into insecure consumer applications, from online banking apps to devices that hook up to the Internet of Things.
Basically, with a newer facial recognition method about to become an industry standard after Tuesday’s Apple event, identity theft could take on a whole new face. And, down the line, it’ll make identity theft yet another thing only those with disposable income can afford to stop.
Feature-Based Replacing Image-Based Facial Recognition
There are two general technologies that perform facial recognition: feature-based and image-based. Image-based is old school, but it’s everywhere, like the facial-recognition cameras you see at a mall or bank.
At least as far back as 2001, and seemingly accelerated by concerns about terrorism post-9/11, the use of image-based facial recognition to identify individuals became popular, ostensibly to weed out terrorists and criminals. Today, approximately half of Americans are in an image-based police face recognition database.
Image-based recognition is what it sounds like: the algorithm either extracts measurements, such as the distances between facial features (the geometric approach), or statistically compares image values against stored templates (the photometric approach), to try to identify a unique face.
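To make the geometric side of that concrete, here is a deliberately simplified sketch in Python. The landmark names and coordinates are invented for illustration; real systems use far richer models and many more points.

```python
import math

# Hypothetical facial landmarks as (x, y) pixel coordinates extracted from two images.
FACE_A = {"left_eye": (102, 88), "right_eye": (164, 90), "nose_tip": (133, 130)}
FACE_B = {"left_eye": (101, 87), "right_eye": (165, 91), "nose_tip": (134, 131)}

def landmark_distances(face):
    """Geometric features: pairwise distances between landmarks."""
    keys = sorted(face)
    return [math.dist(face[a], face[b]) for i, a in enumerate(keys) for b in keys[i + 1:]]

def similarity_score(face_1, face_2):
    """Lower score means more similar geometry."""
    return sum(abs(d1 - d2) for d1, d2 in zip(landmark_distances(face_1), landmark_distances(face_2)))

print(similarity_score(FACE_A, FACE_B))  # a small number suggests the same face
```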
Here’s where the new iPhone is different: It will use a newer method called feature-based facial recognition. Feature-based facial recognition takes an initial set of measurements, at the millimeter level and sometimes subdermally. The sensor, in this case the iPhone’s high-resolution front-facing camera system, builds a model of the face, wraps it in code, and signs that code with a unique hash.
Each face, including your face, will have a hash ID.
This hash is stored in a secure enclave of the phone. But here’s the problem: If an individual can break into the secure enclave, they can take the representation of you and use it to be you, on any platform that relies on that same representation.
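As a rough sketch of that flow (not Apple’s actual implementation, which is proprietary, and ignoring that real biometric systems tolerate variation rather than requiring exact matches), the idea looks something like this:

```python
import hashlib

def face_hash(measurements):
    """Reduce a set of hypothetical millimeter-level measurements to a fixed-length digest."""
    serialized = ",".join(f"{m:.3f}" for m in measurements).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

# Enrollment: the device measures your face once and stores the resulting hash ID.
enrolled_hash = face_hash([62.104, 38.441, 51.730, 29.882])

# Authentication: a later scan is reduced the same way and compared to the stored value.
candidate_hash = face_hash([62.104, 38.441, 51.730, 29.882])
print(candidate_hash == enrolled_hash)  # True means unlock
```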
Feature-Based Facial Recognition Relies on Software Security
On the sensor side, then, feature-based facial recognition is extremely hard to deceive and easy to use. So what’s the issue?
The problem stems from the unique identifier, or hash, that’s generated from what your phone’s front-facing camera captures, according to whatever sensor configuration the recognition software expects. That hash is stored on your iPhone.
Feature-based facial recognition ultimately leaves access to all the good stuff on your phone stealable because of the math behind the tech. It relies on software to generate the hash, and on hardware and software to enforce security and protect access to that information.
As I’ve noted before, code is written by humans, and humans are flawed.
This is true for facial recognition, but also for other types of identity management. In fact, every identity management platform does this: it takes an input and frames it in software, based on mathematics. In this new system of identification, all authentication, regardless of method (voice, fingerprint, or facial recognition), reduces something about you to a string of bits, represented in a hash. That creates vulnerabilities, because software is stealable, manipulable, transferable, and portable.
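Put differently, whatever the modality, verification ends up as the same operation: compare a freshly computed string of bits against a stored one. Here is a toy illustration; the modalities, sample bytes, and the `authenticate` helper are invented for the example.

```python
import hashlib
import hmac

def digest(modality, raw_bits):
    """Any biometric input, voice, fingerprint, or face, ends up as a string of bits."""
    return hashlib.sha256(modality.encode("utf-8") + b":" + raw_bits).digest()

# Digests stored at enrollment (the raw sample bytes here are made up).
STORED = {
    "face": digest("face", b"\x01\x02\x03"),
    "fingerprint": digest("fingerprint", b"\x0a\x0b"),
}

def authenticate(modality, raw_bits):
    """The check is identical regardless of modality: does the new digest match the stored one?"""
    return hmac.compare_digest(digest(modality, raw_bits), STORED[modality])

print(authenticate("face", b"\x01\x02\x03"))  # True
print(authenticate("face", b"\xff\xff\xff"))  # False
```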
There are innumerable obvious privacy concerns in how this could play out, given the intimacy of the data and information we store within an identity-managed system. Think of “the Fappening” leak of iCloud photos, or, on the legal side, the increase in warrantless cellphone seizures in the last few years.
Security in Feature-Based Facial Recognition
There is no generic or universal formula for how feature-based recognition translates a face into its hash.
But, to get technical for a second: if you can get the source code that determines how the camera interpreted that face, you could work backward and reproduce a representation that compiles as if it came from a real person. As long as I can get the hashes to match, I can open any device on the Apple platform. And a determined attacker can attach hardware through a serial port and force the device to authenticate the hash.
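To show the shape of that attack in the toy model from earlier (purely hypothetical, and enormously simplified compared with a real device): if the stored hash leaks and can be injected past the sensor, the attacker never needs your face at all.

```python
import hashlib

def face_hash(measurements):
    """Same toy reduction as before: measurements in, fixed-length digest out."""
    return hashlib.sha256(",".join(f"{m:.3f}" for m in measurements).encode("utf-8")).hexdigest()

# What the device stores after enrollment.
stored_hash = face_hash([62.104, 38.441, 51.730, 29.882])

def unlock_via_camera(measurements):
    """Normal path: the camera measures a live face, then the hashes are compared."""
    return face_hash(measurements) == stored_hash

def unlock_via_leaked_hash(leaked_hash):
    """Attack path: present the leaked hash directly, bypassing the sensor entirely."""
    return leaked_hash == stored_hash

print(unlock_via_camera([62.104, 38.441, 51.730, 29.882]))  # legitimate user
print(unlock_via_leaked_hash(stored_hash))                  # attacker replaying the stolen hash
```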
In human English: If someone figures out how to hack your face, that is the least of your worries.
Increasingly, we’re using phones for everything from banking and company emails to unlocking the front door to our house. Feature-based facial recognition allows for an unprecedented depth of authentication across applications.
It’s not just Apple. In the context of connected homes, for example, Google is a platform where the same hash can be used across all of its devices.
As we see applications “speaking to each other” across platforms, we must question the security of the devices themselves. Are devices that are storing these keys secure enough for what we are giving them? If I’m giving my phone a key to my whole life, and the key can be shipped to anyone, should I feel comfortable? We use phone-based applications for everything from turning on the lights, to accessing our hotel room.
In the new iPhone, you will use facial recognition and a double-click of the side button to make an Apple Pay payment. If I want all of these features, am I giving up obscurity and trading security for convenience?
iPhone Versus the Rest
Apple’s secure enclave is a big differentiator between Apple and Android. Apple has a vertically integrated supply chain, its own ecosystem, a tightly controlled app store, and secure enclaves.
Apple is generally just way better at ensuring a uniform security standard.
Apple and Microsoft both use formal methods in key products that require security, an older, mathematically rigorous discipline that was once used to teach programming. The idea is to bring mathematical rigor to coding and pursue watertight security in the resulting code. It’s hard, because the math is unforgiving.
So, the reassuring story is that the secure enclave in your iPhone, a tiny, isolated processor inside the device, will check, re-check, and govern permissions: what the phone is allowed to do with your data.
In the iPhone’s case, no two applications can talk to each other unless they’re manufactured by Apple Inc. Apple has a zero-trust perimeter. Independent developers can get their box in the truck, but they can’t get into the truck itself. It’s not impermeable, as an Apple head of security engineering acknowledged last year at Black Hat, but this makes the iPhone a step above the rest.
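A rough way to picture that gatekeeper model is below. This is a toy sketch, not Apple’s design, and the class and app identifiers are invented. The point is that the secret never leaves the enclave object: apps can only ask it to perform a check, and every request is screened against a policy first.

```python
class ToySecureEnclave:
    """Toy model: secrets stay inside; callers get yes/no answers, never the key itself."""

    def __init__(self, stored_hash):
        self._stored_hash = stored_hash              # never exposed to callers
        self._allowed = {"com.example.payments"}     # hypothetical allow-list of apps

    def verify(self, caller_id, candidate_hash):
        # Permission check first: is this app allowed to request authentication at all?
        if caller_id not in self._allowed:
            raise PermissionError(f"{caller_id} may not use biometric authentication")
        # Only then does the comparison happen, inside the enclave.
        return candidate_hash == self._stored_hash

enclave = ToySecureEnclave(stored_hash="abc123")
print(enclave.verify("com.example.payments", "abc123"))  # True
try:
    enclave.verify("com.example.flashlight", "abc123")
except PermissionError as err:
    print(err)
```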
Meanwhile, the architecture of other phones, including Android devices, suggests that software controls on facial-recognition applications aren’t strong enough. Against a sufficiently sophisticated attacker, the phone itself will not enforce the security controls.
And if the iPhone’s move to facial recognition prompts other device makers to rely more heavily on feature-based facial recognition, as Apple tends to be emulated, it could create an easier point of access into connected devices around the world.
Essentially, if the new iPhone makes facial recognition an industry standard, cheaper Android phones with worse security could easily put your face—or the code behind it—in the wrong hands.
It all comes down to security design architectures. This mirrors and amplifies the differences in technology access and security that we see patterned in the rest of America: Those who have access to top-tier vendors and security-for-hire benefit from the inheritance of a series of business decisions that prioritize security and internalize the costs.
All of Apple’s sandboxed security measures, and its proprietary approach as a corporate strategy, come at a cost that the company hands down to the willing consumer in the form of a steeper price tag for iPhones and MacBooks.
Not all people can afford it and, in the future, that means not all people can protect themselves from having their identities stolen.
What Does This Mean for Vulnerable Communities in America?
In the past decade or more, Microsoft and Apple have invested in their security architectures to ensure that their devices and services maintain the cutting edge of security protocols.
They have vertically integrated and controlled their entire supply chains and ecosystems. They have also expended significant sums in court to fight vigorously against any compromise of their systems predicated on law enforcement or other governmental justification.
The fact that security is a white-shoe commodity should not be lost on us.
This extends far beyond iPhone and Android as examples. It will reverberate throughout the use of feature-based facial recognition, as it has with other forms of identity management.
And we are all stakeholders. No one has an option to opt out.
As the Equifax hack demonstrated, even companies with whom you pursued no deliberate relationship may have access to your data, let alone the ones you do accept into your life, and in increasingly intimate ways.
And, really, what technology is more intimate to us than our phones?
Try this thought experiment: Envision two American women, Emma and Trudy.
Emma is a senior lawyer at a firm and makes $450,000 per year. She has AmEx fraud protection, loan/title company insurance on the house she owns, and a few subscription security services like LifeLock. She shops at Neiman Marcus and owns an iPhone. She’s inheriting security as an artifact of wealth.
Trudy has a few young children and perhaps serves as caretaker for other family members like nephews or grandmothers. Let’s say her husband died in a war and she’s receiving survivor benefits of a military spouse and working as a nurse in the local hospital. She makes $43,000, rents from a weak title company, and shops at Target and TJ Maxx. She has a phone that came with the Cricket Wireless pay-as-you-go plan. Not only does she not have a nice AmEx with fraud prevention, she uses a prepaid Green Dot debit card that can be stolen like cash and doesn’t have monitoring or support. She isn’t a lawyer, doesn’t have a lawyer, and doesn’t know what her rights are as a consumer.
First, they have different access to security awareness. Second, they have differing amounts of agency to articulate or change how secure they are. Third, and most notably for current purposes, these two hypothetical actors inherit different levels of security as a function of socioeconomic status.
This is not an outcome that I believe Americans have said they are willing to accept.
The new iPhone’s facial-recognition software is the harbinger of a new arc in the era of stratified social goods disproportionately available—or unavailable—to Americans, depending on how much people can afford to protect themselves.
The views expressed herein are the personal views of the author and do not necessarily represent the views of the FCC or the U.S. Government, for whom she works.
—with additional reporting by Alex Kreilein