Facial recognition (FR) technology has come under a lot of fire. Good! But let’s not forget: ensuring FR is used properly is just one battle in the wider war for the ethical use of powerful technology.
Poor old facial recognition technology. It can’t catch a break. Rarely a week goes by without the emergence of some fresh scandal or the publication of new academic research belittling its competence and highlighting its various deficiencies.
Just a few days ago, the University of Essex announced it had found that the FR technology being trialled by the London Metropolitan Police was accurate only once in every five attempts. Let’s rephrase, in case any of the lunacy of that statement is lost: an astonishing 80% of the time, the Met’s proposed face-matching technology fails to…well…match faces. If it were ever used in the field, the result would be a surge in mistaken identities and needless arrests.
Shocking though this is, we must go a little further back, to February of this year, for the truly troubling material. It was then that Joy Buolamwini, a researcher at the MIT Media Lab, announced she had found that the accuracy of three leading FR solutions, produced by IBM, Microsoft and the Chinese company Megvii, dropped dramatically when analysing images of people with darker skin. More damning still, she found that the darker the subject’s skin tone, the less accurate FR became.
This is not simply a coincidental failing of a handful of poorly coded programmes. Racial bias is systemic across FR, as it is in many other technologies dependent on machine learning. ML algorithms are often trained on data that’s contaminated by prejudice. Not knowing any better, the algorithms treat skews in racial representation as empirical fact and reflect them in their decision-making. This is how bias becomes enshrined in the digital world.
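The mechanism is easy to demonstrate in miniature. The sketch below (Python, with every number invented purely for illustration) mimics a simplified face-verification system: similarity scores for an under-represented ‘group B’ sit lower than for the majority ‘group A’, as can happen when an embedding model is trained mostly on group-A faces, and the single decision threshold tuned on the pooled, majority-dominated data ends up noticeably less accurate for group B.

```python
import random

random.seed(42)

def scores(n, mean, sd):
    """Draw n synthetic similarity scores from a normal distribution."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Invented numbers for illustration only. Genuine pairs should score high,
# impostor pairs low. Group B's genuine scores sit lower, mimicking an
# embedding model trained mostly on group-A faces. B is outnumbered 9:1.
genuine_a  = scores(900, 0.80, 0.10)
impostor_a = scores(900, 0.40, 0.10)
genuine_b  = scores(100, 0.62, 0.10)
impostor_b = scores(100, 0.40, 0.10)

def accuracy(thr, genuine, impostor):
    """Fraction of pairs classified correctly at threshold thr."""
    hits = sum(s >= thr for s in genuine) + sum(s < thr for s in impostor)
    return hits / (len(genuine) + len(impostor))

# Tune ONE threshold to maximise accuracy on the pooled data,
# which is dominated by the majority group.
pool_gen = genuine_a + genuine_b
pool_imp = impostor_a + impostor_b
best = max((t / 100 for t in range(101)),
           key=lambda t: accuracy(t, pool_gen, pool_imp))

acc_a = accuracy(best, genuine_a, impostor_a)
acc_b = accuracy(best, genuine_b, impostor_b)
print(f"threshold={best:.2f}  group A accuracy={acc_a:.2f}  "
      f"group B accuracy={acc_b:.2f}")
```

Run it and the per-group accuracies diverge sharply, even though nothing ‘racist’ was coded anywhere: the bias lives entirely in the skewed data the threshold was tuned on.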
No one is accusing tech companies of deliberate racism. If they’re guilty of anything, it’s negligence. To be fair, some tech companies are making an effort to address the problem by building more diverse datasets, which should be freer from bias. They are also calling on governments to regulate the development of FR systems, encouraging better technology while maintaining a level playing field.
FR has a long way to go, though. And until its problems are resolved, it shouldn’t be allowed anywhere near the frontline of policing.
Our society should not be guinea-pigged. Nor should it be forced to beta-test biased and bug-ridden technology, especially when the consequences of these bugs are injustice and the violation of human rights. If we allow the government to deploy facial recognition technology because we believe it’ll make us safer, we must demand that it’s fit for use.
I chose my words carefully there. ‘If’ we allow it. We have a choice: to deny our government access to these technologies, and to lobby for laws that restrict their unfettered use by private companies. FR is just one example of a range of emerging technologies which governments could employ. It’s a microcosm of a wider problem we’re struggling to come to terms with.
While governments take great care to regulate the refinement of lethal weapons, the unleashing of new technologies that can scrutinise and oppress has galloped on with practically zero oversight. Legislative and legal recourse is pursued reactively, and rarely. When regulators do act, their preferred deterrent – minuscule fines – is feeble.
We can change this. But we must do so before it’s too late. With each technological advance, the few in control acquire more power over those in need.
At some point, the difference in power will become so great that the many will lose the option to regain the control they voluntarily relinquished. A popular revolution led by disgruntled professionals armed with sticks and staplers will be no match for the technological might of the state, or of whatever state/corporate hybrid prevails. Before that happens, we must be certain it’s where we want to end up. If it’s not, we must act.
The problem before us is obvious: We have too much faith in those with power. And too much faith in technological benevolence. Google runs the most expansive and invasive data collection programme in the history of the world. It reads our emails, records our search results, and knows our location. Why do we let Google get away with this? Yes, it makes our lives easier. But Google has also worked hard to paint itself as a benign actor. It wants no harm to come to its customers. Nor does it want to do anything untoward with its Fort Knox of intimate intelligence. And so we hand our data over.
Google is neither benign nor malicious. It is a company. It has two interests: profits and the growth of profits. Our economic fixation on growth may drive the likes of Google and Amazon down some unseemly but legally sanctioned alleyways. But that’s a discussion for another blog. The point is that we afford too much trust, or pay too little attention, to the powers that be. That trust is allowing them to consolidate power through innovation and unchecked experimentation.
Some might abide by the adage that if you’re innocent, you have nothing to hide. But the line between guilt and innocence changes. Homosexuality was illegal in the UK half a century ago. Before 1967, you could be found ‘guilty’ just for loving another person. Laws are not set in stone. Nor are they anchored to some great, inalienable morality.
We would be wise to remember this, especially now that the nefarious forces of populism are on the rise. Political instability is becoming the new norm. Change may happen sooner than we think. Imagine, god forbid, the Brexit Party takes control of Westminster, and Nigel Farage becomes Prime Minister. Would you trust an administration propelled by bigotry with the technological tools capable of widespread oppression?
It is, thankfully, an unlikely scenario. But contemplating extreme scenarios provides a valuable acid test for these critical decisions about our future.
Technology is only starting to interact and intertwine with our way of life. As we climb further up Moore’s increasingly steep curve, we are being confronted by bigger lessons to learn, and more important decisions to make. We cannot sit back and expect our needs to be seen and met out of the goodness of corporate or bureaucratic hearts.
Peter Parker’s Uncle Ben famously said: “with great power comes great responsibility”. But responsibility does not necessarily follow power. Responsibility must be forced upon the powerful by those with the courage to do so.
The power is coming. The question is, are we ready to make sure it’s wielded responsibly?