It was the week that big tech slammed the brakes on development of facial recognition systems.
On Tech Tent we ask whether it is enough to stop police using these systems – or whether there needs to be a wider look at the implications of this technology.
The rapid adoption of facial recognition systems by law enforcement agencies around the world has been challenged by civil liberties groups, which allege the technology is not fit for purpose – and, in particular, is poor at accurately identifying black people.
But it is only this week, in the wake of the protests over the killing of George Floyd, that the tech companies have taken much notice.
First, IBM – not a huge player in this field – announced it would stop selling facial recognition software. Then Amazon imposed a one-year moratorium on police use of its Rekognition system, pending regulatory action by Congress. Finally, Microsoft said its facial recognition system would not be sold to police departments until there was a federal law governing the technology.
Amazon’s case is probably the most significant, and Tech Tent hears from a former employee who expressed concern about the deployment of facial recognition while she worked as an engineer at the company. Anima Anandkumar, now a professor of computing at the California Institute of Technology, says the Rekognition system – which allows a police officer to compare a photo taken on their smartphone with a database of known suspects – was deployed before it was ready:
“That’s not how we should be launching products that are not what I call ‘battle tested’ into the real world. Researchers like me wanted to take a slower approach.”
She says there is a wider problem of bias with artificial intelligence products that businesses are eager to rush to market: “We have highly imbalanced datasets to train them, especially for face recognition, that are heavily under-representing black women and men.”
Technology analyst Stephanie Hare has been taking an interest in the ethical issues around facial recognition systems for a while, and she warns that this is not just about police use of the technology: “Surveillance technology is big business, and not just for police and security services, but the private sector.”
She says even if the problem of bias in the systems is ironed out, lawmakers will still need to confront some big questions about facial recognition: “It will change the very nature of power in our society. Who has this technology? Who can use it? For what purposes? Do you ever have a right to opt out? How do you opt out if it’s just being used everywhere, and it can be used on you without you even knowing it?”
And Prof Anandkumar also hopes this week’s events will provoke more debate about the commercial deployment of artificial intelligence systems. She wants “more accountability, transparency, more privacy rights for all citizens…face recognition is just the beginning. There are so many other problematic uses of AI.”
Around the world, other technology companies and law enforcement agencies may face more challenges over their use of facial recognition. In the UK, it was South Wales Police which pioneered the adoption of the technology, using a system developed by Japan’s NEC at a major football match and on the busy shopping streets of Cardiff.
That led to a legal challenge from one shopper, who alleged the system breached his right to privacy. In collaboration with the civil rights group Liberty, Ed Bridges sought a judicial review in what the High Court said was the first time any court in the world had considered the use of facial recognition technology. He failed, but later this month the case will come before the Court of Appeal.
Whatever the outcome, we can expect further challenges to the use of a technology which critics say should not be deployed until we have had a proper debate about its regulation.