AI-powered facial recognition is coming soon to a store near you
A rise in the use of high-caliber facial recognition tools by UK retailers has sparked concerns from individuals and civil rights groups alike - and those surveillance tools have now gained an injection of AI.
Much like generative AI software such as ChatGPT, facial recognition has recently seen rapid, AI-fueled improvements. UK police have already made widespread use of the technology, drawing considerable criticism.
These services are already available to private establishments at affordable prices. As they improve thanks to the power of AI, it becomes increasingly tempting for businesses to use them for things like detecting shoplifting and identifying “undesirable” customers. If you’re sympathetic to the businesses in question, you may see no issue with this - but concerns have arisen about the expansion of AI-powered surveillance further into our lives.
A recent New York Times report described how Facewatch supplies facial recognition data and services to retailers that subscribe to it. If a shoplifter is detected (or anyone else the business deems undesirable), Facewatch flags every subsequent time it recognizes them and alerts the business, which can then decide whether to monitor the person or ask them to leave the premises.
If the shoplifter repeatedly attempts to steal, or attempts to steal something worth more than £100 ($131), they are put on a special list, compiled for each geographical area, which allows their data to be shared with other Facewatch-subscribed businesses nearby.
Who watches the watchers?
This sort of dystopian public surveillance has not gone completely unnoticed. The civil society group Big Brother Watch has raised concerns, claiming that this is a disproportionate reaction in comparison to the relatively minor offense of shoplifting.
What’s more, if you end up on one of these lists, there’s essentially no way to find out why or how to appeal. Under Facewatch’s system, once a person is added to the list, their name remains there for a year. At the moment, nearly 400 businesses in the UK are using Facewatch, and that number is growing.
Although multiple levels of checks are carried out before an alert is sent, including a final human check by a trained “super recognizer,” Facewatch has admitted that mistakes do (very rarely) occur. It’s also unclear exactly how these “super recognizers” are trained, or what the point of AI assistance is if human staff still need to be involved.
Big Brother Watch has described the process as bringing “airport-style security levels” to the most mundane everyday tasks, like picking up groceries. The UK Information Commissioner’s Office, which regulates such privacy matters, investigated Facewatch and concluded that it could continue to operate. However, this was on the condition that the service changed how it operates; Facewatch had to revise the criteria for placing a person on a watchlist.
Always watching
Facewatch was founded in 2010 by Simon Gordon, owner of a 19th-century wine bar in central London, who initially funded the development of the software when he was trying to deal with pickpockets at his own establishment.
He has said that Facewatch has seen “exponential growth” and that it’s seeing significant interest from business owners internationally. He is looking to expand Facewatch’s client base to the United States - so don’t think you’re safe from this surveillance just because you don’t live in the UK.
To me, this is a pretty invasive use of AI tools - to the point where I would actually call it misuse. It would discourage me from shopping at a particular establishment if I had to consent by default to potentially being tracked by spy-tech-level surveillance.
Besides, we can now shop online without worrying about such intense physical monitoring and the potential further misuse of our biometric information. The European Union seems to agree with me, as it’s currently drafting strongly-worded laws addressing these kinds of scenarios and the use of AI-powered tools.
That said, in an era where AI technology is worming its way into every aspect of our society, we have to deal with whatever ethical quandaries arise. With the TSA now making widespread use of AI-aided facial recognition, not unlike the UK police, and without any legislation reining it in, I don’t see why US businesses wouldn’t embrace it - especially as prices continue to fall and the accuracy of the tech continues to improve.