We are midway through a four-year research project comprising a series of case studies that explore the roll-out of facial recognition technology (FRT) and facial detection technologies into everyday settings.

Academic discussions around FRT are wide-ranging, with most critical scholars understandably concerned with the significant capacity that this technology has to discriminate against, oppress and generally disadvantage people who already face discrimination, oppression and disadvantage.

Yet, when we examine the implementation of FRTs in situ, the most immediate challenge is often that the tech simply does not work as intended … and certainly does not work to the extent claimed by hype-heavy industry actors pitching to largely AI-credulous markets.

The over-selling of AI technology is already well documented. In 2019, an audit of over 2,800 ‘AI’ start-ups in Europe concluded that two-fifths of these firms were not making any meaningful use of artificial intelligence in their products.

This was also the case with Banjo – a ‘panopticon’ surveillance system used by the State of Utah to provide early warning of unfolding public safety incidents. Here, a 2021 public audit into potential algorithmic biases forced Banjo to concede that its software “does not use techniques that meet the industry definition of artificial intelligence”.

Thus, it is perhaps unsurprising that our own studies of ‘AI’-driven facial detection software used to monitor online examinations in universities found these systems to be dismissed by Computer Science professors as ‘buggy’, and acknowledged by college administrators to be primarily a symbolic deterrent rather than a serious, substantive means of detecting malpractice. Reports soon surfaced from test-takers of proctoring systems failing to recognise faces without excessively bright light, misrecognising objects, and generally being easy to fool and circumvent. All told, we found strong parallels between the roll-out of this facial detection tech and what Bruce Schneier has termed “security theatre”.

The question of what an emerging technology is actually capable of doing (as opposed to what its vendors claim it is capable of) therefore needs to be foregrounded much more prominently in critical discussions of tech. As Meredith Broussard puts it, we need to move away from fixating on the ex-ante claims of the software industry and only turning our attention to the limitations of its technology products much later down the track:

“A lot of the problem we run into with AI is that people make dramatic claims about what the software can do (ex ante claims) and then the analysis afterward (ex-post) reveals that the claims are false”.

Of course, there is real value in social scientists continuing to grapple with the potential misuses and broader logics implicit in the sociotechnical imaginaries and ‘futures’ thinking that drive the design and development of emerging technologies. Yet tech critics should not be distracted by the speculative qualities attached to these technologies at the expense of engaging with their actual substance. As Deb Raji observes:

“Even AI critics will fall for the PR hype, discussing ethics in the context of some supposedly functional technology. But, often, there is no moral dilemma beyond the fact that something consequential was deployed and it doesn’t work”.

Put bluntly, a technology not working will usually lead to a range of significant breakdowns, disjunctures and harms that often go unreported in critical accounts of technology, yet require our close attention.

For many people, a new technology suffering a temporary glitch or breakdown is not a massive problem. In many cases, the consequences of a new technology not doing what it is supposed to do might be presumed to amount to little more than the mild inconvenience of having to revert to previous ways of doing the thing that the new technology was purported to do.

Yet, such inconveniences can be decidedly more significant when the pre-technology ways of doing something have been neglected – or jettisoned altogether – in favour of the technological solution. For example, your fancy apartment block with newly installed facial recognition door locks might no longer employ a lobby porter 24/7 to let you in when the new system fails to recognise your face, as they would have done previously when you mislaid your keys.

Then, of course, there are the real-life consequences of not being processed, being mis-classified, being omitted from records, or generally being rendered unrecognisable by the system. This might only be a fleeting discomfort for some, but a significant problem for others. People can be made bankrupt, arrested, convicted, or lose their homes and livelihoods, all on the basis of faulty tech systems and the broader unjust decision-making processes that they feed into.

The inconveniences and harms associated with emerging tech not working properly clearly depend on who you are and on your wider life circumstances. High-profile push-back against landlords surreptitiously installing facial recognition door-entry systems in rent-stabilized apartments in Brooklyn shows how poorly functioning technology coalesces with broader class- and race-related forms of control and suppression. What might be an occasional inconvenience for upper-middle-class apartment-owners in Manhattan takes on a completely different tone for less wealthy populations of colour.

Alongside these ‘end user’ issues, we should also remain mindful of the considerable amounts of ‘behind-the-scenes’ human labour that buggy and glitchy systems incur – as evidenced in the thankless maintenance and repair work carried out by the human supervisor of any automated ‘self-service’ checkout area of your local supermarket.

In practice, many of the AI-driven ‘automated’ facial recognition technologies that we are studying actually rely on substantial amounts of human labour, as people are forced to ‘work around’ the fact that these systems do not always function as promised. This chimes with Astra Taylor’s description of the charade of ‘fauxtomation’ – with Taylor noting that this work is often under-paid (or not paid at all), carried out by women, not valued by those investing in these expensive technologies, and “reinforcing the illusion that machines are smarter than they really are”.

All told, tech critics need to pay at least as much attention to the likely harms of emerging technologies not working as to the more distant prospect of them actually reaching their promised heights. Of course, it remains important to consider what might happen if the implementation of any emerging technology were taken to its logical conclusions. Yet, we need to be well aware of the likely large gap between the rhetoric and the realities of new technologies – any logical conclusions are likely to remain imagined and speculative for a long time to come.

Thus, alongside conceptually provocative questions of imaginaries and potential shifts sit more prosaic, practical questions to be asked of any emerging technology such as facial recognition and facial detection:

  • In what ways is this technology likely to fall short of the claims being made about its capabilities? 
  • What compensatory actions and practices are likely to emerge as a result?
  • What consequences are likely to arise from this technology not working? 
  • How are these consequences likely to differ for different people in different contexts?