As Mercedes Bunz (2021) observes, we are surrounded by beta versions of digital technologies that have been sold to us as the finished product.
In other words, the prevailing business model across most of the IT industry presumes that developing digital technologies does not require test environments in which developers can extensively experiment and rigorously refine their creations.
Instead, it is presumed that the testing, experimentation and refinement of any new technology can take place in everyday contexts – with upgrades, ‘patches’ and ‘fixes’ being rolled-out when problems arise.
Indeed, we have become used – if not inured – to such problems arising: people killed by self-driving cars, wrongful arrests caused by racist image-recognition systems, and countless other less dramatic breakdowns and harms.
Yet, regardless of how severe the failure, all these instances are usually explained away as glitches, bugs and other teething problems … accompanied by forthright promises that the issues will be fixed immediately.
So why have we become happy to accept this reliance on what Marres and Stark (2020) describe as ‘street trials’ of potentially harmful technology? Why do we so readily accept digital systems and automated machines that can wreak havoc in our everyday lives?
In short, we should insist that any digital technology deemed suitable for implementation in a societal setting is properly tested, evaluated and scrutinised before it is ever let loose in the ‘real world’.
Bunz, M. (2021). AI and the calculation of meaning. Lecture, 7 June. https://www.youtube.com/watch?v=J-fEr_Fa6xI
Marres, N., & Stark, D. (2020). Put to the test: For a new sociology of testing. The British Journal of Sociology, 71(3), 423-443.