Amid growing concerns over digital ethics, data privacy and algorithmic bias, more attention needs to be paid to the straightforward ‘harms’ that arise from digital technology use in society. Defining these as “adverse lived experiences resulting from a system’s deployment and operation in the world” (p.2), Renee Shelby and colleagues develop a comprehensive taxonomy of harms arising from algorithmic technologies.

The paper takes a sociotechnical approach – focusing on how harms emerge from the coming together of technical systems and social factors. In this sense, the authors remind us that the harms arising from any digital technology need to be seen in a relational manner – as the “outcome of entangled relationships between norms, power dynamics, and design decisions” (p.3).

Of particular importance is the propensity of technical systems to “encode systematic inequalities” (p.3). In other words, the design, development and implementation of technologies that many people might well experience as beneficial can nevertheless “adopt the default norms and power structures of society” (p.3) and therefore end up disadvantaging others. This differential impact is what Ruha Benjamin calls the ‘duplicity’ of technology. Just because you have always found technology X to be unproblematic does not mean that it is not highly problematic for others.

As Benjamin and many others have pointed out, those who are most likely to be disadvantaged by emerging technologies are those who are already in disadvantaged and vulnerable positions – i.e. ‘marginalised’ communities that face structural social exclusion in many aspects of everyday life. Thus, unless technologies are explicitly designed to address matters of equity, marginalised groups are likely to disproportionately experience these harms.

In short, we live in times when most digital technologies and digital systems work to reinforce and amplify existing social inequalities in a society. This is why the most vocal cheerleaders for increased digitisation are often those in privileged positions – white, middle-class, relatively well-off and relatively young. However, if you are someone who is black/queer/female/ageing and living in a society where it is already disadvantageous to be black/queer/female/ageing, then it makes sense not to expect that new and emerging digital technologies will necessarily work in your best interests.

These disparities are being increasingly acknowledged within computer science and technology circles – indeed, Shelby’s paper is rooted in the computing literature and written by a large team of authors affiliated with Google. Based on a scoping review of 172 articles and frameworks from computing research, these authors systematically examine how computing researchers tend to conceptualise harms in algorithmic systems. This results in the identification of five major types of sociotechnical harm:

i. Representational harms

This first set of harms relates to how social characteristics and social phenomena are represented in: (i) the data that is fed into algorithmic systems; and (ii) the data outputs that are subsequently produced. The main concern here is how data-based systems can limit the ways in which social phenomena are represented – forcing information about people, their behaviours and backgrounds into restrictive categories that do not accommodate a full range of positions, and often perpetuate socially constructed beliefs and unjust hierarchies about social groups.

In education, perhaps the most common example of this harm is the reduction of a student’s gender identity to the two categories of either ‘M’ or ‘F’. More expansive forms of categorisation will often sideline minority groups into catch-all categories of ‘Other’. These reductions might make statistical sense, yet perpetuate forms of stereotyping, essentialism and erasure. As Shelby concludes, these harms often relate to technology inputs and outputs working to “reinforce the subordination of social groups along the lines of identity, such as disability, gender, race and ethnicity, religion, and sexuality”.

ii. Allocative harms

This second set of harms relates to how data-driven systems can reach decisions that result in the uneven distribution of information, resources and/or opportunities to different social groups. These harms relate specifically to the use of algorithmic systems in areas that substantially shape people’s economic fortunes, material well-being and life-chances – such as housing, employment, social services, finance, education, and healthcare.

Commonly reported examples include algorithmic HR and hiring systems that favour job applicants who fit the narrow demographics of previously hired employees (e.g. white, male, college-educated). The past few years have also seen regular reports of financial credit decision-making systems that discriminate against applicants from particular demographics (e.g. denying home loans to black applicants). One high-profile educational case was the UK government’s 2020 examination grade decision-making exercise, which resulted in students from fee-paying schools being routinely awarded higher grades than equivalent students from government schools.

iii. Quality-of-Service harms

This third set of harms relates to the ways that algorithmic systems systematically fail to perform in the same ways (or to the same standards) for different people depending on their backgrounds and circumstances. Common examples of this include facial recognition systems failing to detect people of colour, or voice recognition software failing to detect particular accents and dialects. As Shelby notes, these harms involve algorithmic systems disproportionately failing “for certain groups of people along social categories of difference such as disability, ethnicity, gender identity, and race”. 

Anyone encountering such breakdowns on a very occasional basis might consider them to be only a minor inconvenience. Yet, Shelby notes how these failures and breakdowns are highly problematic for people who are regularly being mis-recognised, non-processed and declined. First is the considerable additional effort required to make the technology work properly. More significant are the adverse emotions and feelings of alienation produced by repeatedly not being recognised, not to mention the disproportionate loss of the technology’s benefits.

iv. Interpersonal harms 

A fourth set of harms relates to how digital technologies can adversely impact relations between people. In an immediate sense, these harms can stem from the use of digital technology in direct interactions between people – for example, various forms of technology-facilitated violence such as online abuse or digital domestic violence.

Elsewhere are the various ways that digital systems mediate interactions between people and institutions. Examples here include privacy violations – such as schools ‘spying’ on students’ use of laptops when at home, or software that results in non-consensual forms of data collection. These harms also include technology-driven forms of social control – such as students having to use specific platforms in order to take online exams, or school authorities algorithmically profiling online activities to identify students ‘at risk’ of course non-completion. As Shelby notes, these latter forms of interpersonal harms “emerge through power dynamics of technology affordances” between educational institutions and the individuals they serve.

v. Social system/societal harms

Shelby’s final set of harms relates to the ways in which digital technologies and systems can impact the development of societies – contributing to increased social inequalities and the destabilisation of social systems. These macro-level societal harms can take a variety of forms. Perhaps most familiar is the long-standing issue of ‘digital divides’ and digital inequalities, with significant social inequalities continuing to be perpetuated through successive waves of digital innovation and emerging technology development.

In addition, recent technology developments have led to new forms of societal destabilisation. One obvious case is how algorithmically-driven social media technologies now play a key part in the debasing of democratic politics (e.g. recent instances of election interference). Also noteworthy are recent trends in the digital production and dissemination of disinformation, misinformation and malinformation – all destabilising influences on societal and cultural cohesion.

Finally, Shelby also points to the various environmental harms associated with the production of digital devices and the processing and storage of digital data – not least the depletion and contamination of natural resources, the carbon-hungry nature of data storage, and damage to the natural environment through e-waste.

Discussion

Shelby offers a sobering range of diverse tech-related harms – bound up in everything from issues of identity politics to the spectre of climate crisis. Of course, while the taxonomy offers a neat overview of the different harms that can potentially arise from digital technologies, it is important to remain mindful that these issues play out in ways that are decidedly messy and often difficult to disentangle. People will often experience combinations of these harms at the same time, and as Shelby acknowledges, “there may be grey areas within and across harm categories, and multiple harms may occur in a single use case or system” (p.7).

Nevertheless, this framework can help focus discussions about the disadvantages and drawbacks of digital technologies beyond individually-focused ideas of ‘user’ harms and personal risks. Instead, all these different types of harm point to the deleterious impact of technology at the level of individuals, communities and society. 

Shelby suggests that these different forms of harm should be proactively factored into the ways in which new technologies are designed and implemented, as well as used to guide further research into how these harms play out in different sociotechnical contexts. Above all is the warning that these are not simply technical features (i.e. what programmers might see as ‘bugs’) that can be easily ‘fixed’ through better technological design or completely avoided through the development of new, deliberately ‘non-harmful’ technology. Instead, Shelby stresses that:

“… harms from algorithmic systems are sociotechnical and co-produced through social and technical elements, they cannot be remedied by technical fixes alone. They also require social and cultural change” (p.4).

**

Reference

Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla, N., Gallegos, J., Smart, A., Garcia, E. and Virk, G. (2022). Sociotechnical harms: scoping a taxonomy for harm reduction. https://arxiv.org/abs/2210.05791v1