Introduction

The past decade or so has seen the ebbing away of popular hopes that digital technology might prove to be an inherently ‘good thing’ for society. Looking back, the 1990s were tinged with a distinct wide-eyed optimism (if not utopianism) around the transformative possibilities of personal computing and the worldwide web. Similarly, much of the 2000s’ discourse around social media and web 2.0 was also imbued with a dogged sense of hope and enthusiasm. 

Nowadays, however, digital technology tends to be seen along more circumspect and suspicious lines. Ever since the Economist identified the beginnings of what it termed a ‘tech-lash’ in 2013, a distinct scepticism has come to settle over the digital technologies that shape our societies and lives. Public trust in companies such as Meta and Google is at an all-time low, while Elon Musk, Jeff Bezos and other titans of ‘Big Tech’ are treated by many with a distinct wariness. In short, we no longer live in times when it is assumed that digital technology is essentially a force for making the world a better place.

The need to better promote the societal value of digital technologies has not been lost on the technological classes – i.e. the IT industry, financiers, computer scientists, policymakers and others with vested interests in the digital complex. As such, we are now seeing growing talk of ‘Tech for Good’, ‘AI for Social Good’ and similar sloganeering that “loosely refers to technology developed or employed to advance human flourishing or for social purpose” (Powell et al. 2022).

‘Tech for Good’ has fast become a popular trope amongst technology developers, business leaders, policymakers and academics. National governments have begun to frame their AI policy positions in terms of ‘for good’, the phrase is proving popular among consultancy firms such as McKinsey, and corporations such as Google and Intel have seized on the approach as a leading business principle. The slogan is now attached to all manner of technology projects, initiatives and use-cases – from humanitarian uses of facial recognition through to AI-based means of preventing domestic violence. Running throughout all these cases is the underpinning inference that ‘good’ ends can arise from deliberate changes to technology design and/or technology business processes.

‘Tech For Social Good’ – more than just marketing spin?

At first glance, this framing of emerging technology might be dismissed simply as empty commercial sloganeering and marketing. As Magalhães and Couldry (2021) point out, the suffix ‘For Social Good’ certainly provides powerful Big Tech corporations with an easy means of disingenuously aligning their commercial interests with humanitarian ideals. 

Nevertheless, it is worth unpacking this discursive shift in more detail – especially the ways in which it reframes issues of power, private interests and the notion of public good. For example, as Radhika Radhakrishnan (2022) contends, underpinning the application of ‘Tech For Social Good’ in fields such as healthcare and education is the presumption that private sector technological interventions are now the most direct means of redressing poor quality public services (rather than directing time and effort towards improving the public sector itself). In addition, such interventions push the importance of capital investment in the form of digital technology, while devaluing the idea of investing in workforces and improved working conditions.

In all these ways, then, the idea of ‘Tech For Good’ is not just a benign addition to existing forms of public policymaking and societal improvement, but works to fundamentally change the nature of what is done, and who is doing it. Put bluntly, talk of ‘Tech For Good’ promotes the idea of developing and deploying ‘spectacular technologies’ which can push fields such as education, healthcare, humanitarian aid and other facets of public and civic responsibility “towards more technologized super-specialized futures, which are shaped and driven by private stakeholders” (Radhakrishnan 2022).

‘Tech For Good’ as a mis-reading of ethics?

In one sense, the continued promotion of ‘Tech For Social Good’ reveals some fundamental flaws in how technology is conceived by ‘big tech’ actors and associated groups responsible for developing and installing IT across society. As Alison Powell and colleagues (2022) reason, these narrowly-framed ideas of ‘For Good’ are rooted in highly limited understandings of technology ethics, values and virtue. This is evident in at least three ways:

  • a highly individualised conception of ‘virtue’ – where good ends are seen to arise from an individual entity (such as a tech company or individual UI designer) behaving virtuously. Thus, rather than ‘doing good’ being a collectively-achieved process, ‘good’ outcomes are seen to arise from individual efforts; 
  • a sense of ‘tech exceptionalism’ – where technology gives these individual actors freedom to act virtuously within social contexts that are otherwise usually experienced as heavily bounded and structured;
  • a linear ‘cause and effect’ conception of how to reach virtuous outcomes (doing X to achieve Y), with technology framed as a lever to achieve these virtuous outcomes.

What is left out of ‘Tech For Good’?

As with any critique, it is important to ask what is not being said (as well as what is being said) within the continued promotion of ‘Tech For Social Good’. All told, these limited conceptualisations of how ‘good’ might be realised within social contexts work to simplify and sideline a wide range of complexities inherent within the social problems that these tech projects set out to address.

One key concern is the way in which talk of ‘Tech For Social Good’ sidelines any consideration of the actual needs, viewpoints and understandings of the social groups and local environments that are having ‘good’ done to them. Here, it might be argued that ‘Tech for Social Good’ approaches tend to frame social issues in ways that reflect dominant interests, understandings and knowledges. More often than not, these reflect technocentric, technocratic, male, Western, positivist lines of thinking that reinforce socioeconomic power structures. Thus ‘Tech For Good’ projects tend to be notably limited in terms of what topics are deemed ‘good’ and worthy, what specific problems are focused on, and what outcomes are anticipated (and eventually celebrated).

Writing from a global South perspective, Radhakrishnan extends this point by drawing direct parallels between the rise of ‘Tech For Good’ in Indian healthcare and the historical emphasis placed on science and technology as instruments of the colonial state. Radhakrishnan contends that the application of AI and other data-driven technologies represents a post-colonial subjugation of the disempowered – appropriating the needs of the sick-poor, exploiting their lives, bodies and medical records as data to train AI algorithms, and therefore perpetuating the extractive logics of the data economy.

Conclusions

As with most aspects of ‘digital society’ debate, many of these concerns relate to the ways in which emerging technologies are entwined with issues of structural power and systemic disadvantage. More specifically, as Powell et al. (2022) contend:

“Put simply, the ‘good’ in ‘technology for good’ continues to mask the structural issues inherent in our societies that cause the problems that need to be fixed through technology”.

In this sense, the recent rise of ‘Technology For Good’ efforts might be seen as a variation on the ‘techno-solutionism’ that has pervaded the societal roll-out of digital technologies over the past 40 years or so. Indeed, current industry and policy enthusiasm around ‘Tech For Good’ certainly perpetuates an over-selling of technology-based social interventions in ways that limit how social problems are chosen and framed. It also seems to give rise to projects and initiatives whose outcomes are unlikely to challenge dominant interests and the overall status quo.

All told, we are well advised to challenge this limited conceptualisation of technology as a neutral and objective means of ‘doing good’. Instead, it is crucial to address the contexts in which ‘Tech For Good’ programs are being produced, and problematize the power relations and institutional settings within which these technologies are being applied.

Of course, it is not enough to simply be critical of such efforts, and the arguments developed by the likes of Radhakrishnan, Powell and others should not be taken as advocating that we give up altogether on the idea of technology being used to beneficial ends. Instead, as Radhakrishnan puts it, there is clearly merit in local communities, techno-activists and civil society continuing to consider “the challenge of what kinds of problems can be ethically solved using AI without exacerbating marginalizations”.

In continuing to think about what technology might be good for, it seems sensible to look beyond the big social issues that ‘Tech For Good’ programs usually tend to gravitate towards. Instead, the most impactful uses of technology are much more likely to be small-scale and locally appropriate. Perhaps most important is the need to look for ways to redress the limited extent to which the communities targeted by these ‘Tech For Good’ programs are currently given a say in determining how technologies are deployed in their communities and throughout their everyday lives.

As such, it seems far more realistic to approach the possible benefits of digital technologies as a collectively determined endeavour. This requires repositioning technology ethics as a collective practice rather than an individualistic process. As Powell et al. (2022) conclude, this involves “the juggling and juxtaposing of different individual positions in relation to unfolding collective concerns”. 

Above all, this implies the sustained and genuine involvement of vulnerable, marginalised groups in determining the focus and form of any technology-related interventions. This recognises that ethical outcomes do not arise from technology use in a linear, consequentialist fashion, but arise from “the fact that humans and non-humans are embedded in networks of interdependency that make possible our very existence and survival in the first place” (Powell et al. 2022). In other words, if tech is going to be leveraged for social good, this is something that everyone needs to be involved in.


References

Magalhães, J. and Couldry, N. (2021) Giving by taking away: big tech, data colonialism and the reconfiguration of social good. International Journal of Communication, 15: 343-362.

Powell, A., Ustek-Spilda, F., Lehuedé, S. and Shklovski, I. (2022) Addressing ethical gaps in ‘Technology for Good’: foregrounding care and capabilities. Big Data & Society. https://doi.org/10.1177/20539517221113774

Radhakrishnan, R. (2022) Experiments with social good: feminist critiques of Artificial Intelligence in healthcare in India. Catalyst. https://catalystjournal.org/index.php/catalyst/article/view/34916/28260