Pieter Verdegem – in the new edited collection ‘AI For Everyone? Critical Perspectives’ (available as an open access PDF) – makes a strong case for examining AI through the lens of power. He argues that while the past few years have seen a welcome increase in socially concerned talk of AI ethics, AI ‘for good’, AI harms and similar notions of (dis)empowerment, many of these conversations around the politics of AI have notably lacked any sustained consideration of power.

Verdegem acknowledges that power is a contested concept. We might see power in a positive sense, as simply the capacity to achieve and accomplish things. Alternatively, we might conceptualise power in a negative, ‘coercive’ sense: the ability to make people do certain things, and/or to prevent them from doing other things. In contrast to these common-sense understandings, Verdegem makes use of John B. Thompson’s (1995) typology of economic, political and symbolic power. Seen along these lines, then, the following forms of power are associated with the ongoing development and promotion of AI:

  • Symbolic power – power that derives from meaning-making and influencing the actions of others. This is certainly evident in what Verdegem terms ‘AI ideology’ – i.e. the ways in which human consciousness is manipulated to see AI as an important (if not inevitable) means of determining future forms of society and/or economy. Current dominant forms of AI ideology range from vague progressive notions of how AI might foster new forms of a ‘good society’, through to the sustained capitalist boosting of AI-driven work and economic growth. The capacity of particular voices and interests to drive societal understandings of what AI is (and what AI is for) is a key form of contemporary power.
  • Economic power – power that derives from accumulating resources for productive activity. In terms of AI, this form of power is exercised through ownership of the data infrastructure and computational power that underpin the development and deployment of AI systems. This form of power also lies in the expert data science and human resources involved in the development of AI. Clearly, this power is concentrated in the hands of specific IT industry and ‘big tech’ actors.
  • Political power – power that derives from having the authority to coordinate individuals and their interactions. This political power is evident in what Verdegem terms the ‘social practices’ of AI. These include the ways in which developers, programmers and engineers choose to design the classification systems, models and procedures that drive processes of machine learning, deep learning and algorithmic computation. Who gets to be involved in these design processes, and who gets to decide what is included/excluded from the development of these systems (e.g. how things are represented through data), are key forms of political power.

Approaching AI through these different lenses is a neat way of foregrounding the structural power imbalances that currently underpin the corporate development of AI technologies by ‘big tech’ and IT industry actors, their attendant financial and policy backing, and the technical elites that work for and/or invest in these corporate interests. At the same time, viewing AI in these terms is also a useful way of foregrounding alternative technosocial arrangements that might support more empowering conditions for those groups that do not currently benefit. This raises a number of questions. For example …

  • what might it mean to promote more societally just and fair messages about how and why AI should be finding a place in our digital societies? 
  • what might it mean to have data infrastructures and computational capabilities in public (rather than private) hands? 
  • what might it mean to democratise technical and data skills across social groups – building more diverse professions that are fully representative of women, people of colour, and people with disabilities?
  • what might it mean for those from minoritized backgrounds to lead key decision-making processes that steer the design and development of new AI technology?

Here, then, we reach the dénouement of Verdegem’s argument: the idea of ‘AI for Everyone’. He argues for the ‘radical democratization of AI’ – redistributing power in the field of AI in ways that challenge the power dynamics and structural inequalities implicit in current arrangements. Drawing on Erik Olin Wright, Verdegem advances three principles around which AI might be reframed and reimagined:

  • AI, and the benefits it offers, should be accessible to everyone – this reflects the egalitarian ideal that AI should be of benefit to all, including the idea of establishing AI that is of intergenerational benefit (i.e. of benefit to future generations), and also of environmental benefit.
  • AI, and the different services that are being developed, should also represent everyone – this reflects the democratic ideal that all members of society “should have a say about what type of AI is being developed and what services are being offered”.
  • AI should be beneficial to everyone – this reflects an ambition to reframe AI through ideals of solidarity and community. In other words, the development and deployment of AI must be organised along cooperative lines that ensure that all members of society can benefit. 

These are idealistic ambitions, and Verdegem acknowledges that the idea of ‘AI for Everyone’ might easily be co-opted by the AI industry in a similar manner to ‘AI for social good’ and other popular platitudes. Yet he argues that the three principles just outlined are imbued with qualities that are conspicuously absent from current articulations of ‘AI for good’ and AI ethics – i.e. ideas of genuine democratisation, and of placing the needs of humans and human society at the forefront of any discussions around the forms of AI that society might desire. In this sense, these principles reorient the forms of power associated with AI away from values of profit, domination, control and individual gain. Instead, they challenge us to reimagine AI in terms of relationships, mutual support, shared power, cooperation, collaboration, solidarity, and collective empowerment. These latter values all point to the possibility of a very different politics of AI. While Verdegem’s arguments might be ambitious (if not idealistic), they at least offer us a different set of targets to aim for, and serve as a reminder that other forms of AI and AI futures are possible.



Verdegem, P. (2021). AI for Everyone? Critical Perspectives. University of Westminster Press.