It is difficult to refute the argument that dominant forms of contemporary AI enforce limited ways of knowing and acting – more specifically, ways of knowing and acting that serve to reinforce dominant interests. As critics have well noted, the forms of AI now coming to prominence are products of very particular sociocultural environments rooted in values that are typically white, patriarchal, heteronormative, ableist and so on. Less well noted, perhaps, are the ways in which AI is grounded in Western perspectives. As Williams and Shipley (2021, p.44) put it, this is technology that is perhaps better described as “artificial Western ethno-intelligence”. Indeed, Western values run to the heart of how AI is conceived, not least its grounding in modernist ideas of categorising and labelling, utility and efficiency, as well as the privileging of empirical knowledge over embodied knowledge (McQuillan 2022).

Critical interest is growing, therefore, in how we might work to promote alternate values, priorities and perspectives around AI. One particularly interesting aspect of these ongoing discussions is how AI might be reframed through the lens of Indigenous epistemologies, cosmologies, and ways of being and doing. For most proponents of AI in the global North, this brings familiar Western presumptions about AI into contact with what might be seen as unfamiliar (and perhaps confronting) concepts such as harmony with others, close kinship networks, deeper affinity with the natural world, and ecological understanding.

There is clear value in working to reconceptualise AI in such terms. One initial attempt to do so is offered by Luke Munn’s recent paper ‘The five tests: designing and evaluating AI according to indigenous Māori principles’, which applies the work of anthropologist, historian, and noted Māori leader Sir Hirini Moko Mead to current Western framings of AI technologies as they are starting to be applied across various societal domains. As Munn explains, Mead is famous for developing a widely-used set of ‘Five Tests’ which can be used to evaluate any new societal issue from an Indigenous perspective. It therefore makes good sense to apply these tests to Western conceptualisations of AI technology.

Test 1: Tapu

Mead’s first test is Tapu – broadly referring to anything that is considered sacred, set apart, strictly restricted and/or forbidden within a society. As such, these are fundamental cultural rules that any AI system designer needs to be aware of and work carefully to maintain. In other words, these are sacrosanct principles that need to guide all aspects of the development of any AI technology – not merely a set of rules that technology designers work to skirt around while attempting not to breach them.

To put this in terms of AI development, Munn gives the example of the deceased always having a high Tapu status in Māori culture. This means that the dead are always set well apart from the living. If we apply this principle to the development of facial recognition technology, for example, the common practice of training facial recognition systems on photographic datasets containing images of both living and dead people is in clear breach of Tapu.

In a basic sense, then, Tapu raises the imperative to develop AI models in ways that are culturally sensitive and culturally aware. Yet it also challenges typical Western computer science assumptions in which culturally sensitive content might be seen as justifiably co-opted in the name of efficiency and optimisation. The developers of facial recognition technologies tend to push strongly to train their models on as many known facial photographs as possible in order to refine the accuracy of their products. In stark contrast, the principle of Tapu asserts the need to place limits on which faces are included in such exercises.

This logic can be applied to other recent controversies around the ethics of training algorithmic models on existing data. Take, for example, the controversy surrounding reports of third-party sales of chat transcripts from a US online suicide-prevention service for teenagers. This data was reportedly being used to refine AI software being developed to guide emotionally-charged call-centre interactions. For the system designers, the original provenance of these conversations was of far less importance than their capacity to illustrate how human counsellors deal with emotionally-charged conversations. In contrast, a more principled approach to AI development based around Tapu might well have considered this wholly inappropriate.

Test 2: Mauri 

Mead’s second test that can be applied to AI is Mauri – the life-force, vitality and spark of life that exists within people and places. As Munn explains, once a living thing dies it loses its Mauri. In this sense, the development and implementation of AI technologies need to uphold (rather than endanger) this spark of life and vitality wherever it is found.

Again, this is a completely different slant on what constitutes an acceptable form of AI technology, and immediately foregrounds concerns around the impact of AI technologies on the vitality of families, communities and ecosystems. In terms of Mauri, these are all areas of care and protection that AI systems should not compromise, corrupt or endanger. Indeed, the Mauri test emphasises the deeply interrelated nature of these social and ecological aspects of life. Caring for someone’s family implies caring for the community to which they are connected. Caring for a community involves caring for the natural environments (such as lakes and forests) that sustain its lives and livelihoods.

Re-assessing AI in these terms, then, raises some very different conversations about the forms of technology that we might be prepared to accept into our societies. Do we want technologies, for example, that incur exceptional energy burdens, or that rely on data storage centres implicated in the mass consumption of water resources or the enforced seizure of community land? Do we want technologies that are designed to substitute for direct contact with family and community members – for example, night-time baby monitoring technologies that relieve parents of the task of listening out for signs of infant distress, or automated chatbots that provide elderly adults with some intimate contact and conversation?

Test 3: Take-utu-ea

Mead’s third test is Take-utu-ea. This is the principle of restoring balance – in other words, the need to address imbalances and resolve problems through an exchange of some kind. The three elements of this principle are: (i) take – the issue that must be identified; (ii) utu – the compensation that needs to be made to remedy this issue; and (iii) ea – the balance that needs to be restored.

In this sense, Take-utu-ea foregrounds the principles of accountability, taking responsibility for one’s actions, and acknowledging their consequences for others. In contrast to Western framings of AI concerns in terms of ‘Fairness, Accountability and Transparency’, the Take-utu-ea test raises the need for technology designers and developers to take meaningful and substantive responsibility for any adverse social outcomes and harms that result from their products.

This draws attention, for example, to the nature of the social and environmental harms that arise from AI use, and to the appropriate reparations that should accompany these harms. What level of compensation is appropriate for a prospective homebuyer who is erroneously denied finance due to racially-biased credit-rating software? Alternately, what form of reparation is appropriate when a content recommendation system consistently promotes the dissemination of disinformation, malinformation, misinformation and other digitally-driven destabilising influences on societal and cultural cohesion?

Take-utu-ea raises the need for mechanisms to hold AI firms properly accountable for the consequences of their products being deployed in societies, and for these required reparative responses to be enforced. In contrast to the current situation, where AI firms push for ineffectual forms of self-governance and limited state regulation, the Take-utu-ea test might be seen to imply the need for much more rigorous oversight and legislation, or perhaps even enforceable forms of community control. Crucially, these mechanisms would be seen as a prerequisite to the development of any technology – not a belated afterthought.

Test 4: Precedent

Mead’s fourth test is Precedent. This involves making decisions about the future based on historical knowledge, established institutions, and prior techniques. In this sense, Indigenous cultures all have extensive sets of traditions, common narratives and cosmologies, and will always try to look back to examples from the past in order to provide appropriate guidance for what to do next. Here, Munn points to the Māori use of pūrākau (traditional narrative) and other story-telling traditions as sources of guidance for the future.

Applying this approach to AI therefore stands in stark contrast to the aggressively ‘forward-looking’ and ‘disruptive’ mentality of much Western technology thinking – what Munn characterises as “now-focused amnesia”. It also points to the sovereignty of Indigenous knowledge, which cannot be “dominated, controlled, or co-opted by colonizers”.

This latter issue has come to the fore with the extraction of Indigenous text by AI companies seeking to develop training datasets for large language models. From the point of view of software developers, appropriating a large online corpus of text is seen as fair game in pursuit of training algorithmic models to mimic the recurring patterns of language use. Yet such ‘scraping’ of common narratives and shared language is seen as wholly inappropriate from an Indigenous perspective. In short, these cultural artefacts are not data points that can be extracted and co-opted for profit. Instead, they are a common living resource rooted in historical knowledge that is drawn on by Indigenous communities for their collective good.

Test 5: Principles

Mead’s fifth test is Principles – an additional ‘catch-all’ set of supplementary values and rules to help communities arrive at distinctive positions regarding decisions about new issues. These include the Māori concept of whanaungatanga, which relates to the obligations of kinship and relationships. They also include the concept of manaakitanga (the need to rise above personal grievances and politics), and the imperative not to damage the mana of the subject or user (i.e. their authority, status and spiritual power). Also of significance is the concept of noa – the opposite of Tapu – which highlights the need to ensure that any new issue is integrated into everyday life in a way that is appropriate to those who use it, so that it can become commonplace and no longer controversial. The final concept is tika – put simply, “whether something is ethically, culturally, spiritually, and medically right” (Mead 2016).

These principles add a number of different perspectives through which any evaluation of AI might be approached. Again, many of these are concepts and concerns that are not usually considered in Western discourses around AI. For example, in contrast to Silicon Valley enthusiasm for the ‘shock of the new’ and the idea of ‘move fast and break things’, the principle of noa raises the concern that any new AI technology should be designed to be as harmonious as possible with the social settings and established social practices into which it is intended to be integrated.

Elsewhere, the principle of whanaungatanga raises the importance of technology that supports and enriches our relationships with those around us – such as extended family, colleagues, and peers. Similarly, the principle of manaakitanga warns against technology that might lead to divisive and antagonistic outcomes. All of these issues have historically been peripheral concerns in the development of AI over the past ten years or so, and are only now beginning to be given serious attention. Moreover, the voices pushing such concerns tend to be those outside the dominant groups in AI development (often women and people of colour), and are routinely dismissed as lacking expertise. Such a situation would not occur in a world where AI was approached through Indigenous principles.

Conclusions

All these principles, values and understandings offer a distinct break with the current dominant framings of AI as pushed by Western IT industry and policy interests. An Indigenous framing of AI raises concerns around human dignity, collective interests and communal integrity, as well as framing impacts according to local norms. Crucially, these approaches also foreground the ways in which AI is entwined materially with the natural environment.

Seen through these lenses, then, much of the current Western push for AI appears dangerously unbalanced and removed from the needs of people and land. Seeing AI in Indigenous terms offers a stark contrast to Western imaginings of the complete AI-led ‘transformation’ of society, and to extreme visions of an omnipotent ‘artificial general intelligence’ and possible ‘singularity’. When set against the Indigenous framings outlined in Munn’s paper, such dominant AI rhetoric appears decidedly arrogant, hubristic, disrespectful and destructive.

Munn suggests two obvious ways in which these indigenous Māori principles might be taken further into the design and evaluation of AI. First, in the short term, these principles might be applied to the current design of AI technologies and tools – providing a set of questions that AI designers and developers can be encouraged to ask of their products-in-progress.

Second is the longer-term adoption of these principles as a fundamental basis from which to approach the future development of AI. As Munn notes, Indigenous principles are not something that can be attached superficially to the development of AI on an ad hoc basis. Indeed, raising the idea of an Indigenous perspective on AI runs the risk that computer scientists and developers simply engage in the tokenistic application of these values without any meaningful change in how AI is approached and developed.

Instead, Munn suggests that these principles (and others like them) need to be used as a basis from which to advance the sustained, fundamental reform of AI along decolonised lines – forcing the IT industry, policymakers and other drivers of AI to ground their actions and ambitions in larger questions of justice, inequality, and coloniality. This implies a de-emphasis on current valorisations of technological speed, scale, novelty and wilful disruption. Instead, it promotes an approach to AI that is “slower, more considered, and more considerate of life in its various forms” (p.70). In short, this reminds us that other forms of AI are possible, and that guardrails for achieving a better form of AI have been in existence for thousands of years.

References

McQuillan, D. (2022). Resisting AI. Policy Press.

Munn, L. (2023). The five tests: Designing and evaluating AI according to indigenous Māori principles. AI & Society. https://doi.org/10.1007/s00146-023-01636-x

Williams, D. & Shipley, G. (2021). Enhancing artificial intelligence with indigenous wisdom. Open Journal of Philosophy, 11(1), 43-58.