A driving force behind the development of artificial intelligence over the past 70 years has been military funding – more specifically, research and development funding from the US Department of Defense and national security agencies. The first decades of AI research coincided with the Cold War, as the US and its allies strove to gain any possible advantage over the Soviet Union and its allies. Against this background, there was plenty of military enthusiasm for pumping funding into the development of automated tools that could process vast amounts of data to mechanise decision-making, pattern-matching, object recognition and prediction. This established AI as a field built around the core aims of doing these things at scale and speed, with high levels of precision and efficiency.

On one hand, many AI researchers would argue that these sources of funding have made little difference to the work they have been able to carry out since the 1950s – allowing them to focus on wide-ranging problems that have resulted in plenty of civilian-facing technologies. Indeed, defence funding initiated the development of AI that sits beneath everyday tech such as Apple’s Siri voice assistant and autonomous cars. As Philip Agre (1997) put it when looking back on the first few decades of AI research, “if the field of AI during those decades was a servant of the military then it enjoyed a wildly indulgent master.”

Yet this aspect of AI development should not be glossed over so easily. The substantial military funding that continues to flood into AI research and development does not come free of agendas or influence. Indeed, defence imperatives and logics have clearly shaped the problems that AI researchers have found themselves addressing, as well as the types of AI that have subsequently been developed. Military and security funding have likewise shaped the characteristics of this technology, the wider purposes it can be put to, and many of the underpinning norms that have come to define the AI community.

Thus, it is perhaps not surprising that AI technologies developed to satisfy military priorities such as ‘battlefield management’ and overcoming “obstacles to military dominance” (Widder et al. 2024, p.18) are authoritarian in nature, relentless in their compulsion to extract, capture and process data, and largely unconcerned with unintended consequences, wider harms, or issues of fairness and social justice (all outcomes that might be justified in military terms as collateral damage). Similarly, it is perhaps not surprising that AI technology developed to strengthen long-term US national security priorities is well suited to targeted surveillance and threat detection, risk prediction and monitoring, and the hierarchical ordering of command and control.

These are all qualities of AI that we will have to bear in mind when thinking about the forms of AI now being implemented in education contexts – AI technologies that were developed for decidedly non-educational contexts and concerns such as battlefield situational awareness, distributed warfighting and human-machine teaming. For example, what does it mean to use intelligent tutoring systems with young children when those systems actually originated in efforts to provide on-demand tuition to service personnel who might need to quickly learn how to operate unfamiliar equipment on a remote battlefield, or fix a ship’s propulsion system in the middle of an ocean? What does it mean to use emotion AI in classrooms when that technology was originally developed to model the mental states of military personnel – monitoring a soldier’s mood, concentration and engagement with the task in hand (a task more likely to be attacking battlefield adversaries than learning elementary school math)?

Of course, it could be countered that the development of AI is prohibitively expensive – well beyond the budgets of education departments and school districts. In this sense, military funding of AI has resulted in technologies that would otherwise never have been developed, let alone reappropriated in education settings. Yet, from another perspective, the military origins of AI could be seen to have resulted in a form of technology that comes with specific logics and inclinations that many might feel to be wholly inappropriate for supporting learning, teaching and human development. One would hope that even the most authoritarian educator would be willing to concede that classroom management is not wholly equatable with battlefield management.

While there is growing concern that over-reliance on commercial AI is bringing corporate logics into our classrooms and schools, we should not overlook the militaristic nature of this technology. What does it mean to be making our schools, universities and other forms of education reliant on technology that continues to be “implicated in the project of US military hegemony” (Widder et al. 2024, p.3)?

REFERENCES

Agre, P. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. Bowker (Ed.), Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work (pp. 131–158). Psychology Press.

Widder, D., Gururaja, S. and Suchman, L. (2024). Basic research, lethal effects: Military AI research funding as enlistment. https://arxiv.org/pdf/2411.17840