Much of the ‘controversy’ around AI stems from fundamental differences in how tech developers and tech critics see the world. How can we stop talking at cross purposes?
The past five years have seen artificial intelligence (AI) become an increasingly popular topic of conversation in education, with this popularity driven in equal measure by optimism and fear. As these technologies begin to play a prominent role in schools and universities, it is fair to say that public and professional understandings of ‘AI’ in education are being held back by different groups talking about the same technologies along very different lines.
In particular, there seems to be very little common ground between the views of those involved in the development, production and implementation of AI technologies, and those seeking to raise social concerns over AI ‘harms’, ethics and injustice. Much of the academic literature certainly points to a set of parallel conversations taking place around the topic of AI – with a fair amount of misunderstanding, resentment and self-righteousness on both sides.
This is especially the case when specific instances of ‘AI’ move out of the R&D phase and begin to attract more attention and scrutiny from broader audiences. It is perhaps understandable that computer scientists become irritated by what they see as ‘late-in-the-day’ criticisms of their work from ‘non-experts’. As Vassilis Galanos (2019) laments, many computer scientists have come to feel that AI innovation faces a continual challenge from ‘uncredentialed commentators’ and ‘expanding experts’ who perceive emerging forms of AI as an existential threat.
As such, Galanos’s arguments reflect a distinct sense of ‘them and us’ that is festering amongst AI computer science communities. For example, Galanos derides these ‘non-experts’ as peddling “a curious ‘counter-hype’ of critical stances towards AI”. The implication here is that anyone lacking sufficient understanding should stay in their lane, or else risk reaching ‘curiously’ misconceived conclusions. As Galanos continues, “while their expertise in other fields is undeniable, their knowledge of AI remains questionable”.
Even the most conciliatory AI specialists are aware of such schisms. Indeed, there is long-standing recognition that AI developers are a distinct expert group even in comparison to other areas of computer science. Elliott Soloway (a computer science professor whose students included Google co-founder Larry Page) went as far in 1990 as to suggest that the gap between ‘AI folks’ and other computer scientists was as pronounced as C.P. Snow’s famous description in the 1950s of the ‘Two Cultures’ gap in Western society between ‘science culture’ and ‘literary (non-science) culture’.
While ostensibly working in the same field, Soloway portrayed the AI research of the 1970s and 1980s as narrowly ‘driven by its own questions’ about the formal characterisation of representational schemes, and by a myopic focus on how best to develop effective truth maintenance systems. In contrast, Soloway saw little curiosity amongst his AI colleagues around issues ‘external to the computing mechanism’ – particularly what he described as “humans and their idiosyncrasies”.
This schism between people from very different backgrounds with very different interests and concerns can certainly be seen throughout the literatures now growing up around AI and education. The ‘AIED’ community might be characterised as concerned primarily with questions of technical efficiency – i.e. questioning how best to develop these technologies. On the flip side is a critically-minded community concerned primarily with questions of social justice, and often strongly questioning the very presence of these technologies in education.
This inevitably leads to distinctly different conversations taking place, all rooted in very specific approaches to making sense of what ‘AI’ is. Lazy observers might be tempted to stereotype this as ‘inhuman techies’ battling with ‘liberal snowflakes’. Certainly, some people involved in the development and selling of educational AI see push-back against ‘racist algorithms’ and protests against ‘robot teachers’ as bound up in broader trends of ‘woke’ digital activism and a ‘tech-lash’ against the machinations of the digital economy (see Roberts 2021).
Yet these schisms arise from significant – but often implicit – differences in the frames of reference being used to make sense of AI and everyday life. Here, for example, Peter Krafft and colleagues reason that AI developers see themselves as practically focused on immediate issues of improving the functionality of ‘technologies in use today’. At the same time, AI developers often discount social critics as distracted by abstract ethical concerns that “overemphasize concern about future technologies at the expense of pressing issues with ‘existing deployed technologies’” (Krafft et al. 2020).
In contrast, social critics might well see themselves as raising long-standing concerns that are grounded in historical precedents – not least what Birhane and Guest (2020) term the “stagnant, sexist, and racist shared past” of fields such as psychology, neuroscience and artificial intelligence. These fields now converge on the development of a current generation of digital tools that claim to infer characteristics such as gender and age, or to ‘detect’ students’ emotions, motivations and intentions. From this point of view, then, it can be argued that “the AI community suffers from not seeing how its work fits into a long history of science being used to legitimize violence against marginalized people, and to stratify and separate people” (Van Noorden 2020).
These very different outlooks on AI and education therefore mark distinct differences in ontology – in other words, in how one understands the social world to exist. As might be expected, there are plenty of computer science researchers and AIED developers happy to presume the world to be broadly quantifiable, calculable and subject to statistical control. Judy Wajcman (2019) describes this as an ‘engineering’ mindset that conceives social systems as essentially programmable machines that can be engineered to operate effectively given the correct inputs.
In contrast, there are many others who feel that these assumptions do not extend to education and people’s everyday engagements with the social settings of schools and universities. Indeed, referring to the tendency of AI researchers to see the societal impacts of their work in terms of ‘social engineering’, computer scientist Bettina Berendt points to the limitations that can arise from the ‘problem-solving mindset’ that underpins professional outlooks and practices within computer science. As Berendt (2019, p.62) acknowledges, “I suspect that the reductionist tendencies in conceptualizing [social] problems and solutions are stronger among computer-science engineers”.
So, where does this leave us? Are AI experts and their critics destined to continue to talk at cross-purposes? While this may seem a long-standing impasse, there are moves to bring both sides closer towards common ground. On one hand, there is growing recognition among critics that they need to develop more familiarity with, and at least a ‘working knowledge’ of, the technical basics of AI. Informed and knowledgeable critique comes from a basis of technical familiarity and computational awareness. For example, Wendy Chun (2008) – taking forward N. Katherine Hayles’s notion of ‘medium specific criticism’ – stresses the need for those working in the social sciences and humanities to work hard to properly understand how any AI technology functions and operates before attempting to critique it.
On the other hand, there are signs that some people working in the tech sector are beginning to embrace the growing critical push-back around AI issues. For example, we are seeing growing resistance from those working in the tech sector against the development of AI technology for military and security purposes, as well as concerns over the tendency for these systems to result in racial and gendered discrimination. Perhaps, too, those who normally focus on the technical challenges of educational AI will similarly become more aware of the social issues with which educational applications of this technology are associated.
REFERENCES
Berendt, B. (2019). AI for the common good? Paladyn, Journal of Behavioral Robotics, 10(1), 44-65.
Birhane, A. and Guest, O. (2020). Towards decolonising computational sciences. arXiv preprint arXiv:2009.14258.
Chun, W. (2008). Control and freedom. MIT Press.
Galanos, V. (2019). Exploring expanding expertise. Technology Analysis & Strategic Management, 31(4), 421-432.
Krafft, P., Young, M., Katell, M., Huang, K. and Bugingo, G. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, February (pp. 72-78).
Roberts, J. (2021). Woke culture: the societal and political implications of Black Lives Matter digital activism. In R. Luttrell, L. Xiao and J. Glass (eds.), Democracy in the disinformation age (pp. 37-57). Routledge.
Soloway, E. (1990). The techies vs. the non-techies: today’s two cultures. In AAAI-90 Proceedings, January (p. 1134). http://www.aaai.org/Papers/AAAI/1990/AAAI90-170.pdf
Van Noorden, R. (2020). The ethical questions that haunt facial-recognition research. Nature, November 20, https://www.nature.com/articles/d41586-020-03187-3
Wajcman, J. (2019). The digital architecture of time management. Science, Technology, and Human Values, 44(2), 315-337.