This is a transcript of the first of two ‘Meet The Education Researcher’ podcasts that we’re putting out in August 2021 on the topic of AI and education. This episode features Professor Erica Southgate from the University of Newcastle. Erica is a great advocate for technology in education, but always with a critical eye. Here, we cover a range of issues, such as the importance of AI explainability, the idea of big tech companies working alongside educational communities, and the big learning-related questions that AI throws up. This transcript has been lightly edited for clarity.
**
First off, Erica gave us a flavour of the AI technologies already at work in education. As she was keen to explain, this is not speculative technology that is decades away. This is technology that we’re already using, even if we often don’t realize it …
[ERICA] I think there are already some really neat applications of AI. For instance, when I am using PowerPoint, I’ll get recommendations about the design of my presentation. AI is doing that through machine vision.
And then there are the uses of artificial intelligence or machine learning algorithms for data mining, which will produce, for instance, snapshots or profiles of learners. This, I think, is a current application that teachers in particular are thinking about.
I was at a conference very recently where there was a lot of talk about ‘analytic dashboards’ being included in learning management systems and online learning systems for schools. There was lots of talk about the type of data that might go into that dashboard, how it might be analysed through machine learning algorithms, and what it might represent in terms of the learner, in order to give both students and teachers a sense of achievement levels, or perhaps the well-being of the student.
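[As a concrete illustration of the kind of processing Erica describes here, the minimal sketch below shows how a dashboard might reduce raw LMS event logs to a per-student ‘engagement’ snapshot. This is a hypothetical toy example – the event names, weights and threshold are invented for illustration, not taken from any real product.]

```python
# Hypothetical sketch: reducing raw LMS event logs to a per-student
# "engagement" snapshot of the kind an analytics dashboard might display.
# Event names, weights and the threshold are invented for illustration.
from collections import Counter

EVENT_WEIGHTS = {"login": 1, "resource_view": 2, "forum_post": 5, "quiz_submit": 8}

def engagement_snapshot(events):
    """events: list of (student_id, event_type) tuples from the LMS log."""
    scores = Counter()
    for student_id, event_type in events:
        scores[student_id] += EVENT_WEIGHTS.get(event_type, 0)
    # Collapse each raw score into a coarse label - exactly the kind of
    # reduction that turns a complex learner into a single dashboard flag.
    return {s: ("engaged" if v >= 10 else "at risk") for s, v in scores.items()}

log = [("amy", "login"), ("amy", "quiz_submit"), ("amy", "forum_post"),
       ("ben", "login"), ("ben", "resource_view")]
print(engagement_snapshot(log))  # {'amy': 'engaged', 'ben': 'at risk'}
```

Even this toy version makes the point that follows: the labels a dashboard shows are the product of design choices (the weights, the threshold) that are usually invisible to the teacher or student reading it.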
**
One of the key things underpinning AI in education is the significance of data and data-driven automations. So next Erica was keen to discuss the importance of asking what data can – and cannot – do in terms of supporting teaching, especially when compared to the pedagogical relationships with students that teachers are expected to draw upon in their own decision-making.
[ERICA] So there’s a difference between a pedagogical relationship between a teacher and a student, with the teacher interpreting, getting to know and making connections with students and their families to understand learning through a holistic sense of that child … and a machine extracting data using a pre-existing machine learning model, one that then creates its own algorithms to extract more data and presents that back to a teacher, who somehow has to interpret it as a representation of the whole child.
And so once we have this kind of intermediary of the machine, it becomes a whole different pedagogical and ethical issue – particularly when children and young people have very little say in it. What we need to do is really engage with communities around what these types of technologies mean for the student, for the communities, and for the teachers that are connecting with them.
**
Erica is a great person to talk to because she works across schools and higher education. So next we talked about some of the interesting examples of AI that she’s come across in primary education, and the wider issues that are raised about teaching and learning when these technologies begin to be applied in classrooms.
[ERICA] I recently did a few research projects on the use of virtual reality in schools. And we had some primary or elementary school children build virtual worlds for the learning of Italian, and it was a wonderful experience. But what was really interesting – and something we didn’t really predict, though we should have – is that the children preferred the use of machine translation tools … rather than going to the teacher, or going to their dictionaries or their workbooks to get the correct Italian phrasing or translation.
The issue, of course, is that this is not the best way at the moment to learn a language and, in fact, it gives you many inaccurate and inappropriate translations. And so the disruption to something like language teaching from these types of tools will be immense. But what it has started is conversations around the relationships between this type of AI-powered application and traditional pedagogies of language teaching, and how teachers themselves need to skill up and think about that in relation to the curriculum. So, learning with AI, learning about AI, and learning how to thrive in an AI world – but thinking critically alongside the children. So it’s a really good recent example, I suppose, of the sorts of conversations that are emerging.
**
Next, we flipped over to talking about the use of AI in universities. Here, Erica raised a couple of examples of software that have become familiar to a lot of lecturers and students over the past couple of years – online exam proctoring, and plagiarism detection systems. And she was keen to stress the unseen issues that accompany this sort of tech – everything from students gaming the algorithms, through to how technology that’s supposed to make higher education fairer can actually lead to serious disadvantage for some groups of students.
[ERICA] We use AI in plagiarism detection software, which is based on pattern matching in particular. This can be useful for students to learn about plagiarism if it’s used as an educative or pedagogical tool. Of course, this technology can be gamed … and I see that often. If students understand how machine learning and pattern matching work, they can actually work out how to get around that in their academic work. And of course, any system can be gamed.

There’s also online proctoring, which has become really big during COVID and the emergence of online examinations. That might sound like a great concept but, of course, many students don’t have ideal home environments where they can take an exam at home and be watched by somebody at the other end (or monitored by a machine) for particular behaviours. They might not have a quiet home space, and so when the machine detects noises it can put up a red flag indicating there might be suspicious behaviour during the exam. For instance, if you’re a mature-aged student and have children at home, or if your brothers and sisters come into the room and you look away, that could be another red flag for online proctoring.

So really, these are very interesting systems being trialled in very large ways, particularly as a result of COVID. But they are natural experiments, I suppose, in how these systems might work well, and also in the kinds of issues – particularly the equity issues – that are at stake.
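[To make Erica’s ‘pattern matching’ point concrete: many plagiarism detectors work, at core, by comparing overlapping word sequences (n-grams) between documents. The sketch below is a minimal, hypothetical version of that idea – not any vendor’s actual algorithm – and shows why light paraphrasing can slip through this kind of matching.]

```python
# Toy sketch of n-gram pattern matching, the core idea behind many
# plagiarism detectors (not any vendor's actual algorithm).

def ngrams(text, n=3):
    """Return the set of word n-grams (here, trigrams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    """Jaccard similarity of the two texts' word-trigram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

source = "the industrial revolution transformed patterns of work and family life"
copied = "the industrial revolution transformed patterns of work and family life"
gamed  = "patterns of family and working life were transformed by the industrial revolution"

print(overlap(source, copied))  # 1.0  - exact copy, flagged
print(overlap(source, gamed))   # ~0.06 - same idea, reordered wording slips through
```

Exactly as Erica notes: once a student understands that it is surface word-sequence overlap being matched, getting around it is straightforward – which is part of her argument for using the tool pedagogically rather than punitively.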
**
This unseen nature of how AI works also carries over into Erica’s response to the question of which big issues around AI teachers most need to be aware of. Here she points to the importance of ‘explainability’ – in other words, teachers knowing how AI systems make the decisions that they do, and being able to justify the technology’s actions in a similar way to how teachers are expected to justify their own decision-making.
[ERICA] So I strongly identify with being an educator – I’m a teacher educator. And one thing about educators and teachers is that the basis of what we do is to explain stuff. If we’re using machine systems and we can’t explain why the machine made the decisions that it did, then we shouldn’t be using them. And often it is the case that those algorithms are proprietary, or that they’re so technically complex, or that they use deep learning where even the computer scientists who create these machines can’t understand the decision-making process … that’s a problem for education. So, explainability and accountability are key. Teachers are made to explain stuff all the time – to students, to parents, to colleagues. And we need to really think about the governance of these systems. Teachers are bringing apps into classrooms all the time. Government departments buy systems, apps and platforms all the time. We really need to think very carefully about the governance of this technology – particularly in this country, in relation to a current lack of regulation.
**
This point about oversight and bringing diverse groups of people together to talk about the realities of AI use in education comes up a few times. Next, Erica extended this point in terms of how educators can work with the ‘big tech’ companies who are at the forefront of developing AI. In particular, she raised the idea of approaching educational AI as a community – rather than a purely commercial – effort. This is something that Erica sees as being in everybody’s interests, with the tech industry actually open to working with – rather than against – education.
[ERICA] It’s easy to demonize technology companies; it’s a lot harder to think about working with them. I don’t think many companies want to create intrinsically evil technology. There might be some but, really, they would be a rarity. So it’s about creating those collaborations or genuine partnerships, and showcasing that in terms of the development of ethical AI. And really working with school communities to do that, because we need the voices and input of children. We need the voices and input of young people and their parents and carers. In Indigenous communities, we need the Elders involved. We need whole networks of people involved in the design. I really am an advocate for very careful incubation of this type of technology, with a robust research agenda attached to it, so we can really see what this sort of technology is good for, what it’s not good for, and how we might iterate our designs so that they’re better.
**
Finally, we talked about the big ‘AI and education’ challenges for the future. As far as Erica is concerned, one of the big issues that we need to get on with thinking about and researching is the changing nature of learning … and finding out more about what it means to learn with – and learn alongside – AI technology.
[ERICA] There’s a huge body of learning science and learning psychology about how people learn. And what’s interesting to explore is how people learn with machines, or how machines might augment our learning, or augment our capacity for intelligence. So I think there’s a huge research agenda around augmentation of capacity and capability, which is quite interesting. At the same time, that may very well impact on the way we conceive of what ‘good’ learning is. So, none of these technologies are pedagogically neutral – they have pedagogical or instructional principles and approaches built in, along with assumptions about who the teacher is, what learning is, and who the learner is. And that’s the kind of stuff we need to uncover from a design perspective. So I suppose it’s a bit of a double-edged sword.
I’m very interested, for instance, in how I might have an AI buddy who might help me along my lifelong learning journey. But then how might that change the person I am, or the opportunities that are available to me? You know, who owns that data? And what happens to it? So, from an instructional perspective we’ve yet to really see the potential for AI. I mean, during the COVID pandemic there was never a good intelligent tutoring system ready to put into place for kids to learn particular things. You know, there are some intelligent tutoring systems out there, but nothing on a widespread scale. So, we don’t really know what’s going to happen, but we should be asking questions about what the relationship is between the triad of teacher, learner and machine. And then, critically, what are the pedagogical or learning science assumptions built into those kinds of applications?
**
So that was Professor Erica Southgate talking us through some of the key issues currently surrounding debates over AI in education. If you found this conversation interesting, then do be sure to check out the other episode of ‘Meet The Education Researcher’ on the same topic from the same panel discussion, which features Dr. Val Mendes from UNESCO. In the meantime, do check out Erica’s writing on AI and education. Just go to: EricaSouthgateOnline.wordpress.com