In the September 2022 episode of the ‘Meet The Education Researcher’ podcast, Neil Selwyn spoke with Dr. Marie Heath (Loyola University) about her work encouraging an informed ‘techno-scepticism’ around the social, political, cultural and economic dimensions of technology in schools. Originally a high school Social Studies teacher, Marie is part of the ‘Civics Of Technology’ initiative, and is involved in various research and writing projects concerned with supporting teacher activism and community scrutiny of EdTech (nb. this text has been lightly edited for clarity).
[NEIL]: Compared to most education researchers who tackle digital issues, you are not taking a typical approach! So what triggered this critical turn in your thinking around technology in schools?
[MARIE]: You know, I was always an early adopter. I was in high school during the late ‘90s and I was always sneaking onto internet forums and learning to code. So when I became a teacher I thought, ‘Well, it’ll be so great…I’ll be able to have students talking in ways that they couldn’t before, and connecting with all sorts of things’. So, I was really an optimist about technology and the democratization of information and accessing information. I was a Social Studies teacher, and I thought that technology would be really wonderful.
But when I was doing my dissertation work, I was working with two Social Studies teachers who were adopting ‘one-to-one’ technology programs which they had each initiated themselves. And they both took different pedagogical approaches – one was really particularly critical, one was more traditional. While they were both really reflective practitioners, in the midst of their adoption the district started its own one-to-one initiative … and it was a mess. I was struck by the ways that the district steamrolled the wisdom of the teachers, and also the ways that the district brought EdTech companies in. I was also struck by how one-to-one computing became less about this grassroots movement of these two teachers, and more about what the EdTech companies were promising.
[NEIL]: So that’s led you to disentangle the mix! As you say, you’ve got the Big Tech companies, the school administrators, the individual teachers, the parents, the families, the local community. From this, you’ve developed the idea of ‘techno-scepticism’. You describe this as an informed scepticism. Can you elaborate on what constitutes this stance towards technology?
[MARIE]: In general, techno-scepticism is the idea of giving ourselves a moment to pause and decide whether the ways we are thinking about using technology – or at least the things that have been told to us about technology – are truly what we need and desire, and best serve the purposes of democracy, justice, and education. For me, those are the guiding purposes that I try to move toward. I think that sometimes our reaction to thinking about technology is automatic – just internalising messages that we’ve received, you know: ‘This will be good’, ‘This will be progress’. ‘It’ll be engaging’ … whatever that all means?!
So, techno-scepticism asks us to just pause before any adoption of a technology and ask: In whose interest is this serving? What are the unintended consequences of this? What are the ways that our environment might irrevocably be changed by adopting this technology? Who could be harmed by this? In what ways does this amplify power? In what ways is this amplifying oppression? So techno-scepticism prompts us to pause and ask those questions. And then if the answer to any of those questions is that the technology is going to hurt someone or prompt change in a way that you don’t want, then we need to stop and figure out what we need to fix before we adopt the technology.
[NEIL]: So techno-scepticism involves slowing down our conversations around technology, and contextualizing the issues in terms of local context. That all sounds fine, but technology is a positive project. I mean, a lot of people might hear all this and say, ‘Hang on! You cannot encourage students to be critical about technology. You need them to be pro-innovation, you should encourage blue skies thinking’. How do you respond to criticism that you are a Luddite, that you are anti-progress?
[MARIE]: I always try to come back to the things that I am for, rather than what I am against. So it’s less that I’m ‘anti’ technology, and more that I am ‘pro’ a multiracial democracy … that I am ‘pro’ justice. And then I can say, ‘Well, does this technology serve those interests?’. And in doing so, we can pause empty talk about ‘Citizens for the 21st century’, ‘Engaging creative thinkers and learners’, ‘Innovators’ … like, what does any of that mean? Let’s just pause that talk, and ask does the technology make the world more just? Does the technology make the world more equitable? Does the technology make us work towards a common good? And if so, then let’s talk about how we can move toward those outcomes. And if not, then maybe let’s reconsider what we are doing. So, I’ll assume that most people are all for those things (i.e. justice, equity, the common good). How, then, can we work toward that? And is the technology that we are seeing in education helping us work toward that or not?
[NEIL]: So you’re not anti-technology… you’re pro-society?
[NEIL]: But, on the flipside, a lot of people are now actually quite prepared to think the worst about technology. Over the past few years we’ve seen a pronounced ‘tech lash’ against all sorts of digital issues – such as surveillance capitalism, the over-reach of Big Tech corporations, whatever Elon Musk is currently tweeting about. So, there is now a danger in pushing the techno-scepticism line that people descend into outright ‘techno-cynicism’. But I get the sense that you’re retaining a sense of hope about digital technology?
[MARIE]: Yes, I live in the world. And I love that we’re able to talk to each other on opposite sides of the world via Zoom right now. But I also pause to think, like, what does Zoom need to ‘eat’ to survive? Why should we not be doing this via Zoom? In what ways am I feeding Zoom? In what ways will I be appreciative that I fed it those things … and in what ways might I regret it?
[NEIL]: Absolutely. That’s a really balanced way of thinking about things! So, let’s move onto your ‘Civics Of Technology’ initiative. This is encouraging teachers and students to critically inquire into the effects of digital technology on their individual and collective lives. I know that there’s a research element to this, but I am really interested in the teaching and the teacher activism side of things. Could you give some good examples of how schools and teachers can be supported to engage more critically and sceptically with digital technologies? What’s coming out of the ‘Civics Of Technology’ project?
[MARIE]: Yes – I teach in a Masters of Ed-Tech program. So I work with teachers regularly, and one of the things that we do together is an ‘Ed-Tech’ audit. When you start to audit an educational technology you ask techno-sceptical questions. Of course, like you said, it’s really easy to slip into techno-cynicism and say, ‘Well, I guess everything’s awful. We shouldn’t do anything’. But I think it’s partly the Social Studies teacher side of me, as well as the kind of citizen that I want to be, that makes me want to believe that not everything is awful. While some things are awful, we do have agency and we can then do something. So the second step of the Ed-Tech audit is working with teachers to think ‘how can I make change at different levels?’. So at a personal level, what are some personal things I can do to make change in my own relationships with technology? And then pedagogically, how can I have agency in my teaching to use technology differently? And then professionally, how can I educate and leverage the position I have as a teacher? And then finally, collectively, when we’re talking about systemic change, how can we leverage power through many communities, parents, children and teachers working together to make change?
[NEIL]: So this is a process of auditing, documenting, joining up the dots, figuring out the bigger picture … I can see how that might work with teachers, but I personally have found it really difficult when working in schools to get students to think critically about school technology. At the end of the day, these are students in a school, and they think about school technologies along highly school-centric lines. Have you had any more success? How can you properly engage young people to think critically about the digital technologies that they are using in school … or the digital technologies that are used on them in schools? Are schools even the right place to try to do this?
[MARIE]: Well, I think so … at least I think that school is one place to do it. So, a colleague of mine, Dan Krutka, is a Social Studies educator who just wrote an article with another Social Studies educator (Scott Metzger), in which they analysed how technology is taught in Social Studies classrooms. Social Studies is a great place to teach about the social aspects of technology, because when we talk about any topic in these classes we teach society. That said, in the traditional Social Studies curriculum it seems that technology is almost always studied only in terms of weapons of war … canals … and the industrial revolution. And usually that’s technology – so we’re trying to change that. But there are natural places to reflect on tech in schools. I think absolutely this can be done in Social Studies, but also in the sciences. Science does a lot of thinking about technology. So, there definitely are natural occurrences that already exist in the school curriculum to develop lessons around technology and society.
[NEIL]: Let’s move on to research, as opposed to the teaching side of your work. I was interested in your 2021 paper where you conducted techno-ethical audits of Google Classroom and Google Meet. Can you talk us through this techno-ethical audit approach? What did you do, and what did you find?
[MARIE]: So, I wrote that paper with my colleague Ben Gleason, and we looked at Google Meet and Google Classroom using a number of different techno-sceptical questions. Does the design of the technology essentially nudge pedagogy? Is the technology designed with justice in mind? Is the technology designed with ethics in mind? Is it environmentally just?
We posed this series of questions at a particular time and place during the pandemic in the US – when schools had the opportunity to choose for their students to either return to class or remain online. And we found that Google certainly held up its promise to recreate school structures. If you take school to be a system of power with a teacher on top of students, with a banking model of education where teachers are filling up students with knowledge … then Google Classroom and Google Meet certainly did a great job allowing that to be recreated in an online space. But the design of these technologies doesn’t really allow people (even if they were so inclined) to take a critical pedagogy approach. It’s difficult to achieve that when there is already a user who has more power than every other user, and who literally fills in the curriculum that then goes out to the students.
We also found that the surveillance logics of Google tended to amplify the surveillance of schools. So schools could now monitor students in more intense ways – monitoring how long students were online, how much they were ‘engaged’ … and then send letters home to students for being ‘truant’ from school if they weren’t online for an extended period of time. In the United States, parents can be held legally accountable for their children’s truancy. So there’s this paradox where the school is increasingly surveilling students, but also asking parents to surveil students at home, and bring any absent children back over to the computer and put them back in front of the class. So the techno-ethical audit questions allowed us to look at the technology through lenses of both media ecology (asking questions about how the technology changes or continues the school environment), and also through the lenses of critical theory (focusing on issues of power and surveillance and asking who is most/least harmed by this).
[NEIL]: So this was essentially a techno-ethical audit of the school and how the school was using the technology. This definitely raises the question of what else you were not able to pick up through such audits. For example, I’m thinking about the lack of access we have as social science researchers to the code – meaning that we are unable to open the ‘black box’ of a platform such as Google Classroom. So, what else do you think you might find if you could open up the APIs and have a look at the software and coded architecture of these platforms?
[MARIE]: Actually, to just circle back to an earlier question, one of the things that I sometimes do with students is have them pull up all the information that Google now lets you see of what they’ve collected on you. The minute that you click on that it’s terrifying, because you realize the ways that everything you’ve done online intersects – so your maps, and your photographs, and your searches, and your Gmail, and whatever else. So, I’m sure that is happening on Google’s end with student data, and that Google are putting together these incredible packages of information on our children … to sell on information about these kids to anyone who will pay, or perhaps even back to the students themselves.
[NEIL]: That’s really interesting – and even pointing out what you don’t know is a powerful form of critique. It’s a fascinating approach. And you mentioned media ecology and critical theory, which leads us onto conceptual approaches and theoretical traditions. There are now a lot of useful critical tech approaches emerging from *outside* of education research. Which conceptual approaches towards technology have you found resonate most with your work in schools and education? What sort of questions and actions are they raising?
[MARIE]: I particularly like Ruha Benjamin’s ‘embedded injustice’ work and the ways that she asks us to both interrogate oppression and society, alongside the ways that technology reproduces and amplifies that. If we consider schools as part of society, then I think that it’s really easy to bring that approach to bear on schools, and reflect on the ways that Ed Tech can create and reproduce society in schools. This raises questions such as: What kind of society do we want? How are education and educational technology preventing that kind of society … or perhaps helping that come to fruition?
[NEIL]: So this raises important questions of power and control! As you say, there’s so much interesting work going on in the Critical Race space, and also Black feminist approaches to technology.
[MARIE]: Yes, yes, I really like the Black feminist approach to technology integration – the critiques that are being produced by the likes of Timnit Gebru and Safiya Noble – all these women are doing really powerful work. I just worked on an article where we tried to bring those approaches into EdTech and think about what that might look like, and call for more research that takes that position. Another article that I just wrote focuses on the many critical methodologies that don’t seem to be present in EdTech research. I really don’t understand why, actually – I haven’t figured out why the field is still so resistant to adopting critical methodologies.
[NEIL]: Oh, I’ve got many theories! But that’s a whole new conversation altogether! But just a final question. I’m really interested in this idea of using different methods and approaches. One of the things that your work tries to do is to get teachers and students to ‘think otherwise’. You’ve made use of storytelling and visioning methods, and other ways of stimulating techno-sceptical ‘imaginations’. So you’ve taken various creative approaches in this work. What methods and approaches and mindsets do you think EdTech researchers (and education researchers in general) can adopt to get people thinking critically and thinking creatively?
[MARIE]: Well, I do like focusing on the worst-case scenario – encouraging people to take a ‘Black Mirror’ perspective and pause for thought. I find that it’s actually really hard for teachers to do a techno-ethical audit without first engaging in a dystopian analysis. So we’ll often do a worst-case scenario first. Think of all of the people that *could* be harmed, think of all of the worst things that *could* happen, then write a story about that. I often think of technology as being like the plant from Little Shop of Horrors – “Feed me, Seymour!”. So what does the technology want from us? And once we’ve talked through some of these dystopian issues, then we can get on with the more traditional work of critiquing the technology. We once did some ‘found poetry’ using the slogans of various Ed-Tech companies. It is uncomfortably dystopian when you start putting all of those meaningless phrases together – you know, ‘blue skies’, ‘innovation’, and all of that sloganeering. That’s disturbing.
[NEIL]: Yes, I saw an AI arts piece where they fed all the advertising slogans from the ‘Cannes Lions’ advertising festival into a natural language generator. And the software started producing new texts using the language of advertising … as you say, it was chilling. There is much to be gained from found poetry, storytelling – there are all sorts of artistic interventions that we can use to get people thinking critically. This is a suitably inspiring note to end on, I think! Thanks for taking the time to chat, Marie, this is clearly really important work! I wish you all the success under the sun!
[MARIE]: Thank you, and I hope you have a great day.