Clearly, teaching is a profession full of tasks that are much more complex than they might appear – teachers’ work takes place in constantly changing circumstances, and requires high levels of intuition, insight, improvisation and informed guesswork. As such, much of what teachers do does not play to AI’s strengths. In fact, a lot of teachers’ work fits Daron Acemoglu’s (2025) description of tasks that AI finds ‘hard-to-learn’, in contrast to the ‘easy-to-learn’ tasks which are relatively straightforward for (generative) AI to deal with. ‘Easy-to-learn’ tasks involve a reliable, observable outcome metric, and clear and simple connections between action and outcome. Examples include boiling an egg, or a security feature that lets only a smartphone’s owner unlock the device. These tasks have a clear desired outcome (a boiled egg, no security breaches) that can be achieved through a few simple actions. AI models can learn to perform these sorts of tasks in a relatively straightforward manner.

In contrast, ‘hard-to-learn’ tasks do not have a clear connection between performing an action and achieving the desired outcome. Often the desired outcome itself is not clear, and working out what to do is highly determined by local contextual factors – many of which will not be immediately apparent. This is work that requires problem-solving, intuition, informed guesswork and other forms of expertise, as well as familiarity with the specific people, places and processes being dealt with. Crucially, then, these are tasks where there is often not enough information for an AI system to learn from, and/or it is unclear what information might be required to address the problem. To illustrate ‘hard-to-learn’ tasks, Acemoglu gives the example of a doctor diagnosing the likely cause of a persistent cough and proposing a course of treatment. This is a highly complex task. There are many past events that might be contributing to a lingering cough. While there are common causes for coughs, there are also many rare conditions that should be considered. Crucially, there is no neat dataset of successful diagnoses and cures. At best, it is possible to train AI models on the behaviour of human doctors performing similar tasks, but this is difficult because there is often no clear metric of success. It also means that any AI can only be developed to perform at similar levels to the average doctor.

There are many parallels to be drawn between the work of doctors and teachers. When in a classroom and engaging with students, teachers are continuously involved in these ‘hard-to-learn’ tasks. Noticing and then working out why a student might be distracted is a highly complex task that demands all sorts of knowledge and judgements about the student, their background and what is currently going on in the classroom. Even trickier is working out a way of successfully getting the student back on task without upsetting the balance of the whole class. Teachers are constantly involved in similar subtle acts of noticing, encouraging, reminding, cajoling and directing – keeping up the momentum of a group of students involves dealing with a thousand moving parts.

This is not to say that AI is useless for ‘hard-to-learn’ tasks. It could be argued that having AI that is ‘as good as’ an average doctor or teacher might suffice in situations where no doctor or teacher is otherwise available. Yet this is not a compelling justification for the huge amounts of effort and resources involved in developing AI. Of course, as an economist, Acemoglu primarily sees this in terms of rate-of-return – estimating that AI will, at best, add a quarter of the value to hard tasks that it might add to easy tasks. In other words, hard-to-learn tasks do not offer much scope for large productivity improvements and cost savings. Given the huge resources and investment involved in developing AI for hard tasks, the rationale for these sorts of AI development is not clear.

Reference

Acemoglu, D. (2025). The simple macroeconomics of AI. Economic Policy, 40(121), 13–58.