Anyone wanting to make sense of technology and everyday life needs to pay due attention to issues of affect – i.e. deeply felt experiences and sensations that move people to respond to technology in a particular manner. In short, one of the most important questions to ask of emerging digital technologies is what these technologies make people feel. What does it feel like to engage with a technology that appears to act largely independently of us but is also continuously involved in what we are doing? 

Clearly, the designers and vendors of these technologies have a set of expected responses and reactions in mind – for example, it might be hoped that we feel relieved, reassured, supported, confident and/or entertained, or that we simply appreciate the convenience. In contrast, one of the feelings that people regularly report about their engagement with digital technologies is that they feel unsettled, unnerved and otherwise disturbed – in short, these are technologies that people describe as ‘creepy’.

While frustratingly vague, the idea of finding digital technologies ‘creepy’ has been a long-standing feature of human–computer interaction. For example, attempts to engineer plausible conversational agents and chatbots have long been associated with ‘uncanny valley’ experiences of conversing with ‘someone’ who is clearly not-quite-human. Elsewhere, people continue to describe the ‘creepy’ sensations of being confronted with a particularly well-targeted advertisement on social media, or an unnervingly prescient ‘you might like this’ recommendation on Amazon.

A few authors have attempted to better explain what ‘creepy’ means in relation to digital technologies. For example, Dylan Wittkower suggests that feelings of creepiness are aligned with technologies that create an illusion of intimacy while acting in decidedly non-intimate ways. Wittkower makes the point that intimate human-to-human interactions tend to consist of high-volume flows of information that are consensual, reciprocal and two-way. In contrast, most human–machine interactions consist of flows of information that are high-volume, but largely one-way, non-consensual and unreciprocated.

In this sense, the experience of being on social media might well sometimes feel akin to being constantly stared at, monitored and judged by another person who gives very little back in exchange. Of course, most digital technologies are cleverly designed to distract us from this imbalance. However, on the occasions when this imbalance is noticed, one gets the feeling of what Wittkower terms being ‘creeped upon’.

Alternately, Tene and Polonetsky (2013) point to data-related incursions that deviate from what individuals consider to be acceptable – when technology “rub[s] against the grain of existing social norms, creating unforeseen situations labelled creepy” (Tene & Polonetsky 2013, p.16). Similarly, Frank Pasquale points to ‘creepy’ instances when technology feels like a noticeable “deviation from the normal” – whether it be the bleed of our paid employment into our home lives, or when judgements are made about us without any clear ‘articulatable criteria’. All told, moments such as these serve to temporarily draw attention to the presence of the “unknown mechanical decision makers” that we have allowed to “sneak into our cars, bedrooms and bathrooms” (Pasquale 2020, p.135).

Of course, these breakdowns in trust, norms, intimacy and comfort are not intended to occur. We are not supposed to notice that we have uneven (and exploitative) relationships with the technologies in our lives. Indeed, Tene and Polonetsky (2013) describe creepiness as often resulting from mismatches between the norms of technology engineers and those who engage with their products. They also point out that creepiness might result from initial clumsy negotiations around the technology etiquette of ‘early adopters’ of devices that might be used in public places (such as people’s initial discomfort at being in the presence of others wearing Google Glass).

In all instances, the hope among tech developers and vendors is that the general public quickly adjust their expectations, and soon become inured to the feeling that ‘something is not quite right’. Instead, it could be argued that any instance of finding a technology ‘creepy’ is a useful warning sign, rather than a momentary ‘weird’ aberration. Indeed, any time that one *does* notice a technology being ‘creepy’ should be taken as a valuable affective reminder that there is much about technology that we should not simply accept as ‘normal’ and try to get used to. 



Pasquale, F. (2020). New laws of robotics. Harvard University Press.

Tene, O. and Polonetsky, J. (2013). Theory of creepy: technology, privacy and shifting social norms. Yale Journal of Law and Technology, 16, 59-102.

Wittkower, D. (2016). Lurkers, creepers, and virtuous interactivity. First Monday, 21(10).