I read some poems written by an artificial intelligence tool. I enjoyed them and then I felt confused about enjoying them.
Writers love writing about writers. Worryingly, this includes writing about whether we can get rid of writers.
In the 1920s an entire sub-field of literary studies, called Practical Criticism, argued that students should analyse works without thinking about the authors at all. In the 1960s two seminal essays – Roland Barthes’ The Death of the Author and Michel Foucault’s What Is an Author? – mused, in a very French way, on what authors are and whether readers care about them.
But underlying all this was a basic premise: writing needed to have a human author.
Fast-forward to 2019, and the company OpenAI has created GPT-2, an artificial intelligence (AI) system which can write its own poems, dystopian fiction, political analyses, and more. They aren’t releasing it because, in the words of multiple headline writers, it is “too dangerous”. They are concerned about uses by “bad actors”, particularly those who would mass-produce disinformation. But another implication is worth considering.
Might we be heading towards the actual death of the author?
Computers vs. humans
A hot question nowadays: are there any roles which computers could never take from humans? Popular candidates are creativity, caring, or managing people. I think the key question here is often badly phrased. I’ll briefly discuss caring to illustrate this, as it’s a well-discussed topic (there are predictions that, in the future, medical diagnoses will be done by computers, while caring for ill people will be done by humans).
The key question is often phrased as: will computers ever be able to care for people? But that assumes a static, unchanging idea of ‘care’. It also risks a lot of deep philosophical rabbit-holes: trying to define ‘care’, or arguing that true care requires empathy and understanding. Then you end up asking how we know that our doctor is really empathising with us, or how we can feel cared for by animals, or having long arguments over definitions (while the tech carries on regardless).
I think a more immediate question is: will computers ever be able to do tasks that humans accept as care? [1]
Some might say no: people will never really feel cared for by a non-human. But I think that’s an assumption. I’d argue it’s perfectly possible that, after some initial discomfort, a person finds that words of support from an AI provide the emotional impact they are craving [2]. I can imagine grandparents having to be convinced to see a human doctor, because the computer is much more convenient – even more familiar. If that happens, the argument ‘yes, but is it really care?’ risks being swept aside [3].
These are the thoughts I already had in mind as I read some poems produced by GPT-2. Creativity is perhaps a less pressing subject than how AI might help (or alienate) sick people. But it still raises questions about roles we think of as intrinsically human, tied to emotional connections between people.
So how might we think about this: will computers ever be able to do tasks that humans accept as creative?
Creativity as Content
One way to describe creativity: combining new and existing ideas in interesting ways. Repetition of existing ideas is clearly not creative; but, as Fry and Laurie suggest, random collections of words such as “hold the newsreader’s nose squarely, waiter, or friendly milk will countermand my trousers” are not creative simply by being new.
This is a way in which AI might lay claim to being creative. The point of AI is drawing on past information to respond to new situations in ‘intelligent’ ways [4]. So AI art generators take an input – for example, ‘portraits’ or ‘marine art’ – and generate new images based on past examples:
Note: these pre-date OpenAI’s GPT-2, and are part of ongoing work by Robbie Barrat and others.
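The same drawing-on-past-examples logic powers the text side. As a rough illustration only – not how OpenAI generated the poems discussed below – here is roughly what prompting a language model looks like, assuming the smaller, publicly released GPT-2 model accessed through Hugging Face’s transformers library:

```python
# Rough illustration only: ask the small, publicly released GPT-2 model to
# continue a prompt. The tooling (Hugging Face's transformers library) and
# the prompt itself are my assumptions, purely for demonstration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt - not the input Alex Hern actually used.
prompt = "The leaves turn purple in the autumn air,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model predicts plausible next tokens from patterns in its training
# text; sampling (rather than always taking the single likeliest token)
# gives a different continuation on each run.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Run it a few times and you get a different continuation each time, because the model samples from probabilities learned from its training text rather than following a fixed rule.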
But if we’re going to accept work as ‘creative’ in any useful way, it can’t just be new-version-of-old-thing. In the case of the arts, we usually want something that stirs an emotional response.
Now I think the above artworks do provoke some emotional responses – but it’s hard to separate that from the knowledge that it’s AI art. As noted in this article, the pieces provoke an ‘uncanny valley’ response: they’re eerily close to the real world, but wrong in ways which produce a slightly unnerving feeling [5]. Of course, human artists produce such images too (more on that later). The point is that when I see the above images, my feelings are strongly influenced by knowing they’re produced by AI [6].
But I think OpenAI’s GPT-2 poems are different to the AI art in a few ways [7]. Let’s look at one, created from an input by the Guardian’s Alex Hern (seeing this poem was the original prompt for this post, fact fans):
In a line-up of poems, you wouldn’t easily pick this out as written by AI. And I find it easy to ignore that it was written by an AI. For me, reading the GPT-2 poems feels like those poems on the Tube – I just read the lines, rather than thinking about the author.
Getting away from AI-induced distraction frees up the mind for more interesting engagement with the content. Let’s look back at the above poem. For me, it does some of the jobs that a poem (or other art) should do:
It makes me feel things (particularly feelings of emptiness, mortality, and infinity).
It allows for analysis; for example, of how the repetitive language and the lengthening/shortening of lines add to the sense of seasonality.
It makes observations that are novel and interesting; I don’t tend to think of leaves as purple, for example, but it makes sense in my mind’s eye.
In that sense, I think such poems are a welcome contribution to the artistic scene. I’m glad I read them, and I can imagine people gathering to have fruitful and interesting discussions about them.
But, as many of you may be thinking, I’m massively oversimplifying the idea of just ‘forgetting about the author’. You may argue – fairly – that the existence of authors gives us more to engage with. So let’s have a think about that.
Creativity & The Author
Let’s step away from content itself and address the wider question of authorship. What are the implications of thinking about AI as an artist, an author, or a musician?
It’s worth emphasising that I’m talking about a simplified picture in which AI ‘makes the art’ with very little human input. Many interesting AI artists use the tools a bit like a synthesiser – technology which opens up new avenues for creative people, rather than replacing them [8]. But my point is that computers are making creative work with less and less human guidance. That, I think, is a different situation to synthesisers. It opens up the question of AI as creator.
A few immediate questions come to mind:
What will the moment that fans ‘meet the artist’ look like? (Positive: hopefully fewer ‘turns out they’re horrible’ moments.)
What about performance? Will we ever see computers deliver stand-up comedy or slam poetry? (Answer: probably. See this from 2014.) More difficult: will we ever listen to such a performance as we listen to a human performer, alive to ideas of personality, performance, and charisma?
Would an AI artist ever radically change direction, just because it felt like doing something new? Would a computerised Bob Dylan have ever had the Judas gig?
What will happen to all those discussions of how a work fits into an artist’s existing oeuvre; how their work reflects their personal life, views, or emotional state?
The last two points are probably the most interesting. Barthes and Foucault, those French authors I mentioned at the start of this piece, might be glad if AI moves us away from focussing on creators. Both felt audiences are encouraged to treat creators as having the final authority on what a work is ‘really about’, leading people to psychoanalyse authors and ask what artists ‘really meant’ – which, they argue, closes off lots of interesting discussions [9].
So maybe they’d like AI art (though it’s hard to tell what they’d think about anything, to be honest). Foucault ended What Is an Author? by stating that “We can easily imagine a culture where discourse would circulate without any need for an author”, which would end “tiresome repetitions: Who is the real author?”. Well, maybe we’ve reached it [10].
But I have some concerns
Firstly, the lighter concern. Maybe it’s just my way of thinking about art, or maybe I have internalised the modern obsession with ‘the author’ that Barthes and Foucault talk about. But I like having a creator to think about.
When I see out-of-focus marine paintings, whether human- or AI-made, I enjoy the eeriness of the blurry seascapes and muse about whether the effect would work as well for a city scene. But for the AI version, I just put their out-of-focus nature down to imperfect replication of marine scenes. If I saw the same thing in a painting by a human Impressionist, I would wonder what in their personality, life, or context pushed them to choose that style. And that’s part of the fun.
Secondly, the more serious concern. Creativity is a powerful tool for spreading human experiences of injustice and offering visions of a better world. That requires living in the world and experiencing it, particularly the bad bits. I’m not sure an AI could ever do that. (Indeed, as has been shown in numerous cases, AI can replicate and intensify human prejudices.)
And sure, we can imagine a world in which AI and humans create things together, in a good way. But if we accept that AI can do good creative work, we open another possibility: a world in which broadcasters, record labels, and exhibition owners just slide towards AI-as-default because it’s easier and cheaper. Why commission artists when you can just copy targeted advertising – set off endless A/B tests, build on content which excites interest, and terminate anything which does not? Why wait months for a screenwriter to produce a single film when you can have ten AI scripts by lunchtime?
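To make that loop concrete, here is a deliberately crude sketch of what ‘commissioning by A/B test’ could look like – every name and metric in it is hypothetical, invented purely to illustrate the worry:

```python
import random

def generate_script(seed: int) -> str:
    """Stand-in for an AI script generator (hypothetical)."""
    return f"script-{seed}"

def audience_interest(script: str) -> float:
    """Stand-in for an engagement metric: click-through, watch time, and so on."""
    return random.random()

# Start with a batch of machine-written candidates.
candidates = [generate_script(seed) for seed in range(10)]

for _ in range(5):  # 'endless' A/B rounds, truncated here
    ranked = sorted(candidates, key=audience_interest, reverse=True)
    keep = ranked[: len(ranked) // 2]  # build on whatever excites interest
    # Terminate the rest and generate replacements.
    refill = [generate_script(random.randrange(100_000))
              for _ in range(len(ranked) - len(keep))]
    candidates = keep + refill

print("Greenlit by lunchtime:", candidates[:3])
```

The unsettling part is how little the loop needs to know about art: it only needs something that generates candidates and something that measures ‘interest’.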
Maybe that’s pessimistic. But, as with any technology, you have to ask the questions early. In the end, despite highfalutin’ ideas of emotion and spirituality and humanity, creativity might just be the same as anything else. When technology promises convenient, personalised, and emotional options, we’re prepared to accept it into our lives. But when we let it in, it might crowd everything else out.
Notes & Important Caveats
[1] That’s not a perfect question either – one of the big concerns around modern technology is whether it’s the emotional equivalent of fast food, a series of dopamine hits that don’t help long-term mental wellbeing or social solidarity. So we might think we’re being cared for, while actually doing ourselves harm. As with any of these topics, it’s important to ask the question in multiple ways as none will ever capture all the issues fully.
I’ve not seen Ex Machina, but apparently a key question in that film is not whether the android Ava has consciousness; rather, “the challenge is to show you that she’s a robot. And see if you still feel she has consciousness”. This review is interesting on that question, and summarises lots of philosophy that I’m skimming over here in favour of a more sociological approach (not ‘what is x’, but rather ‘how does x function in society?’).
[2] A prominent opponent of this view is Sherry Turkle, whose latest book Reclaiming Conversation: The Power of Talk in a Digital Age argues that digital connections are reducing young people’s ability to empathetically connect to one another. In brief, I find Turkle’s view of empathy quite rigid and dogmatic; it seems to assume that any new form of connection is de facto a worse form of connection. For a critical review that aligns with my view, see here. Though even this review is given to assertions that the old ways are the best ways – for example, “of course, a message on a screen doesn’t have the same gravitas as a handwritten note”. I don’t think it’s “of course”; I think it’s an interesting question to consider.
[3] The implications of accepting that are quite severe. Firstly, it makes it harder to claim that there are jobs that robots could never take from us. Secondly, there are deep questions about the future of society. One version of techno-utopia is a world in which robots do all the undesirable work, freeing up humans to enjoy emotional experiences with one another. But if we decide that robots can provide emotional stimuli just as well as humans, and more conveniently than trying to arrange time with friends, then why leave the house? I suspect those are questions we could be forced to confront at some point.
[4] So, for example, Instagram filters aren’t producing AI art, because they’re doing exactly the same thing to whatever image you input – there’s no intelligent response, just rule-following.
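To put that in code terms: a filter is one fixed transformation applied to every pixel of every image, learned from nothing. A toy sketch (the numbers are arbitrary, not any real filter):

```python
def warm_filter(pixel):
    """Apply one fixed rule to a pixel, regardless of what the image shows."""
    r, g, b = pixel
    return (min(int(r * 1.1), 255), g, int(b * 0.9))

def apply_filter(image):
    """Every image gets exactly the same treatment: no learning, no 'intelligent' response."""
    return [[warm_filter(px) for px in row] for row in image]

# Example: a tiny 1x2 'image' of RGB tuples.
print(apply_filter([[(200, 120, 80), (10, 20, 30)]]))
```

Contrast that with the GPT-2 sketch earlier, where the output depends on patterns drawn from past examples.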
[5] Strictly ‘uncanny valley’ refers to humanoid beings that are like humans, but a bit wrong. That can be seen in AI portraits. But I think the emotional response is similar for non-humans in AI art too. You’ll note I’m skimming over art which doesn’t in any way attempt to represent real objects (surrealism and the like). That’s a can of worms I’m not going to open, sorry.
[6] Two things to note here. Firstly, there’s an interesting thought experiment to be undertaken: how would pretending AI art was produced by a human artist (or vice versa) change the nature of the art? Secondly, proponents of AI art would argue that I’m missing the point here: good AI art is a genre of its own, not an imitation of non-AI art, and it is produced by humans using AI tools in interesting ways, rather than being a matter of ‘AI art’ vs. ‘human art’. I’ll discuss this later, but I’m flagging it here as any AI art fans are probably (and rightly) getting a bit annoyed at me right now.
[7] This may be because I have very little artistic sense, or the language to understand and discuss how images work, whereas I am reasonably experienced at analysing language. We should also consider that text has fewer degrees of freedom than images – images involve colour, proportion, layout, and more, all of which have to roughly match up to our experience of whatever is being depicted. I think a viewer is more likely to see a portrait with an off-centre nose as ‘wrong’ than a sentence with oddly chosen words.
[8] More prosaically, humans still build and train the AI. My argument is not that AI is completely separate from humanity and society (no technology is). My argument is that, in all probability, we’re going to encounter more and more situations in which people will say ‘an AI made this’.
[9] An interesting counterargument (sort of) comes from Henry Jenkins’ work on fandoms, particularly his classic book Textual Poachers, which argues that studying actual discussions of artistic works reveals that readers’ interpretations can be much freer than Barthes/Foucault would imply.
[10] Or maybe we’d just start asking ‘so is the real author the AI, the person using the AI, or the person who designed the AI? Or is it some human-AI-cyborg-hybrid-author?’.