
Sideways Looks #27: ChatGPT and Potato Mode

Updated: Mar 18, 2023

Greetings from a snowy and sunny Berlin. I have returned after a week staying in the guest room of the always-fascinating Newspeak House in London – please do have a look at their upcoming events.

2023 continues the trend of late 2022, of fast-moving tech developments hurtling towards… goodness knows what. In this post I’m discussing ChatGPT, the AI text-writing tool with revolutionary potential (I don’t think that’s an exaggeration). I wrote a previous post about ChatGPT, using it to invent new people and seeing if it exhibited stereotypes. It turns out ChatGPT loves the name 'Johnathan', is a bit obsessed with Audrey Hepburn, and hates Adam Sandler. And also that yes, ChatGPT does seem to reproduce some stereotypes - though not always where one might have expected. Here’s the post.

Thanks for the various nice comments I’ve been getting recently from readers old and new; I was happy to hear my experiences in the 2022 moving-to-Berlin retrospective were familiar to other people who've made similar moves. As always, if you enjoy these posts, please do encourage friends and others to read and subscribe; it very much rewards the effort that goes into them knowing that they’re reaching a wider audience.

Tools, Instruments, and Potato Mode

Technologies help humans solve problems. This involves different levels of effort from the tech vs. the human. One can think of a broad distinction between ‘tools’ and ‘instruments’. ‘Tool’ generally brings to mind something like a screwdriver or hammer. They’re generally pretty simple. You can certainly be better/worse at using them, but a screwdriver which takes months to learn would be a bad screwdriver. ‘Instrument’ suggests something more complex - perhaps a musical instrument, or a scientific one. You can do impressive things with them, but it takes a period of apprenticeship and you’ll only produce rubbish for a while.

A lot of modern technologies are interesting in that they can be both tool and instrument. Web search can help you find a dry cleaners in seconds - or, in the hands of experts, reveal military and state secrets. Excel or Google Sheets can be used for a basic shopping list with little skill, up to quite complex analytics in the hands of experts - or, somewhere in the middle, as useful productivity tools with just some investment of learning (see the surprising success of Kat Norton AKA Miss Excel).

To think about this divide I’m going to borrow a term from some software called Iramuteq, created by Pierre Ratinaud. We met this previously in Invite to a Data Party; it’s a topic modelling programme, which can ‘read’ large amounts of text and pull out key topics. I think there’s a lot of potential for these tools to be used more widely (with caution), but that’s not important right now. When you run an analysis, Iramuteq has a box you can tick called ‘potato mode’. Drawing on the idea of a couch potato, this means the analysis is faster but the programme does less work (meaning the output is less precise).
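To give a flavour of what ‘pulling out key topics’ means in practice, here’s a toy sketch in Python. It is emphatically not what Iramuteq does - real topic modelling infers latent topics statistically - just a crude keyword-frequency stand-in to illustrate the idea of a programme ‘reading’ text and surfacing its main themes. All names here are made up for illustration.

```python
from collections import Counter

# A tiny stopword list; real tools use much larger ones.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "it", "on", "for"}

def top_terms(docs, n=3):
    """Crude keyword extraction: count words across all documents,
    ignoring stopwords, and return the most frequent. A toy stand-in
    for topic modelling, which instead infers topics statistically."""
    counts = Counter(
        word
        for doc in docs
        for word in doc.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(n)]

docs = [
    "the cat sat on the mat",
    "the cat chased the dog",
    "a dog sat in the sun",
]
print(top_terms(docs))  # frequent content words: cat, sat, dog
```

Even this toy version shows the basic bargain: the machine does the reading, and you get a compressed summary whose quality depends on how much work the method does.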

I think the idea that technologies can allow a potato mode and expert mode – and even a sliding scale between the two – is a useful one more broadly. But, unlike Iramuteq, switching between the modes usually takes much more than ticking a box.
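The speed-versus-precision bargain behind that tick-box can be sketched in a few lines of Python. The function below is hypothetical - my own illustration, not anything from Iramuteq - estimating pi by slicing up a quarter circle, where `potato_mode` simply uses far fewer slices: a faster answer, at the cost of precision.

```python
import math

def estimate_pi(potato_mode: bool = False) -> float:
    """Estimate pi by summing thin slices under a quarter circle.

    `potato_mode` is a made-up flag in the spirit of Iramuteq's
    tick-box: fewer slices means less work, so the answer arrives
    faster but is less precise.
    """
    slices = 100 if potato_mode else 100_000
    width = 1.0 / slices
    # Midpoint rule: the area of a quarter unit circle is pi/4.
    area = sum(
        math.sqrt(1 - ((i + 0.5) * width) ** 2) * width
        for i in range(slices)
    )
    return 4 * area

fast = estimate_pi(potato_mode=True)    # quick, rough
slow = estimate_pi(potato_mode=False)   # slower, much closer to math.pi
```

The sliding scale the post describes would just be replacing the boolean with a dial on the number of slices.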

The distinction between the modes is partly about people – their skill, willingness, and confidence to use tech in interesting ways. In the aforementioned Data Party post I argued there needs to be help for people to find their own happy spot mid-way between potato mode and expert mode.

But it’s also about the design of the tech itself. Here, as often, market-driven logic can be a powerful driver – usually towards potato mode. There’s much more custom for Google, Facebook, Twitter, whatever to make their tech so simple that we can use it last thing at night and first thing in the morning. Yes there are expert users, but they are not usually valuable customers. See, for example, Musk’s Twitter changes making it harder to be a researcher or fact checker; or making it harder to see posts in simple chronological order, rather than via algorithmic recommendations.

But we’ve talked a lot about Musk and social media in these posts recently. And there’s a very interesting new technology on the block to consider.


For those who’ve missed it, ChatGPT is an AI tool developed by Microsoft partner OpenAI. You write a ‘prompt’ – a question, a request, or even just a thought – and ChatGPT produces a response. It’s incredibly good at producing responses to even extremely specific prompts; the now canonical example being “write a biblical verse in the style of the king james bible explaining how to remove a peanut butter sandwich from a VCR”. It also ‘remembers’ your previous answers, allowing you to have a back-and-forth with it. It’s very impressive, and fun. And, like a lot of impressive and fun tech, it also exploits low-cost labour in poor countries, has a lot of biases (though OpenAI have arguably done more to address this than most other companies), and draws on the work of creatives whose livelihoods it might now imperil.

The actual ability of ChatGPT is not that new. So-called ‘Generative AI’, which can write poems, make art or music, and even create videos with dialogue, existed before (see examples). But ChatGPT is so easy to use; you type a sentence, and you get a response – which is probably going to be some combination of cool, fun, or interesting – back in seconds. Like Twitter, the design is not just simple, but slightly addictive; the machine types before your eyes, sometimes with slight pauses, playing on anticipation. It also encourages you to get into a back-and-forth conversation with the machine, trying new things out. And screenshots of the conversations are extremely shareable. Someone even integrated it into a physical typewriter to create The Ghostwriter.

With this simplicity, it has been taken up a lot more widely than previous Generative AI tools. Within two months it reached 100 million monthly active users, the fastest growth in user base on record (TikTok, Instagram, and Spotify took respectively 9, 30, and 55 months to reach that figure). It’s prompting urgent discussions about e.g. banning it in schools (which ironically have probably accelerated its uptake) and the future of work and creativity. It is quite possible that historians will point to ChatGPT as a moment when we visibly shifted into a new technological paradigm.

Generative AI: Potatoes & Experts

Despite its simple interface ChatGPT, in line with its generative AI predecessors, is improved by skilled use. A whole field of ‘prompt engineering’ has already arisen, where people experiment and learn how small changes to prompts produce different answers (if you’re in London you can go to ‘Prompt Jams’ created by Newspeak House founder Edward Saperia). There are also some good discussions about how teaching students to use AI well would be much better than just banning it – particularly if using AI is going to become part of workplaces. Business school professor Ethan Mollick is particularly active here; he’s got interesting suggestions around, e.g., getting students to mark essays written by ChatGPT, or using it to get over writer’s block. Teacher Cherie Shields has also written about using it well in high schools.

But other discussions about ChatGPT start to invoke the possibility of potato mode. One big question now is whether ChatGPT could even replace Google Search. ChatGPT is not, yet, a search engine. While it can produce confident-sounding answers, it makes mistakes and invents sources. Unlike Google, it cannot tell you where it found information. Its main advantage is quick creation, not accuracy. However, this may be temporary; Microsoft is looking to integrate features of it into products including Bing, which is reportedly scaring Google. As one wag on Twitter put it: "OpenAI did what used to be considered impossible. They made people want to use Bing."

So what might a search tool with the power to replace Google look like? An argument, as made by e.g. Casey Newton and Kevin Roose, is that Google isn’t simple enough. It’s cluttered; it makes you do work. It gives you a list of snippets of possible answers, which you then have to select and read. Where Google once promised that the first results would be the best ones, those are increasingly now adverts. Wouldn’t it be better, the argument goes, if a hyper-accurate ChatGPT could just answer your question in one go?

There’s a host of, erm, tricky problems here. What would learning to rely on a computer-generated answer do for people’s media literacy? What does it mean, philosophically and sociologically, for a computer to ‘produce an answer’ rather than give you source material? Who’s to say that answers from ChatGPT-search wouldn’t also become dominated by advertisers (perhaps much more subtly)? Also there’s various questions around the business model, relationship with publishers, etc. discussed at length in this Decoder interview with Microsoft CEO Satya Nadella.

But my concern is more the general social impact when such technology becomes widespread and affects general expectations of how we work, share information, and suchlike. The internet could have been a place which helped us become more intelligent consumers of diverse information sources. Some people do use it that way, and in many ways the internet is probably still better than the old world of limited information sources. But making it even easier for people to just accept what a machine tells them is hardly a step forward.

So where do we go, Potato?

Beyond search and beyond ChatGPT, we have two visions for a Generative AI future. The first is one where people are skilled, confident, and incentivised to use Generative AI as an instrument for improving human intelligence. The tools could rapidly assemble information, which humans then carefully check. Artists and creatives could use them to play around with ideas, without needing to spend extra time and resource on re-drafting (I’ve previously suggested one could improve creative writing by deliberately avoiding ideas ChatGPT produces, on the basis these are likely to draw on familiar clichés). Tasks which require standard ‘boilerplate’ content, such as writing policies or code, can be sped up so the humans can focus time on perfecting the final output. We may even be able to complete tasks faster and give ourselves more free time, or redirect human labour towards caring for one another (though note that didn’t happen with computers, or the internet).

The other vision is one where Generative AI is used as a tool for producing more stuff at higher speed, in the process bypassing human intelligence. Even if people want to use the tools in a more expert way, wider pressures push against that. People will come to rely on fast answers to questions, rather than slower and well-considered research; businesses will come to expect that anything, from text through to entire films, can be produced much faster and by many fewer people – with all sorts of potential socio-economic consequences. Anything which uses slower, more manual methods, becomes a luxury; as a result fewer people build up experience as actors, or photographers, or whatever.

These two visions aren’t mutually exclusive. But, cynically, I expect that we’ll see the technology support potato mode and neglect expert mode.

Oh no, Potato

So how could we stop that? Well, the designs could become more frictional, encouraging more deliberate and considered use of tools. There are some precedents for this. The dating app Hinge has had success with a design which pushes against rapid ‘swiping culture’; various social media platforms now sometimes ask ‘are you sure’ before you post something aggressive (though, as I argued for the Tony Blair Institute, that could be scaled up a lot). But it’s not a strategy that I see becoming popular with customers, and hence with companies who design for customers.

The other option is changing education, so that people are better equipped to use technology in a considered way. Changing education is hard. Attempts to change school curriculums are often frustrated.* University teaching is heavily reliant on people who are often employed for research skills, with very little pedagogical support. On-the-job training? Sure, more of that would be great, but again risks addressing short-term economic needs rather than broader problems of e.g. media literacy.

In sum, ChatGPT is new and exciting and undeniably fun. But technology is only as good as the society it’s placed within. And if society incentivises speed, productivity, and minimising human effort… well, technology can help meet those incentives, regardless of whether this is actually good for the world.

AI technologists talk of the paperclip problem; a robot designed to optimise production of paperclips, such that it eventually kills all humans to strip their bones for minerals to produce more paperclips. Maybe we can think of the potato problem; a society incentivised to produce and consume potatoes. That maybe doesn’t sound too bad; potatoes are tasty, straightforward, and very moreish. But that society probably won’t be very healthy.

* I regularly hear policy-minded people say ‘why don’t we just add finances / media literacy / politics / citizenship / mental wellbeing / etc. to the curriculum?' Which all sound good, except that when the Department for Education gets all those requests from different sides you can see why they might become resistant to them. FWIW I would radically overhaul the curriculum to focus on such things. But it wouldn’t be as simple as ‘just adding’ things.


Fun Fact About: Government Departments

This week Rishi Sunak reshuffled the structure of government, merging and creating departments. There are arguments for and against such changes to the Machinery of Government (MOG), laid out in good threads by Giles Wilkes and Owen Jackson. There’s no perfect government structure. For what it’s worth I think the new structure is OK, though I’d maybe want the new Energy Security and Net Zero to also cover critical infrastructure and resilience in general. Putting digital under the new Science and Technology department means maybe officials will be exposed to more discussions about innovation in general than when I was in DCMS. I suspect the main beneficiaries will probably be Labour, who are likely to inherit a broadly sensible structure once all the restructuring faff has died down.

But serious thoughts aside, I couldn’t help but remember this old story from New Labour’s Alan Johnson in 2005…

“I had a pen and paper ready when he [Tony Blair] called as promised: ‘It’s the Department of Productivity…’ I grimaced as I recorded a capital ‘P’ on my notepad. It was an ugly word to include in a departmental name. ‘… Energy (En), Industry (I) and Science (S)...’ Four days later I met the Prime Minister on the rose-garden terrace at No 10, surrounded by a battalion of advisers in wicker chairs. We chatted about the challenges I faced. ‘Anything else?’ Tony asked as he prepared to call it a day. ‘Yes, there was one other thing,’ I said boldly. ‘Why has the name of my department been changed to Penis?’ There was silence.”



Audio: The New Statesman Audio Long Reads is a good general listen. But over December and January it introduced a fun twist – digging out very old articles and reading those aloud. So you can hear interviews with Trotsky and Stalin (by HG Wells) from the 1930s, and Angela Carter on maternity wards in 1983 (here and here). Now they’re back to producing new pieces, and Lea Ypi’s piece on Albanian immigration is extremely moving.

Theatre: Having now watched the West End run of Lemons Lemons Lemons Lemons Lemons, I can confirm it is still intelligent and very touching. I see they’ve weirdly missed the main point in the advertising, so – it’s about a couple who have to navigate relationship difficulties despite a new law which means everyone can only say 140 words per day. It will also be performing outside London. Also the excellent Lehman Trilogy is returning to London.

Museum of Comedy: My wonderful comedian friend Zoe Tomalin recently had her birthday at the Museum of Comedy. I feel silly not realising there’s a Museum of Comedy in London, but of course there is. It’s not really a museum, more a comedy venue. But the space is nonetheless full of loads of great comedy paraphernalia, making for a really interesting space to just have a drink surrounded by old puppets and posters, even if you aren’t watching a show.

Finally, if you’re into politics, social media, and also real pettiness, I recommend following this bot which tweets whenever a senior UK politician follows or unfollows someone. It’s surprisingly interesting and entertaining; giving insights into the kind of people politicians are interested in; general pettiness like Liz Truss unfollowing Rishi Sunak earlier this year; and the time Number 10 Downing Street unfollowed an account called @FENT, with 7 followers and the biography “so wack evan a batter could do it”, to which @FENT responded ‘what a d*ck’.

(Though, with Musk closing down free access to Twitter’s API tomorrow, maybe that’ll be one of the many fun bots which will cease to be. Here was a fitting farewell conversation between @infinite_scream and @infinite_bees).


Thanks for reading. Please do share with others using this link, and let me know your thoughts via this short poll.




