Hello,
Here’s the first of a few posts I’ve been assembling piecemeal in a range of places – airports, planes, and trains; various Westminster cafes; the London Tech Week conference; and while being distracted by an 11-month-old Havanese puppy.
There has been a lot going on in technology (and indeed the world) of late. I’m also finding that the reopened world, and the aforementioned range of working environments that comes with it, gives more opportunities to have my initial thoughts on a topic challenged. And that’s good. The Havanese puppy in particular had very strong views on data visualisation.
In this post I’ll be talking about whether computers have become conscious. There are others in the pipeline on visualisation and learning/decision-making, and on the emotional side of precision. As always, shout if there’s anything else you want me to talk about.
Some other updates:
I was a runner-up in the Bennett Institute Public Policy Prize,
My research project with Geoff Mulgan and the International Public Policy Observatory, on how governments used evidence and data, continues – and we still want input from people who know about that topic.
Please pass any of those on to anyone who may be interested. Have a good weekend, whatever you’re doing with it. Berlin is very warm, so I may be visiting a See – which confusingly means a lake and not a sea (another of those False Friends from last newsletter).

Oliver
Thoughts on: Conscious Computers
There has been a lot of justifiable excitement around Artificial Intelligence (AI) recently – even more than usual. Within the last year or two, we have seen new models such as GPT-3 (by OpenAI) and PaLM (by Google) which can answer questions, hold conversations, or even explain jokes. We’ve also seen text-to-image tools, most notably the April release of DALL-E 2, which can turn weird and wonderful text prompts into images. The below, for example, shows 'velociraptors working on a skyscraper, in the style of "Lunch Atop A Skyscraper" 1932' (via @Dalle2Pics):
Other similar programmes, including less powerful versions accessible for general use, are now flooding out – allowing more people to see for themselves the power of these new tools. One must always be cautious of tech over-hype, and clever people are helpfully probing the limits of these programmes. But they are nonetheless undeniably impressive – and hold huge potential to fundamentally change a great many professions.

Inevitably, the question rears its head – are computers becoming so clever that they actually have some kind of consciousness? Or, to use the current buzzword, are they becoming sentient? These rumblings were supercharged earlier this month, when Alphabet engineer Blake Lemoine released a text conversation with another model, LaMDA, which he claimed showed the AI had a consciousness and sense of self.* Many respected AI commentators have strongly rebutted the claims, but it has re-opened the old question of AI sentience.

My feelings on the whole discussion can be summarised as: it’s annoying at best, deeply worrying at worst. But it’s a discussion that commentators on AI may be forced to engage with cleverly. This post explains those feelings. It’s going to be a slightly roundabout path to get there, but bear with me. There are two related problems to cover first – definitions, and distractions. Then we can get to the clever computers.
Problem 1: Definitions
I’ve said before that I’m sceptical of definitions. I’m what’s called a social constructivist. I believe facts arise from a combination of (i) how the world is PLUS (ii) various social factors which make it USEFUL for people to agree on a fact. USEFUL is the key word. Let me explain via the medium of a Wendy Cope poem:
He tells her that the Earth is flat –
He knows the facts, and that is that.
In altercations fierce and long
She tries her best to prove him wrong.
But he has learned to argue well.
He calls her arguments unsound
And often asks her not to yell.
She cannot win. He stands his ground.
The planet goes on being round.
This is a great poem and a bad example of how facts usually work, because it’s an unusually simple problem. ‘Round’ is an easy concept to grasp, and the roundness of the Earth is easy to confirm (admittedly, if you count flying into space as ‘easy’). But for centuries before that, knowing the world was round was useful for navigation – or, to put it another way, believing the world was flat was unhelpful, and you might end up lost at sea.

Many concepts are not as simple to grasp or test as ‘roundness’. Even things we think of as clear scientific ‘facts’ are often fuzzier than just ‘this is how the world is’. Historical attempts at rigorously labelling biological phenomena – particularly diseases – are increasingly being replaced by ideas of ‘being on a spectrum’, or of biology interacting with environment to produce particular outcomes. It’s more complicated, but much more useful for thinking about labels which matter (e.g. is dyslexia a disease in a society with no writing?). On the physics side, David Kaiser’s great book How the Hippies Saved Physics tracks how developments in quantum mechanics shifted between ‘is this idea useful’ and ‘is this idea true’.

So the definitional question – ‘is this thing [round/sentient/whatever]?’ – is often unhelpful. That doesn’t mean we shouldn’t attempt to define things, especially where shared meaning is important – most notably in the law, and also when creating new terms (very interesting paper on that here). But it means definitions flow from usefulness and social impacts, rather than from some essential, universal qualities. The better question is often ‘what changes if we call this thing [x]?’. Related questions are ‘does calling this thing [x] progress conversations?’ and ‘what are the potential bad effects of calling this thing [x]?’. We’ll think about that for AI shortly.
Problem 2: Distractions
“DON’T DESERT ME, BOFFINS!” bellows Robert Webb’s character in the parody talk show Big Talk, frustrated at his panellists turning his Big Questions into detailed and well-grounded answers. I’m on the side of his panellists (see also the great distinction between foxes and hedgehogs).

Big Questions can be alluring and exciting, particularly to a commentariat increasingly trained in the Big Thinking world of universities. But done badly, Big Questions can also distract and confuse. Sometimes this can be a deliberate ‘dead cat’ strategy. The classic example is debates about the certainty/responsibilities of climate change being used by certain groups to delay pro-climate action (this still happens). Another example, one I’m very invested in, is debates around the effects of social media on society. If I was Cambridge Analytica, I’d be delighted that people criticise me as some shadowy cabal of hyper-innovative geniuses – rather than as a bunch of shysters who systematically exaggerated their abilities.

That’s not to say we shouldn’t debate Big Questions of society, democracy, morality, etc. In the social media example, there are important questions around whether social media is fragmenting the public sphere, ruining democracy, and so on; and around privacy concerns with personal data. But these debates should acknowledge some realities: for example, that the effectiveness of social media targeting has been very much exaggerated and misunderstood (there’s a good BBC documentary on that); or that the old world of different families getting different newspapers was a pretty ‘echo chamber’ world too. (I think there’s an argument that social media is exposing us to more viewpoints than before – we still need to learn how to debate opposing viewpoints better, but we’re seeing and hearing more of them. That can be seen as progress; a very different debate to the one we usually have about social media.)

Pre-emptive philosophical debates can be good; we shouldn’t wait for hypertargeting to start fragmenting the democratic sphere before we worry about it. But we must acknowledge the risks of framing debates in certain ways – what gets forgotten, misunderstood, or just lost in the mass of words written on the topic.
So... Sentient AI?
I don’t think AI sentience is like the Earth being round. I don’t think sentience is as clear a concept as roundness, and I don’t think there’s an equivalent of flying into space and clearly seeing that the Earth is round (some point to approaches like the Turing Test, but these have various problems). Using the social constructivist view, the question changes from a philosophical one about consciousness into a debate about usefulness and impacts.

The potential impacts of AI, sentient or otherwise, are huge. Some argue, convincingly, that AI is a greater threat to humanity than climate change. How might debates about sentience affect that?

An obvious one: if computers become seen as sentient, does that mean people start giving them legal rights, finding ways to reward them for labour, protecting them from abuses? Would doing otherwise be seen as abusing or enslaving a sentient being? If so, how might that affect our ability to respond (perhaps very quickly) if they get out of hand? Such debates could get complicated very quickly – for example, if countries or companies use ‘sentience’ as a way of protecting potentially dangerous AI assets. And you don’t want complicated debates when catastrophe is looming.

Another one: the way we describe things, particularly complicated things, can shape their future development. Emily Bender notes a good distinction between seeing AI as a ‘brain’, versus as a ‘map’ or ‘telescope’ or other tool. I, and others, prefer the latter; technology (particularly AI) is too often pitched as an end-to-end solution, rather than as a tool which humans can use as part of a process. See, for example, the recent Cosmopolitan “AI cover”, where the headline drastically undersells the human effort – made weirdly clear in the article itself – involved in producing an AI image. Do we really want to encourage debates around whether AI is ‘intelligent enough’, where the mark of success could easily slip into ‘can it displace human thinking’? Or should we rather accept that AI can now produce super cool images, and that the human coaxing involved is also cool (and possibly a good thing for the future of e.g. art markets)? I’m sure there are plenty of other issues.

The key point: this should be a debate about the usefulness and impacts of language, not its ‘correctness’. Does that mean we shouldn’t debate AI sentience at all? Well, I suspect that genie is already out of the bottle. But I don’t think sensible people should keep feeding the genie – either by claiming AI is already sentient, or by writing long posts rebutting such suggestions. We should instead be finding better ways to describe the distinctive features of Artificial Intelligence – and, more importantly, deciding what it should and should not do – rather than collapsing the gap between Artificial and Human Intelligence. We should avoid the temptation to use other forms of intelligence as easy metaphors to describe (or hype) what is going on.

It also means some pragmatic positivity: some of the ‘they’re not actually that good’ criticisms of recent models simply do not square with the fact that these tools are doing astonishing things – and they risk alienating the casual onlooker, who probably will be impressed. And finally we – where ‘we’ is as broad a range of people as possible – should be focusing effort on explaining what AI is actually doing and what it might do, asking if that’s OK, and actually finding ways to direct it towards good outcomes.
Obvious in theory, really hard in practice, and (as I argued previously) I don’t think our institutions are currently set up to do it well. We’ll have to find ways to bring together the technicians, the wordsmiths, the dedicated tech-lovers, and the broader audiences. But, as long as we don’t get distracted, that’s something humans can do; we’re clever like that.

* The engineer Blake Lemoine, who claimed his AI chat partner was sentient, was then put on leave – but for leaking proprietary information, not, contra many reports, because he believed the AI was sentient. Whether you believe Alphabet’s account is an open question; but the multiple examples of reportage that he was ‘fired because he said AI is sentient’ are a good example of commentary jumping past actual detail.
Fun Facts about: Birthdays
I turn 31 on the 31st. This once-in-a-lifetime event, when your age matches the day of the month you were born on, is known as a ‘Champagne Birthday’. If you’re reading this, I suspect you may have already missed yours – sorry. I told this to my German teacher, who promptly introduced me to a similar German concept – the Schnapszahl-Geburtstag, when your age is a repeated digit (11, 22, 33, etc.). So you can look forward to one of those instead. (Also: in Germany it’s considered bad luck to wish someone happy birthday before their actual birthday. So birthday parties often start the day before, so that people can give their birthday wishes at midnight.)
Recommendations
Newspeak House (Space/Community). A terraced house by Brick Lane in East London, converted by technologist, community organiser, and games designer Edward Saperia into ‘The London College of Political Technology’. It features an events space, plus living space hosting a rotating roster of 7-8 Fellows who work on tech-politics problems. The events are regular and wide-ranging; my personal favourite is the Wednesday night ‘Ration Club’ dinners, where anyone can turn up and chat over homemade food. I thoroughly recommend it – the Fellows and guests are always fascinating, and also fun. They’ve also got a great bot account which tweets a huge range of fascinating tech/politics resources.

Tech For Good (Community). I’m grateful to these folks for introducing me to the Impact Hub, where I spend most of my days. But more broadly they help build connections and funding to support schemes which help tech do good. Their meetups are often good fun.

Pod Save The World (Podcast). A geopolitics podcast hosted by two former Obama staffers. They’ve been doing interesting and well-informed coverage of various issues lately, particularly the war in Ukraine and elections in Latin America.

Finally, Westminster drinking: I’ve been back in the UK and hanging around my old haunts in Westminster a lot. Westminster is famous for many things, including how bad the surrounding pubs are. But there are a few good places which are definitely worth visiting. The Institute of Contemporary Arts has a tucked-away little cocktail bar, Gordon’s Wine Bar is lovely but good luck getting in, and if you just want a pub the Chandos and the Coal Hole are good bets.
Thanks for reading. Please do share this on, and let me know your thoughts via this short poll.