Generative AI: The Disinformation Machine Gun?


Quick thoughts spurred by a really interesting analogy raised by Gary Marcus at the AI UK 2023 Conference.


The AI UK 2023 Conference, hosted by the Alan Turing Institute, unsurprisingly featured multiple conversations about generative AI tools, like ChatGPT for text or DALL-E for images, including whether they could lead to new forms and waves of disinformation. As one of those discussions played on one screen, my other screen showed various tweets about AI-generated images of Donald Trump being arrested, and how they marked a ‘new era of disinformation’.


(I also found it ironic that the first tweet I saw referred to a “good amount of people who genuinely believe that Trump is arrested” with no clarity around what a “good amount” meant or evidence for that claim. Sure, that’s a normal tweet – but if you’re going to complain about lazily spreading claims online…)


Discussions of disinformation nowadays often come with the caveat that ‘disinformation isn’t new, obviously’, and then sometimes try to unpick whether disinformation today is a continuation of old-style propaganda or something new. In What’s the Point of the Past? my view was that the question should be: “If the problems have been around before, how have people tried to solve them? Can we borrow from that? Or are there things today which we can use but past people couldn’t?”


During the AI UK Conference, a pithy analogy for thinking about this came from Gary Marcus, psychologist and frequent critic of AI hype. It ran something along these lines: violence didn’t start with the machine gun, but the machine gun made it possible to inflict much more violence at much higher speed.


I spent a bit of time yesterday reflecting on the machine gun analogy. I think it’s a good one because it helps with considering what the risks are, and whether and how we adapt current defences. From that perspective, I landed on the view that we already live in a world where bad actors have disinformation machine guns; Generative AI is more like giving a sniper rifle to bad actors, and a machine gun to everyone else.


Disinformation Machine Guns – What Defences?

With machine guns, the risk we want to minimise is that people get injured or killed. That also happens with knives. Ways of reducing that risk are (i) stopping people from carrying knives as weapons and (ii) enforcing that with police who are more heavily armed. Option (i) might work with machine guns, but the risk is that someone could kill many more people before police arrive; option (ii) becomes much less effective if the criminal has a machine gun. The approach to knives also strikes a particular balance – it lets people keep knives at home for all sorts of non-violent reasons, while reducing the risk they could cause if carried in public. But the ‘non-violent reasons’ vs ‘risk’ balance for machine guns is very different. So, in a lot of European countries, there’s a different defence – you need a licence even to own a gun. The difference in harmful outcomes between those countries and the US, which treats guns much more like knives, is clear.


So, to disinformation. The harm with disinformation – or, better, information warfare – is that people either (i) believe false things or (ii) no longer know what to believe and give up on trust. That’s old. But a machine gun of disinformation would mean bad actors can create disinformation with previously unforeseen speed, scale, and reach. Under those circumstances, the ways we might previously have tackled mis- and disinformation – libel laws, right-of-reply, trusted voices – won’t have the speed and scale required to keep up. This can be worsened if high-speed information warfare successfully weakens those previous defences (a bit like machine guns being used largely against police officers). One could also question whether existing defences were successful in dealing with information warfare even before social media. If not, focusing on the technology may be missing the key point.


The disanalogy with machine guns is that the harms of disinformation are far less clear. As I’ve said before, I’m unconvinced that it’s helpful to blame votes like Trump or Brexit on social media disinformation. In the West, Kremlin disinformation around Salisbury or Ukraine has largely failed to undermine the pro-Ukraine case; the picture is more complicated in non-aligned countries (see e.g. this research I did with CASM Technology), but there it’s hard to disentangle disinformation from wider historic and contemporary geopolitics, and from the question of whether targeting populations achieves much compared with courting political leaders. Nonetheless, maybe mass disinformation is a tipping-point factor on top of other contexts; or maybe we haven’t yet seen its worst effects. And if that were to happen, I’m not convinced the defences we have are sufficient.


Generative AI: Machine Gun or Sniper Rifle?

One can easily see how generative AI falls into that ‘machine gun of disinformation’ model. But my view is that we were already in a machine gun disinformation world. Spraying many falsehoods across social media is a tactic that has been regularly deployed in the past (it’s known as the ‘firehose of falsehood’). Paid or ideologically-driven humans could already produce disinformation at a bewildering speed and scale. You don’t need AI to make disinformation convincing enough. If you wanted an image, you could probably find an old one (e.g. an existing image of Trump surrounded by police officers). Yes, someone could run a reverse image search; but the point is that not many people would, just as they wouldn’t assess whether something is AI-generated. It’s worth remembering that the background of Section 230 – one of the most consequential laws in online regulation – involves a guy called Ken Zeran being unable to take down rapid-fire disinformation about his company on an AOL message board fast enough to save his reputation. That was in the mid-1990s.


So I don’t think generative AI is a step change here. Where I am more concerned about new harms from Generative AI – as with much related to digital technology – is less that it’s a machine gun, and more that it’s a sniper rifle: the issue is how it enables hyper-targeted, rather than widespread, violence. Generative AI allows the likeness of just about anyone to be easily superimposed into highly realistic pornography, violent videos, or other deeply disturbing content, often with the aim of harassing victims and their families and friends. The concern I raised in Musk, Micro, Macro is a Catch-22: either (i) widespread harm happens, or (ii) widespread harm fails to materialise and, at a macro level, it looks like ‘Liberal doomsters’ exaggerated – distracting from the micro level, where things are worse for specific people and groups.


But Also… We’re Giving Everyone a Machine Gun

Finally, the machine gun analogy got me thinking about Generative AI in one more, different way. Maybe we already live in a world where bad actors have disinformation machine guns. But the point of gun licensing is not just to make it harder for bad actors to cause violence; it also reduces the risk that people who aren’t bad actors cause major harm through mistakes, passions, or something else. While it was previously easy-ish for bad actors to make fake-but-convincing content – easy enough to produce at a worrying speed and scale – Generative AI makes it very easy for anyone to do that. As above, I don’t think that’s a step change on top of what already exists, in terms of undermining society, democracy, etc. But it does raise questions of whether mishaps, mischief, or other mistakes could more easily be magnified into much worse impacts.


So maybe, like guns, we should license Generative AI until people demonstrate they would be responsible users. I imagine that would be really straightforward, the AI creators would be totally behind that, and it would have no issues whatsoever.


More seriously, maybe Generative AI should not spur new concerns about technology – it should intensify much older questions of how we deal with human vulnerabilities to mistakes and misinformation. That’s a much more complex discussion than focussing on a particular technology; but it looks increasingly unavoidable.




