
Sideways Looks #22: Tech Fought The Law, and the Law Lost

Hello,

This post is about technology and breaking the law (feat. Elon Musk). It’s a bit depressing, but there’s stuff about Eurovision at the very end. For next time I’m planning a piece about why “command centre rooms” with big screens are a bit rubbish, but personal dashboards with lots of sliders are great. And after that… well, feel free to suggest something.


Other ongoing / recent non-newsletter work which may be of interest:

As always, if you like this newsletter please do share it. Have good weekends – I’m enjoying a sunny Berlin with occasional torrential downpours, e.g. midway through a cycle across Tempelhofer Feld.


Oliver



 

Thought for the Week: Tech Fought The Law, and The Law Lost

 

As you may have heard, there’s been a bit of a saga about Elon Musk buying Twitter. Matt Levine has given brilliant commentary throughout (particularly praiseworthy given that all the news seems timed to happen whenever he tries to take a day off). This jumped out at me from one of his contributions:


Elon Musk has made it very clear that the rule of law simply does not apply to him, and this has worked well for him. If he wants to ignore the merger agreement that he signed, he will. If you take him to court, he will put up a brutal fight and make things as unpleasant as possible for you. This puts his counterparties, like Twitter, in a tough position. They have a contract. But so what?


I’m not adding yet another piece to the ‘Musk buying Twitter’ mountain (though I’ll briefly summarise my views on that at the bottom). I want to talk more broadly about law.

The fact that enormously wealthy companies and people, like Musk, can dodge laws is depressing but unsurprising. Outside tech, the UK Prime Minister has also recently been exposed as a lawbreaker – but accountability mechanisms are so weak that he can basically choose to stay in office. In the distant past (mid 2020) the UK government explicitly stated they planned to “break international law, in a limited and specific way” (though, for all the mockery of the phrase, it’s important to note that countries fairly commonly break international law and that might not always be a bad thing).


But this isn’t just an us-and-them, elites-versus-normies issue. In Covid times, there’s a decent chance you regularly saw people breaking the law – on a train, or in a supermarket, or going for a second jog in a day. You may even have done so yourself on occasion. But how much was Covid law policed? And how intensively could, or indeed should, it have been policed?


I Am Not A Lawyer (or, in reddit parlance, IANAL). But I did work on tech policy – mostly around the GDPR – so here’s my take on some central dilemmas of law I encountered there. Real lawyers, or other experts, please do correct or enhance any of the below.


How to make a law work


Key question – should laws aim to:


1. stop bad things happening, or

2. provide redress after bad things have happened?

They aren’t mutually exclusive; serious punishments for doing crimes (option 2) can be intended to deter crimes (option 1). But if there’s little chance you’ll actually be caught, the deterrence effect is weaker. So you have to realistically think about how you’ll enforce the law; and here each of the two options entails different trade-offs.


If you want to stop bad things happening, you need an infrastructure to monitor people, allow/disallow activity, and so on. A clear example is driving. A huge industry exists to assess whether someone is legally permitted to drive. Speed cameras, traffic policing, and suchlike are widespread. The reason is obvious: when bad things happen on roads, they are often really bad. You want to stop them, not just provide redress afterwards.


But often it is impractical or inappropriate to build that kind of infrastructure. So instead of systematic monitoring, you rely on victims coming forward to claim post-harm redress. One example here is defamation. Most democratic countries would not want a system of censors pre-checking the probity of anything before it appears in the public sphere. If people are slandered, they can claim redress after. This saves the need to maintain a costly and intrusive monitoring architecture, but means more bad things happen. It’s a trade-off.


So, to tech policy


Let’s take the GDPR. It tries to proactively stop bad things happening with personal data. Organisations are required to register with data protection regulators, and may need to name a Data Protection Officer as part of doing that. They’re supposed to proactively minimise the risk of harmful data breaches. But in practice it’s very toothless – for example, making people attend two-hour courses they immediately forget. It’s usually enforced by chronically understaffed regulators. No wonder compliance has been sub-par.


But the bigger bite of the GDPR is that it gives people the power to punish bad behaviour. People can make ‘Subject Access Requests’ about data held on them; if a company hasn’t been good at storing data well, finding the material to meet the request can be a real pain. And if it turns out that the company has breached the GDPR, it can face hefty fines – theoretically up to €20 million or 4% of worldwide annual turnover, whichever is higher, though in practice fines have been much lower.
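To make that fine structure concrete, here’s a minimal sketch of how the statutory cap works (the function name and example turnover figures are mine; the €20 million / 4% thresholds come from the regulation):

```python
def gdpr_fine_cap(annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious GDPR breaches:
    EUR 20 million or 4% of worldwide annual turnover,
    whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a small firm, the flat EUR 20m figure dominates...
print(gdpr_fine_cap(5_000_000))        # 20000000
# ...while for a giant, the 4% rule does.
print(gdpr_fine_cap(100_000_000_000))  # 4000000000.0
```

The “whichever is higher” rule is the point: the theoretical cap scales with the size of the offender, which is why the headline numbers sound so scary even though actual fines have been far smaller.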


So I’d argue the GDPR is much more like defamation than driving. That’s not because of how the law is drafted; it’s the enforcement infrastructure around it. The same law could be enforced by super beefed-up regulators making spot-checks on companies (the Information Commissioner’s Office already has some snazzy police jackets). Using personal data could require some kind of licence, which would require you to demonstrate that you are storing data safely, plus a regular MOT to check you’re maintaining those behaviours. That would be true whether you’re Amazon, a charity, or some small business already working round the clock just to stay afloat.


None of that hardcore enforcement happens. One could argue this is fine, even right. I suspect only extremely privacy-conscious people would argue the risks of data misuse are high enough to warrant such a system. But what about all the other stuff that technology is increasingly doing? Promoting harmful material online? Choosing who to hire? Diagnosing illnesses? Criminalising, even killing, people? If we stick with a GDPR-like system, relying on victims to know and fight for their rights, might that hasten the worst visions of tech philosophers – the last human left alive desperately trying to litigate their rights against all-powerful robots, surrounded by a sea of paperclips?


Writing a law is only part of the battle


Covid showed us that the content of laws does matter. They guide us towards expectations of good behaviour, particularly when we’re in unfamiliar territory. They put brakes on mass adoption of harmful behaviours. And they create powers to stop, and/or get redress for, individual bad behaviours. But it was also a reminder that content is not everything. Even well-drafted laws don’t automatically make people behave in certain ways. And that was the case with a virus that we were keenly, sometimes physically, aware was a problem. What about the often abstract, often invisible, but still extremely pervasive world of technology?


There are a lot of important tech laws being drafted right now, particularly around artificial intelligence and social media. It’s probably not long before we see more laws around self-driving cars, the metaverse, etc. There are a lot of important debates about drafting (for example this excellently brutal, if rather technical, critique of the EU AI Act). Enforcement plays a role in these debates – drafting laws which can realistically be both followed and enforced, which give the right powers to regulators, etc. But thinking about enforcement also requires thinking about questions of power, resources, and expertise – precisely the things which make technological worlds so unequal.


There are various ways we could look to change enforcement, potentially quite fundamentally. Some are mundane, but still difficult (e.g. massively increasing headcount, training, and resources for enforcers). Others are more innovative, and bring up all sorts of new issues (e.g. forcing devices to connect to some central government regulatory machine which can auto-enforce certain activities). I think these debates should get at least as much attention as the drafting of laws. Whether we look to more traditional regulatory failures – from the novelty of the GDPR, to the sorts of building regulations that failed around Grenfell – or news stories from Elon Musk to Covid, I think it’s obvious that laws often don’t work as intended. And I don’t see that getting better; I can see many reasons it’ll get worse. So it’s worth really asking whether we need to do laws differently.


Additional Thought #1: Timing


Regulation can be like a joke in many ways. But one specific comparison – timing is important.


Some wise advice I was given in government: You always want to try and avoid the point where you are caught between two difficult options. A counterfactual history of road regulation is interesting from this point of view. Imagine a world in which early carmakers, seeing the opportunities for market expansion, rapidly built roads independently of states (perhaps a carmaking behemoth merged with a major road-builder, and bought other plucky road-building startups along the way). Slow action by people in government – many of whom quite enjoyed the benefits of the new roads – meant that driving on fast, traffic-light-free, roads became widely enjoyed.


The clear danger of these roads meant that everyone, including the companies, agreed that something must be done. But trying to introduce things like (i) mandatory, intensive driving tests and (ii) traffic lights proved unpopular. Months of expensive training before you can even drive a car! Having to completely stop every few hundred metres! Think of the productivity losses! Truly, the Nanny State overreaching itself.


But in the real world, the highly intrusive measures of driving licences and traffic lights are widespread and widely accepted. Mass driving evolved slowly enough, and with state influence integrated from early on, to incorporate these interventions. It also helps that road regulations are a national problem, and the potential harms of roads are very visible and close to home. I’m not sure any of this is true for many of the major tech policy problems facing us today – whether that’s AI, climate change, or information technology.


Additional Thought #2: What I think about Musk buying Twitter


As promised above, and given a few people have asked me for thoughts. I think a lot of the most concerned takes are overblown; I do not think Musk owning Twitter would turn it into a massive far-right hate speech space, akin to Gab or Parler but much bigger. Some rough stats: according to Twitter, they took down a few million tweets and accounts in the six months January–June 2021. By comparison, in 2020 there were apparently 200 million active users and 500 million tweets per day. So even if you account for objectionable content that was never taken down, imagine all the banned content had been left up, and assume it would grow with Musk’s encouragement, there’s a long way to go before hate speech makes up a substantial chunk of Twitter. I don’t think a change in ownership would be enough to do that.
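A back-of-envelope check of those rough stats (note the removal figure is only “a few million”, so the 5 million here is an assumed round number, as are the others):

```python
# Order-of-magnitude figures quoted above, not precise data.
tweets_per_day = 500_000_000   # tweets posted per day (2020 figure)
days = 181                     # January-June 2021
removed = 5_000_000            # "a few million" tweets/accounts removed

share = removed / (tweets_per_day * days)
print(f"{share:.4%}")  # roughly 0.0055% of all tweets in the period
```

Even if the true amount of objectionable content were a hundred times the removal figure, it would still be well under 1% of everything posted – which is the sense in which Twitter-as-Gab seems a long way off.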


What Musk owning Twitter could quite likely do is make the experience considerably worse for a marginalised minority of users, who would have even less confidence that they can get redress against abuse (and might face even more abuse, if Twitter becomes seen as a space which welcomes unpleasant forms of “free speech”). It’s worth remembering that things on Twitter aren’t always great for such users right now. But at least they were, maybe, trending in the right direction.


Separately, Musk might push for unpredictable tech changes to the platform, many of which would probably be directed at making money and might make the experience of Twitter quite different and maybe worse (e.g. paying for verification).


Also Musk is a man with a deeply worrying approach to treatment of humans, and I’d rather such people just owned less stuff in general.


On Trump potentially being allowed back on: I don’t want Trump back on Twitter, but I don’t think it matters to his chances of re-election. I don’t think Twitter helped him get elected the first time – he got elected largely with the votes of groups who are least likely to use Twitter, and I think he was always going to get loads of media attention even without it. I think we can blame The Apprentice much more.



 

Fun Fact about: False Friends

 

I continue to be baffled and entranced by the joys of German. There’s a concept in language learning called False Friends, or falsche Freunde in German. These are words which look or sound like words in your mother tongue, but mean something different. For example, seriös sounds like serious, but has the more precise meaning of ‘respectable’ or ‘professional’. Rente means ‘pension’, not ‘rent’. Schmuck isn’t an insult – it means ‘jewellery’. And best of all: Gift means ‘poison’. Lots more here.



 

Recommendations

 

If you liked (or were alarmed by) this week’s post, you might also like the work of James Plunkett, in particular his book End State. He’s a former Gordon Brown policymaker who’s now asking good questions about how states should function in the 21st century.


One of my new favourite Twitter accounts is Dr. Alice Lilly from the Institute for Government. She tweets excellent explainers about whatever the heck is happening in Parliament at any given moment, interspersed with personal reflections and some very funny content (including excellent self-parody).


The new Florence + The Machine album is very good. My favourites are this (depressing) and this (also depressing, but to a peppier soundtrack).


Finally: Eurovision. Those who enjoyed it, like me, may also enjoy French journalist Marie Le Conte tweeting about it from inside a British party (scroll up and down to read the whole thread, it’s a thing of wonder).
