Hello you,
Last week saw results from two landmark social media cases in the USA. These cases are the latest of many focusing on questions of social media addiction, mental health impacts, personal safety, and more, especially in relation to young people. I was in the middle of writing another post about this topic, particularly how evidence is being used in the EU context, which I’ll save for a follow-up post. For this post I want to consider how these - potentially very important - US cases may relate to EU approaches to regulating online platforms.
To note, I am more expert in EU policy than US policy, and I am neither a lawyer nor a legal scholar. This is also a topic where I’m “thinking aloud in public” to try and work out my stances. There may be mistakes, misunderstandings, or me getting tangled in my own thoughts. Let me know if so!
Before I get to that, some plugs. I’m speaking in Toronto about (i) the EU’s Digital Services Act and (ii) Digital Sovereignty, in and around the Democracy Exchange Summit in mid-April. I’ll also be in Brussels for CPDP in the penultimate week of May. Let me know if you’re around for either.
In my day job I continue to do work on topics including researcher data access (whether to public or non-public data), implementing the Digital Services Act, and online harms including non-consensual sexualisation. I was interviewed on the latter by Netzpolitik and the European Economic and Social Committee; way more cool outlets than my colleague Lejla, who was merely interviewed by Glamour Germany. I’m also part of a cool new project in Switzerland, exploring close academic-NGO collaboration methods with the University of Zurich, which I’m very excited about. If any of those are relevant to you, get in touch.
Bellwether Cases
The first case happened in New Mexico, and focused on the safety of Meta’s platforms, including whether they enable child exploitation. It was brought by state attorney general Raúl Torrez (i.e. a political figure arguing he is standing up for the citizens of his state). The jury agreed with Torrez against Meta.
A day later came results from another jury, in Los Angeles. This was from a lawsuit brought by a plaintiff known as K.G.M. against Meta and Google (as owners of YouTube), arguing that the platforms’ designs had negative impacts on her mental health, including addiction. She had also sued TikTok and Snap, but both settled before the trial. The jury supported her argument.
For a longer summary of both, see here.
Although K.G.M. is just one plaintiff, this case was a “bellwether”, which could open the door to thousands of similar cases in the near future, potentially adding up to billions of dollars in damages.
Such cases are a familiar US approach to governing online platforms - individuals come forward and claim harm, courts rule on the harms, and companies may then need to pay damages and perhaps make changes to avoid similar cases in future. Other places (including the EU) are instead looking to laws and regulations, including social media bans for young people. (Some US states are also trying various laws, but it’s a mixed picture.)
It’s worth remembering that US court cases like these have impacts beyond who is deemed “guilty” at the end and has to pay damages. And these impacts are relevant to other countries too. The discovery process can reveal documents - which can then be used by researchers and journalists more broadly. This can sometimes be a more effective way of gaining transparency than, for example, existing laws mandating data access (though some argue that publishing this information then leads to mischaracterisations). And, of course, such cases create precedents, arguments, and reference points which can be used in other cases.
It’s also worth remembering that the costs of the trials themselves, if they are allowed to proceed, can be substantial - even for companies - in addition to any damages paid. Plus there’s the press coverage, demands for senior execs (even CEOs) to testify, and so on. Arguments about how effective fines are as a deterrent can sometimes miss these broader effects.
I’m not going to relitigate (pun intended) the rights and wrongs of these cases in particular. Most of the opinions I’ve seen regard this as a win for platform liability. One particularly fully-argued example is from Mariana Olaizola Rosenblat of the NYU Stern Center - or this podcast from Your Undivided Attention, including stories from co-presenter Aza Raskin of what it was like to testify. Such pieces argue platforms have deliberately made design choices which “hook” users’ attention, with too little regard for negative effects. These cases, supporters argue, mean platforms finally have to provide evidence about their decisions, and be accountable for the effects.
There are counter-arguments from Mike Masnick of Techdirt (also here in podcast form on Ctrl-Alt-Speech, if you prefer that), who has long been an interesting opponent of social media addiction arguments; also from Eric Goldman, law professor at Santa Clara University School of Law. These counter-arguments say that the cases will lead to platforms doing things like (i) avoiding proactive risk assessments, as the resulting documents may end up as evidence in court, and (ii) being much more restrictive towards their users, in ways which may have negative consequences (including for, e.g., vulnerable users’ access to important information and connections).
A lot of the arguments hinge on whether platforms are being held liable for just their design choices, or also for their content. Focusing on design was a novel approach in these cases, and it allowed them to get further than previous ones. The distinction between design and content is important in the US context, where online content - as a form of “free speech” - is protected in a couple of ways.
Design vs. Content - US vs. EU
In the US, content is unusually strongly protected by the First Amendment, which means laws are very limited in how much they can make platforms do content moderation (i.e. how platforms choose to remove, leave up, promote, or limit the visibility of particular posts). Arguments on First Amendment grounds rely on a pretty US-specific cultural trade-off - that limiting government power is solidly worth the drawback of limiting the protections the state can provide - which doesn’t easily translate to other countries.
The other key US idea is Section 230, a rule from 1996 which basically says “platforms are allowed to do content moderation without that being a reason to hold them responsible for content which they host”. This may seem intuitively weird - I am making choices about what can appear on my platform, but then I can’t be held liable for content which appears after those choices? But the idea is that otherwise companies would have to either (i) avoid content moderation entirely, which would make them unusable, or (ii) do content moderation and accept liability for all the content they host, which would mean much more tightly monitoring and controlling every piece of content - extremely challenging (arguably impossible) at the scale of the modern internet. Even within the US, whether Section 230 is still fit for today is a heated debate, and one I won’t get into here.
The general idea of “avoid making platforms liable for all their content” is reflected in non-US laws - for instance, the EU’s Digital Services Act makes it illegal for EU countries to oblige platforms to proactively monitor all the content they host. But the question “does making laws about design actually lead to laws about moderating content?” is less of a red line in Europe (though this is often exaggerated and misrepresented into claims that EU laws are designed to do censorship - to which, no: see this rebuttal, and the US administration’s own research failing to find EU censorship).
It’s also worth noting that courts in EU Member States, not just the EU itself, are litigating on various matters - Mark Scott has a good round-up. But the DSA is, in general, the EU’s flagship approach. National laws have to fit around it; and, given the US Congress’s inability to pass federal laws, it is likely to be the main Important Online Platform Regulation in the northern hemisphere for some time to come.
But beyond legal distinctions and similarities, the important narrative trend of focusing on social media companies’ business and design decisions - how people are served content, rather than the exact content itself - is also present in the EU.
Business Practices, Not Politics
Here’s how that shift looks in the EU. The previous (and worryingly gung-ho) EU Commissioner Thierry Breton attacked platforms (particularly X) for the sorts of content they were hosting, much to the criticism of groups who otherwise supported the DSA. There were also codes of conduct on Disinformation and Hate Speech - though these covered both design and content, the names obviously draw attention to the latter.
Breton has since been replaced by Henna Virkkunen, a more “normal” Commissioner. The EU Commission has also become more cautious of overtly political fights with the US Government, which - as above - has attacked the DSA repeatedly. So the Commission has gone for less divisive and less partisan approaches - which often means going after poor business practices.
In the EU’s fine against X, the Commission avoided politically tricky questions of whether X is (e.g.) hosting hate speech or disrupting elections; the emphasis was on X having shoddy business practices around transparency - see this analysis by Matteo Fabbri; LK Seiling and I also commented specifically on the data access aspect. A lot of the Commission’s recent language around online platforms, including the proposal of a potential new Digital Fairness Act,* makes heavy use of consumer protection arguments.
The Trump administration and its allies will nonetheless misrepresent and attack any and all EU laws. But the EU’s current approach may be a good way of appealing to parties, including within the EU, who may be tempted by Trump-esque criticisms of the DSA but engage more with “business behaving badly” arguments than with “hate speech and disinformation” arguments.
This isn’t just about US platforms. Relevantly for our topic of young people and mental health, in February the EU Commission announced** it had:
“preliminarily found TikTok in breach of the Digital Services Act for its addictive design. This includes features such as infinite scroll, autoplay, push notifications, and its highly personalised recommender system.”
… “TikTok did not adequately assess how these addictive features could harm the physical and mental wellbeing of its users, including minors and vulnerable adults. For example, by constantly ‘rewarding’ users with new content, certain design features of TikTok fuel the urge to keep scrolling and shift the brain of users into ‘autopilot mode’. Scientific research shows that this may lead to compulsive behaviour and reduce users’ self-control”.
(“Preliminarily” means that TikTok now has a right of defence - to access the documents that form the EU’s evidence base, challenge them, etc. This is how the Digital Services Act works in principle - it creates a back-and-forth dialogue which eventually leads to some resolution.)
Note again that the question is one of design and business choices - did TikTok make certain choices, and then fail to assess their impacts on young people? However, whatever the narrative parallels, the approach to regulating is different from the US court cases.
In particular, the Digital Services Act uses language around assessing and mitigating “systemic risks”, which include “any actual or foreseeable negative effects in relation to… the protection of public health and minors and serious negative consequences to the person’s physical and mental well-being”. How exactly a “systemic” risk can be determined is unclear and highly debated - I co-wrote this “dual track” proposal with Michele Loi for AlgorithmWatch. But it’s clearly different to the approach of the US cases; it doesn’t focus on harms to one person, but rather on potential risks to many people.
The EU approach trades off clarity and exactness (“what actually happened to this one person?”) for proactivity and breadth (“how will you assess and reduce a broader set of risks?”). I generally lean towards the EU approach, as I prefer encouraging proactive risk mitigation. I accept it has issues around how to clearly comply, how to minimise regulatory overreach, etc. - which I think should be addressed with iterative and transparent dialogues between regulators and regulated parties (see the Dual Track proposal). But comparison with the US cases has made me think: before these two decisions, there was little to no accountability for harms; now, afterwards, the courts and platforms may suddenly be dealing with thousands of cases. An iterative, dialogic approach like the EU’s may seem more attractive now - less of “one case, then a floodgate opens”.
A practical point of comparison, too, regarding timeframes. Regulation and legislation are often seen - often correctly - as very slow methods. But in this instance, both processes had comparable timeframes. The Digital Services Act has been in force since 2022. The two cases above originated, in one form or another, in 2022/2023. This, of course, doesn’t account for the time spent negotiating the DSA - it was originally proposed in 2020. But nor does it account for the various earlier attempts in US courts which led to these cases. The comparison is a bit stretched, as the two methods are trying to do different things - produce specific vs. generalisable results. But it’s worth noting that the idea that courts are more efficient than legislation may not, in this case, be clearly true.
One final point on which it will be interesting to see whether the EU parallels the US cases: to what extent does focusing on business practices make the alleged harm - in this case, addiction - somewhat of a side-issue, or will it become more central? There are some quite deep-seated questions, advantages, and disadvantages which stem from that; I’ll finish this piece by drawing them out.
Is this addiction?
I find it interesting that in the US cases, as far as I can tell, “addiction” was never clearly defined. This is unsurprising. Social media addiction as a concept has never been conclusively codified, and may never be. This is the point I plan to develop in the follow-up blog, but to summarise:
There is no consensus around a medical definition. Attempts to introduce “internet addiction” into the Diagnostic and Statistical Manual of Mental Disorders have failed, as have attempts for other addictions not tied to specific substances (the one exception is gambling addiction). Rhetoric which paints social media as a new tobacco has many problems. There are arguments that any “internet addiction” should instead focus on the behaviour the internet is facilitating - sexually risky behaviour, problem gambling, etc. - not on the technology itself.
Researchers and courts can, of course, develop definitions. You could take a broad definition - e.g. a 2017 paper used in a previous case, School Districts v Social Media, states that “use of Facebook that interferes with regular life, or amounts of Facebook use that exceed a person's desires/intent may indicate an addiction.” But (depending on how words like “interferes” are read) that could be a low bar, and it raises the question of whether “addiction” is an appropriate term. Or, if one is committed to demonstrating “addiction” in a way that might convince doubters, you can go for a higher bar, with somewhat clearer markers of harm - such as impacts on sleep or (from the Cleveland Clinic) “experiencing problems at school or work and in friendships or other relationships because of your social media use”. But then the key word is “because”. It’s a hard job to conclusively pinpoint the causal effects of social media in highly individual circumstances, particularly at scale.
Maybe courts can just rely, roughly, on the “I know it when I see it” argument used in 1964 by US Supreme Court Justice Potter Stewart to define obscenity.*** This may be necessary, to stop the real questions at stake (“is this product design causing harm?”) being sidetracked into definitional ones (“is this really addiction?”) - such side-tracking can even be a deliberate tactic. But less charitable interpretations can point to verbal sleights of hand where “addictive design” is used as a rhetorical device without the label ever being justified. I’m not sure how long such arguments can survive critics without more robust reflection.
But do such justifications really need to go as far as proving “addiction” to say that platform design is bad? We can find alternative metrics or terminology. In a “Heartbreaking: The Worst Person You Know Just Made a Great Point” moment, in 2023 Elon Musk came up with the great metric of “unregretted user minutes” (which has also been studied, not just tweeted). In some of their internal studies, Meta talk about “problematic use” rather than “addiction”. Personally, I have always preferred to talk of “compulsive” rather than “addictive” design. Products which encourage users to scroll compulsively and feel bad afterwards aren’t, in my view, a net good for the world - even if they aren’t going as far as addicting us. The excellent (and free!) book Stand Out Of Our Light by former Google engineer James Williams makes this point - technology can also be designed to give us what we want to want (flourishing, learning, etc.), not just what we want in the short term (easy entertainment).
Whether “compulsive design” is a regulatory matter brings you to complex moral and political questions about what counts as “harm”, the role of business and technology within society, etc., which I’ll save for the follow-up post. But the bigger-picture point to take away is this: underpinning regulatory approaches are visions of what good regulation looks like.
Are regulations supposed to be letting platforms run largely free, stepping in only when users (or some number/percentage of users) have experiences so bad that they are comparable to drug or gambling addiction?
Or should regulations be incentivising platforms to think of how users would be overall happier - or at least, make design options apart from ‘maximum attention suck’ more available to users - whether or not that’s the easiest route to monetising people?
Or some middle ground? Strong punishment for clear harms (addiction), plus different countervailing forces against market pressures towards unhealthy design (compulsion)…?
To decide that may ultimately require grappling with the question of what “good” looks like. And that’s the kind of political - even moral - question that current narrative shifts towards “is this business behaving in a shady way” seem to want to avoid.
* In reference to the Digital Fairness Act: Yes, another possible EU digital regulation. My personal take - and this is absolutely my personal take - is that the EU needs to rely less on drafting new laws to address problems and instead (i) focus on implementing existing laws effectively and (ii) find better ways to update existing laws if they have gaps or issues. The latter is, I’ll accept, challenging for many reasons - but I think it’s very important.
** In reference to the preliminary finding on TikTok’s business practices: Plus, the idea of protecting young people is relatively bipartisan. Also, TikTok is (or was at the time) a Chinese-owned platform; and the case was conveniently announced just before the Munich Security Conference, where many Europeans were concerned Marco Rubio was going to do a J.D. Vance and attack Europe for censoring American platforms. I have no certainty over whether the timing was deliberate, but it seems convenient - “we don’t just go after American platforms, actually”.
*** In reference to Justice Stewart’s “I know it when I see it” argument: I am aware this approach has been debated extensively, and I’m still not sure where I land on it. Any favourite readings on it, please let me know!
Fun Fact About: Bauhaus / Dessau
Last weekend I went to the town of Dessau in Saxony-Anhalt, about 1h30 southwest of Berlin. This is the home of the Bauhaus, the central school / workshop / meeting place from which came the famous design style (or, to use an untranslatable German word, a form of Gesamtkunstwerk). I hadn’t realised that the Bauhaus as a school operated for less than 15 years, from 1919 to 1933, before it was closed by the Nazis, as it was quite a left-wing place. (Today, Dessau and the surrounding region of Saxony-Anhalt are among the strongholds of the far-right Alternative für Deutschland party.)
I’ll say, unless you’re Bauhaus-mad, the building and exhibition are probably not worth a special journey. But I did subsequently learn that Dessau is an important example of a city where population shrinkage - from 104,000 inhabitants in 1989 to fewer than 70,000 today - has resulted in ambitious schemes to demolish and rewild outer parts of the city, creating an “urban island” surrounded by “green corridors”. It’s one of a few such examples in the former East Germany. The scheme has been running in phases since the 2000s, renewal of the plans was discussed last year in a citizens’ dialogue, and for anyone who (i) reads German or is prepared to online-translate a PDF report and (ii) is very keen on urban renewal, the plans for 2025/2040 are discussed here.
I’ll say, from my brief trip, I didn’t get a feel of an urban island surrounded by green corridors. But I don’t think the aim is a beautiful tourist destination; rather, it is to tackle the depressing feeling of empty, unused space which, I’ll be honest, does strike me in quite a few areas of former East Germany, including parts of Berlin.
Although the exhibition wasn’t particularly special, the day out in a group of arty friends was highly enjoyable - thanks to coordination from Vittorio Cerulli (who consults for companies on achieving socially good purposes) and Michael Berger (who does design and photography and has interesting thoughts on both).
Recommendations
A work-related one, but I was the lead author on AlgorithmWatch’s Guidance for using Generative AI responsibly. Apparently various people are finding it useful, which is good to hear, and maybe you will too. Or, alternatively, you can enjoy this promotional video of a conversation between me and me.
This piece - "AI got the blame for the Iran school bombing. The truth is far more worrying" by Kevin T Baker - was excellent, and also an interesting example of an original Substack piece being picked up by The Guardian. It argues against focusing on the (incorrect) idea of the Claude LLM being pivotal to the story:
It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. ... Calling it an “AI problem” gives those decisions, and those people, a place to hide.
Outside AI, I continue to read, and worry, about how to make government and the state work better in an age of modern technology, to deal with the general dissatisfaction that is (in my view) leading to worrying political developments. So here are some books about that:
Last time I mentioned Recoding America by Jennifer Pahlka. It is really good and worth a read. Also her Substack Eating Policy, though it gets a bit more detailed and wonkier than the book.
Last time I said I was partway through Abundance, which I’ve now finished. It was fine, but I’m not sure I got much from the book that isn’t already in the million thinkpieces and podcasts it’s spawned - and I still feel it avoids some of the harder political trade-offs of “streamlining” regulations, which somewhat inevitably makes accountability harder (that may be an acceptable trade-off, but it’s still a trade-off).
Sam Freedman’s Failed State: Why Nothing Works and How We Fix It is excellent, if you’re interested in the UK. I have recommended his blog (Comment is Freed) more times than I care to mention.
When the Clock Broke, by John Ganz, looking at political dissatisfactions and Trump-esque populism around the 1992 US election, was eye-opening (I always saw Clinton as a Blair figure, riding in on a wave of optimism and public support). It also had plenty of vivid characters and stories, particularly around Ross Perot - who, after he became rich, bought back the bungalow he grew up in and - because it had since been painted - had the entire building taken apart and rebuilt with the same bricks, turned around to reveal their unpainted sides. “The house sat empty but immaculate, watched over by a caretaker who lived above the garage”. I’m not sure I took away any particular big-picture lessons, though, even if I was very engaged. But there’s an interesting afterword precisely about these challenges of drawing parallels with the past, with a part I think worth quoting at length:
This book was based on an intuition that the disparate phenomena I catalogued were actually reflections of a single underlying phenomenon. In her review for The New York Times, the critic Jennifer Szalai used Raymond Williams's term "structure of feeling" … Williams provides a helpful concept insofar as it attempts to combine vibes with a structure: a bounded shape, an identifiable set of qualities.
… “Elements by themselves never cause anything. They become origins of events if and when they suddenly crystallize into fixed and definite forms.” In like manner, this book catalogues not the causes of Trumpism so much as its constitutive elements…
… this book can be read as the fossil record of Trumpism. Another possibly helpful idea from biology is homology, anatomical structures that may serve different functions but share a common ancestry, and analogy, structures that serve a similar function but have evolved independently, due to evolutionary pressures.
I think, in the confusing times we live in, that’s a pretty good justification for writing about anything.
A European Response to Social Media Addiction: Also, Bauhaus.
Last week saw results from two landmark US court cases on social media harm. But is that relevant for Europe?