Admin Note: I am combining both my website and my newsletter onto one site (Ghost). This is the first post of the new site. I'm in the process of porting all my old posts here too (Wix makes it very hard to extract old posts, to the extent I am having to write custom code to do it...). If you notice any technical issues, or if you want to take this opportunity to unsubscribe, just let me know.
A Better Internet & How To Get There
As group chats seem to be entering the news a lot these days (the Onion, Reductress), I’ll start with my own story about one. I am part of a reasonably large online chat group which discusses political matters from multiple perspectives (I won’t give more details than that). In recent months it became an incredibly toxic space, with debates repeatedly descending into aggressive personal attacks.
I wanted to stay in, for fear of missing interesting insights or views; but I also felt unhappy whenever I opened the space. Others felt the same. There were numerous discussions about how to “solve” this, but all solutions descended into meta-disagreements – what would the rules be, who would have the authority to decide, should we really be “calm” in the face of views we strongly disagreed with, etc. Still, there was clearly a widespread feeling that the toxicity was going too far.
Then someone made a suggestion. We split out a separate space where there would be an expectation of high-conflict discussion. If things got heated in the main space, the conversation could be moved to the conflict space, visible to those who had opted in for that kind of discussion but not unsettling everyone else. A deceptively simple change to some infrastructure cutting through a difficult social tangle.
That vignette is not intended to mean that all digital problems can be solved with technical design choices. It is intended to say that when things are getting tricky, and problems seem intractable, maybe it’s time to widen the toolbox.
This is the first of a two-part piece. This first part summarises some current themes in the general social media discussion – in particular a possible shift from regulation to “pro-social design”. The second will be a more personal take from me on “what next”, and the barriers in the way.
From Regulation to… What?
As you may have noticed, there are a lot of concerns regarding politics-in-general at the moment. Many of these relate to my core interests, viz. how to make tech products (especially those related to sharing information) work towards socially good outcomes.
Previously I have seen regulation – in the sense of governments, courts, and similar institutions laying down rules for platforms/companies to follow – as an important tool towards this goal. Regulation has many issues, of course. There are political/conceptual questions: How to counterbalance those powers to avoid political capture, how to develop rules which are specific enough to follow but broad enough to address relevant issues. There are also implementation questions: Are regulators actually able to enforce rules, how to update them, what are unintended consequences (particularly when multiple regulations interact), etc.
But nonetheless I still think regulation is an important tool against a fundamental problem of giving companies too much power to act in their own interests: the immediate self-interest of companies and their users may have negative impacts on other people, on society, and so on.
However: While many, companies included, have always pushed back on regulations, the return of Trump has raised fundamental questions of whether tech CEOs have any real incentive to follow regulations – maybe the reverse is true, if those regulations are coming from the hated EU. And it’s not just a coalition of Trump-y politicians and business owners; figures from Mario Draghi to Ezra Klein are also raising (more nuanced) criticisms of regulation from a more progressive standpoint. These concerns should be engaged with seriously, in my view (though without lumping all “regulation” together, or retreating to the banal canard “we aren’t against regulation we just want good regulation”). But the anti-regulation mood does leave me watching the politicisation of communication channels, and the unconstrained outputs of AI providers, with an even greater sense of powerlessness than before.
There are other tools besides regulation, of course. Shaming companies can be difficult (again particularly nowadays, as “vice signalling” seems to be increasingly popular amongst the power-holders of America). But it can work – the Cambridge Analytica scandal does seem to have prompted some thinking about social responsibility amongst social media companies, even if it was based on mistaken (and in my view misguided) premises. One can also try and find overlaps between business interests and social good, though that’s also difficult (as I have noted, certain parts of EU regulations ensure the sorts of transparency that Musk, Zuckerberg, etc. claim to want, but that’s not likely to help their case). Good research is a tool which can support these other tools; though companies are making this harder too (again, a key part of EU regulation was to halt the increasing barriers companies are creating to research).
One possibility – a difficult one, but one that comes with a ring of positivity – is creating alternative technologies. These would provide many of the benefits of existing technologies while also paying greater regard to the negative impacts. The hope is that (i) enough users will move over to these new technologies to make a difference and (ii) the process of developing these alternatives will eventually make it easier for other (larger) companies to also join in. Think of electric cars, vegetarian/vegan alternatives, etc. The idea is not that alternative tech would solve social problems, but provide ways in which we can get certain benefits while minimising negative effects.
In relation to online spaces, this idea is often captured by terms like “pro-social design” or a “healthier online ecosystem”. Design and ecosystem have different connotations, which capture different aspects of this discussion, so I’ll deal with them both separately.
Design
The logic behind pro-social design is to design systems which reduce the following kinds of issue. Big online platforms (also search engines and other things, but I’ll use “platforms” as a shorthand) have allowed many more people to access, create, and share a wider range of content. However, these technologies also reduce barriers to things like: sharing or receiving misleading content; harassing people; making inflammatory statements in ways which can spread widely; finding material which strengthens existing beliefs, however wild those beliefs are; consuming endless content for extended periods of time; getting into fights; making ill-advised comments without considering who can see them; etc.
The way platforms are designed can even encourage such behaviours. By default many platforms foreground content which gets “engagement” (likes, shares, comments, views, etc.). The side-effect is that outrageous or inflammatory content is often promoted, because it gets that engagement, and such behaviour is therefore incentivised. This may be because people like to be inflammatory and get the attention (see Donald Trump), or simply because it’s the best tactic in a competitive market for attention. Such behaviours are not totally new. Politics has always been sensationalist, as have many other fields – see the old journalistic saying “if it bleeds, it leads”. But these behaviours can now involve a wider array of actors, and encounter considerably less friction and accountability, even in extreme forms.
Besides the political issues, there’s a general “benefit the user” argument. It is a human problem, not just a tech problem, that our attention is often grabbed by “clickbaity” stuff – even if that’s not a feature we actually like about ourselves, or content we’d really want to see lots of, given a conscious choice. Some scrolling through comedy skits, cat/dog videos, etc. can be a mood booster. Endless scrolling instead of something more fulfilling (or sleep) is cause for regret – as is doomscrolling or watching unpleasant fights online. As James Williams put it in his excellent (and free) book Stand Out Of Our Light, social media often gives us what we want, not what we ‘want to want’.
There are various alternatives for algorithmically deciding, from the huge range of possible content, what a user will actually see and in what order. I’m going to drastically simplify a couple to make the point.
One option is to prioritise “high quality” content. This harks back to earlier days of the internet: Google’s early success arguably came from its PageRank algorithm, which was very effective at sorting “quality” content from the large amount of dross online (even in the 1990s). It did this largely by looking at whether a page was linked to by lots of other pages – seen as a marker of quality and relevance – and whether it was linked to by other “high quality” pages. But it’s quite hard to apply that logic to social media content, which doesn’t have the same “linking to sources” dynamic as web pages. A cruder approach is to pre-select certain content as “quality”, but that raises questions of who does the selecting and how they decide (and how to avoid reinforcing existing authority structures). Or you can survey users on what they find to be “quality”, but this relies on users filling out surveys.
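For the curious, here’s a toy sketch in Python of that linking logic – an illustration of the general idea only, not Google’s actual implementation, and the four-page “web” is entirely made up. A page hands its score out to the pages it links to, so being linked to by many pages, or by highly-scored ones, pushes you up the ranking.

```python
# A toy version of the PageRank idea: repeatedly hand each page's score out
# to the pages it links to. The four-page "web" below is invented.

links = {
    "a": ["b", "c"],   # page "a" links to pages "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}          # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share     # pass score along each link
        rank = new_rank
    return rank

# "c" comes out on top: everyone links to it, including well-linked pages.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```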
Another interesting and increasingly discussed alternative is “bridging” approaches. These aim to prioritise content which appeals across diverse groups, rather than content which one group loves and another really dislikes. This is the approach behind X’s Community Notes system: a Note only appears on a tweet if enough people who usually disagree with each other approve of it. (This approach, I discovered recently, is also used by Anthropic to train its AI models.) The idea’s been floating around for at least a decade, but it’s gaining renewed attention due to (i) more urgent discussions of how to address polarisation and (ii) the attention on Community Notes. There have even been experiments in how it could be used in negotiations as fraught as the Palestinian peace process. A toy sketch of the core intuition is below.
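To make the contrast with engagement ranking concrete, here’s a deliberately oversimplified sketch. The real Community Notes algorithm infers viewpoints from rating patterns rather than relying on predefined group labels, so treat this only as the core intuition: a “bridging” score rewards an item only if every group approves of it, while a plain engagement score just counts total approval. The data and group labels are invented for illustration.

```python
# A deliberately oversimplified "bridging" score: an item only scores highly
# if every group approves of it, unlike a raw engagement score that just
# counts total approval. Data and group labels are invented for illustration.

from statistics import mean

ratings = {
    "inflammatory post": [("group_a", 1), ("group_a", 1), ("group_b", 0), ("group_b", 0)],
    "bridging post":     [("group_a", 1), ("group_a", 1), ("group_b", 1), ("group_b", 0)],
}

def engagement_score(item_ratings):
    # Fraction of all raters who approved, regardless of who they are.
    return mean(approved for _, approved in item_ratings)

def bridging_score(item_ratings):
    # Approval rate within each group; take the *lowest* group's rate.
    groups = {group for group, _ in item_ratings}
    return min(mean(a for g, a in item_ratings if g == group) for group in groups)

for item, item_ratings in ratings.items():
    print(f"{item}: engagement={engagement_score(item_ratings):.2f}, "
          f"bridging={bridging_score(item_ratings):.2f}")
```

On this toy data the inflammatory post gets a respectable engagement score but a bridging score of zero, because one group uniformly rejects it.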
A final example is improved filters, to give users more fine-grained control over what they want to see. These can be built by the platform itself or provided by a third party (so-called “middleware”). There is lots of room for improvement here: it is frustrating that very sophisticated tech has been developed to classify content and users in pursuit of engagement (particularly for advertisers), while filters for the user experience are often a blunt “block this keyword” or “unfollow this person”. But there are also experiments such as the Skyfeed plugin for Bluesky, a platform often seen as the leading Twitter alternative. I’ve started playing around with it; it’s not always super user-friendly, and can still be quite blunt in its tooling, but it’s a good start. And an important thing about Bluesky is that they welcome, rather than block, this kind of external experimentation.
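As a rough illustration of the gap between blunt and fine-grained filtering – to be clear, this is hypothetical Python, not how Skyfeed or Bluesky custom feeds actually work – imagine each post comes with topic scores from some classifier, and the user sets per-topic preferences which re-rank content rather than hard-blocking it.

```python
# Hypothetical sketch: per-topic preferences that re-rank posts, versus the
# usual all-or-nothing keyword block. Not how Skyfeed/Bluesky actually work;
# the classifier scores here are assumed to come from somewhere else.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    topics: dict = field(default_factory=dict)  # e.g. {"politics": 0.9}

def blunt_filter(posts, blocked_keyword):
    # The common approach: silently drop anything containing the keyword.
    return [p for p in posts if blocked_keyword not in p.text.lower()]

def graded_filter(posts, topic_weights):
    # Finer-grained: the user says how much they want of each topic (0..1)
    # and posts are re-ordered rather than removed outright.
    def score(post):
        return sum(topic_weights.get(topic, 0.5) * strength
                   for topic, strength in post.topics.items())
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("a", "Another furious political row today", {"politics": 0.9}),
    Post("b", "My dog learned a new trick", {"animals": 0.8}),
]
print([p.author for p in blunt_filter(posts, "political")])                        # ['b']
print([p.author for p in graded_filter(posts, {"politics": 0.2, "animals": 1.0})]) # ['b', 'a']
```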
In concluding, I’ll note a concern I have. These approaches might assume a certain kind of engaged user: someone who comes to the internet to talk about issues, bringing their existing views (which the platform has to somehow detect), looking to absorb different views, and happy to put in the effort to do so. That happens, of course, but it represents a particularly engaged sort of user. Also, there is always a dilemma about how much you make it a user’s – rather than a platform’s – responsibility to address problems, particularly at scale. But I’ll elaborate on those further in the second post.
Ecosystems
Where design might focus on individual tools or platforms, the ecosystem view thinks about how different platforms, audiences, conversations etc. might connect together in positive ways.
Some of the ideas aren’t new. A classic one is to have different spaces with different expectations of behaviour. Reddit and Mastodon take this approach, with many sub-spaces where different moderators set different rules. Indeed, one could argue this is simply the norm of human behaviour. It was a weird feature of Facebook, Twitter, etc. that the personal views, jokes, and rants of normal people could be seen by millions of total strangers, who might be operating on very different rules or assumptions of how to behave online (known as “context collapse”).
But some are more ambitious. Organisations like New_Public or the German broadcaster ZDF are interested in seeing how journalism and other mainstream media could use some social media style networking approaches to hear from, engage with, and inform audiences in better ways. There are also various experiments in trying to connect digital discussions directly to politicians and government, in ways that could even influence policy, the most famous being g0v in Taiwan.
A final important idea, often floated, is the “Fediverse”. This begins from the premise that a major issue with the digital world is the lack of real choice in the market for platforms. Some platforms dominate in certain kinds of content, so you miss out if you’re not on them; you also tend to collect networks of people, and it’s annoying to have to start again if you decide to move to a new platform. Which means people get “locked in” to Big Platforms, forced to use them to be digitally connected. You might hate Musk and what he’s doing to X, but you still want access to all the followers you built up, and to keep posting and reading relevant content without having to maintain a presence on X on top of Bluesky, Mastodon, and so on. So what if there were networks of platforms where you could keep your followers when you move between them, and post to and read from them all from one place? This is the idea of the Fediverse, which is already under construction (and even has buy-in from Meta, in the form of their Threads platform).
The ecosystem view has, somewhat by necessity, high ambitions. Proponents want to reduce the power of the current Big Tech-dominated ecosystem and increase the space for alternatives. The design view I discussed earlier can be ambitious too – and the two views are complementary, not competitors. But you could take a design view that just says “we’ll create our cool platform just for us, we’re not trying to change the world”. The ecosystem view is playing against the Big Tech incumbents.
The problem with the ambition – which I strongly support, by the way – is that it sharpens the problem I raised about design. Are these spaces appealing to enough people beyond the most engaged users? On a panel about New_Public at the re:publica conference last year, one of the panellists – I forget who – referred to the risk of “Broccoli Tech”: stuff which is maybe healthier than the alternatives, but just less appealing than a TikTok or a Facebook. Those platforms, by the way, may have a role in converting disengaged people into voters – often for more extreme parties. It seems unlikely that Broccoli Tech would help attract such disengaged audiences.
Given its centrality to the ecosystem view, I think the issue of achieving wider uptake is an important one. Lots of the most successful case studies point back to Taiwan – a country with a very unusual history and context (I’m partway through the book Plurality, which discusses this and other ideas related to much of this piece). Can we make such ideas work in other contexts? How? Is this just a case of waiting? How would we know it was succeeding?
Those are big questions, and a lot of them relate to a distinction I’ve been circling around in this piece – between creating better spaces for engaged online participants, and also trying to shape the experiences of more passive audiences. I hate to do this to you all again but – that’s for Part II. This piece has got long enough. For now, what I’ll say is I think there’s loads of great ideas out there. My interest is in studying successes and barriers in getting them actually taken up. So if you’re involved in such efforts – please reach out. I accept a variety of communication methods. Though if you’re too toxic I may put you in a separate room.
Fun Fact About: The Berlin Television Tower
I recently showed a friend around Berlin. Arguably her favourite sight was the Television Tower (Fernsehturm), a structure which dominates the Berlin skyline with its strange shape – which my friend referred to, in a phrase I will continue to use, as the “pointy disco ball”.

Can you guess what the structure at the top is actually based on? Think of the historical and political context of the time. The answer is below these other facts (copied/summarised from Wikipedia and from an excellent walking tour of Karl Marx Allee my friend Michel did – another great source of historical facts).
- With a height of 368 metres (including antenna) it is the tallest structure in Germany, and the third-tallest structure in the European Union.
- At the European Broadcasting Conference in Stockholm in 1952, which was responsible for the coordination of broadcasting frequencies in Europe, the GDR [East Germany] – not recognised politically by most countries at the time – was only allocated two frequency channels. Under these circumstances, it was impossible to cover Berlin's urban area by multiple small broadcasting stations without interference…a powerful large broadcasting facility at the highest possible location was required.
- Alongside its actual purpose of providing the best possible broadcasting services, the role of the tower as a new landmark of Berlin was increasingly gaining significance. For this reason, in 1964 the government demanded that the tower be built at a central location, an appeal that was supported by the SED leadership.
- When the sun shines on the Fernsehturm's tiled stainless-steel dome, the reflection usually appears in the form of a Greek cross. Berliners nicknamed the luminous cross Rache des Papstes, or the "Pope's Revenge", believing the Christian symbol a divine retaliation for the government's removal of crosses from East Berlin's churches. For the same reasons, the structure was also called "St. Walter" (from Walter Ulbricht [first leader of East Germany]). U.S. President Ronald Reagan mentioned this in his “Tear Down This Wall” speech. [This fact seems particularly fitting given recent news].
Finally, the answer (drum roll). The top of the structure is based on Sputnik, as an homage to the satellite launched by the Soviet Union in 1957.
Recommendations
For those looking for or offering Tech Policy-adjacent jobs in Europe (including the UK), I continue to run a Google Doc of jobs I hear about (updated more sporadically than I’d like, but you can also add jobs yourself).
Public Service Announcement for hayfever sufferers. Start taking antihistamines 2-4 weeks before The Pollen begins. I was a bit sceptical (sounds like a way to sell more antihistamines) but I did that last year and it seemed to make a big difference. If, like me, it’s the June pollen that gets you, start now or soon. You’re welcome.
I often include links in the main text as citations/evidence, but a lot are also pieces I’d recommend, so to repeat them: Stand Out Of Our Light, Plurality, prosocialdesign.org. I’d also add two important pieces by the excellent Daphne Keller - Amplification and Its Discontents and The Rise of the Compliant Speech Platform - and a complex but very interesting paper proposing a specific design for pro-social media that I’ll probably discuss more in Part 2.
On that paper: I also find Audrey Tang and Luke Thorburn, two of the authors, worth following on these topics in general. Audrey Tang is one of the leading lights in this field, particularly as former Digital Minister of – you guessed it – Taiwan.
In these crazy times for tech and democracy, the podcasts I am getting most value from are Lawfare, Ctrl-Alt-Speech, 404 Media, and (parts of) the Vergecast/Decoder (because some parts are more about gadgets). Also the newsletters Better Conflict Bulletin and those of Mark Scott and Jamie Bartlett.
Improv Theatre: I started classes in this last year, and have my first “proper” show tomorrow. People do classes for a variety of reasons, including to build confidence, and I can actually see loads of improvement in my fellow participants and surprisingly transferable skills (e.g. around seeing where problems are and trying to fix them). To connect it back to the themes of this newsletter, please see this piece How MAGA Media Is Like Improv Theater. It’s co-authored by Kate Starbird, who is very worth listening to on topics of online networking and disinformation – and, as such, has been subjected to a lot of attacks from Republicans.
Finally, Vienna: I recently took my parents to Vienna. They, like many others (myself included), loved it. I would particularly recommend the Leopold Museum, which brings many intellectual and historical strands together into art and interior design.
Sideways Looks #35: A Better Internet Part 1, also Hayfever
Alternatives for a better internet, when regulation seems increasingly in doubt.