
Sideways Looks #34: More on Meta, also Rupert Murdoch

Hello you,


It’s been around two weeks since Mark Zuckerberg announced changes to Meta.  There was a slew of pieces (best ones for me were this, this and this).  I try not to write these posts as instant responses to events, but rather to watch and absorb things in the meantime.  So here are some thoughts a couple of weeks on.

 

First observation: The raw politics here have been a major problem.  Zuckerberg’s video and Joe Rogan interview (in which Zuckerberg misrepresents reality multiple times), plus the specifics of how the changes were presented, show currying of favour with Trump rather than a good-faith exercise in problem-solving.  The surreal humour account Dril did a spot-on description back in 2017.  This unhelpfully distracts from serious and difficult conversations we should be having around content moderation on social media platforms, based on what we have learned from over a decade of experience.

 

(I, and AlgorithmWatch colleagues, said more about this political side in our statement).

 

I am trying to do a mental exercise: if Zuckerberg had made a similar announcement but in a more good-faith way, how would I have responded?  It’s also a prompt to consider the fundamental underpinnings of free speech and content moderation online.  That is a task well beyond one post, but I’m going to put down some initial thoughts below.  

 

But before this hypothetical “Zuckerberg is reasonable” world, some additional thoughts on the real-world situation.  Some of these are a bit nitty-gritty, so if you aren’t interested in the details of these particular changes, maybe skip Part 1.

 

 

 

Part 1: More on the Meta changes

 

For anyone who needs a refresher on the changes: Meta’s blog post, bearing the name of their incoming President of Global Affairs Joel Kaplan (a Republican, replacing the more liberal Nick Clegg), is less blatant in its pandering to Trump, and more interesting, than Zuckerberg’s videos.  Here is my annotated version.



 

My immediate response to the changes is that they don’t seem optimised to solve the problems Zuckerberg wants to address (or claims to).  In particular, Meta could keep professional fact-checkers but also add the new “Community Notes” functionality – and allow the fact-checks to be Community Noted, to hold the fact-checkers to account.  (Here is a good explanation of Community Notes).
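For the curious: the clever part of Community Notes is that a note is only shown as “helpful” if raters who usually disagree with each other both rate it highly.  Below is a toy Python sketch of that “bridging” idea – my own illustrative reconstruction of the publicly documented matrix-factorisation approach, with made-up names, sizes and training details, not anyone’s production code.

```python
import numpy as np

# Toy sketch of "bridging" ranking in the style of Community Notes.
# Each rating is modelled as:
#   rating[u, n] ~ mu + user_bias[u] + note_score[n] + viewpoint[u] * slant[n]
# The viewpoint*slant term soaks up partisan agreement, so a note only earns
# a high note_score (the part surfaced as "helpfulness") if raters on *both*
# sides of the latent viewpoint axis rated it helpful.

def rank_notes(ratings, n_iters=2000, lr=0.05, reg=0.1, seed=0):
    """ratings: users x notes array with entries in [0, 1]; NaN = not rated."""
    rng = np.random.default_rng(seed)
    n_users, n_notes = ratings.shape
    rated = ~np.isnan(ratings)
    r = np.nan_to_num(ratings)

    mu = 0.0
    user_bias = np.zeros(n_users)            # how lenient each rater is
    note_score = np.zeros(n_notes)           # the "bridged" helpfulness intercept
    viewpoint = rng.normal(0, 0.1, n_users)  # latent rater viewpoint
    slant = rng.normal(0, 0.1, n_notes)      # latent note slant

    for _ in range(n_iters):
        pred = mu + user_bias[:, None] + note_score[None, :] + np.outer(viewpoint, slant)
        err = np.where(rated, pred - r, 0.0)  # only rated cells contribute
        mu -= lr * err.mean()
        user_bias -= lr * (err.mean(axis=1) + reg * user_bias)
        note_score -= lr * (err.mean(axis=0) + reg * note_score)
        viewpoint -= lr * ((err * slant[None, :]).mean(axis=1) + reg * viewpoint)
        slant -= lr * ((err * viewpoint[:, None]).mean(axis=0) + reg * slant)

    return note_score  # rank notes by this; high scores "bridge" the divide
```

The interesting consequence: a note rated helpful only by one “side” mostly feeds the viewpoint-times-slant term and gets a low score, while a note rated helpful across the divide earns a high intercept.  Which is also why there’s no technical barrier to letting professional fact-checks themselves be Community Noted.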

 

If Zuckerberg really felt that potential hate speech was over-moderated, then he could just change the general balance of “how bad does something have to be”; say that grey-area cases will be more lightly moderated than before.  By announcing that he’s particularly interested in loosening moderation around gender and immigration, he’s basically waving a flag for attacks to start on those grounds.  Slashing his content moderation teams in the last few years – teams who were there to address many of the “mistakes” he raises – was probably also a bad idea (though he claims that will now be reversed).

 

It is nonetheless unclear how much large-scale impact Meta’s changes will have.  The numbers aren’t always clear, though some of my own investigations suggest fact-checking may have had a larger footprint than has been reported.  (Note that in that post I used the EU regulation that Zuckerberg criticises to get better data on so-called “censorship” – the anti-censorship crowd should like such sunlight!)

 

It’s even harder to judge the effects of allowing more slurs etc.  I’ll admit I slightly underestimated how much X would degrade as an information source under Musk (though largely due to changes to verification, and the way this means people can pay for more exposure even if they spout nonsense).  But Meta platforms are way more complicated than X, with many, many more users in more global contexts, and a mix of friendships, businesses, private groups, etc.  Plus it isn’t always clear how the US vs. non-US distinctions will work.  So it’s really a “watch this space” situation.

 

Another response: Zuckerberg’s repeated references to “censorship” oversimplify reality.  There are many ways to respond to certain kinds of “unwanted” or “negative” content which go beyond removing it.  You can promote or demote content to be more or less prominent in newsfeeds, you can add notes to things while keeping them visible, you can show auto-prompts suggesting that certain content might be inflammatory before it is posted, etc.  People will have views on whether these tools are good or not, but to call them “censorship” is somewhat overblown.  As regards fact-checking, Meta’s approach was nearly always ‘limit visibility somewhat’ rather than ‘delete completely’.
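To make that spectrum concrete, here’s a toy sketch of what a graduated moderation policy could look like in code – the thresholds and action names are entirely made up for illustration, not any platform’s real rules:

```python
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()     # take the content down entirely
    DEMOTE = auto()     # rank lower in feeds; still reachable by direct link
    LABEL = auto()      # attach context or a fact-check; keep fully visible
    FRICTION = auto()   # "are you sure?" prompt before posting or sharing
    NOTHING = auto()

def respond(severity: float, is_illegal: bool) -> Action:
    """Toy graduated policy: only the first branch removes anything."""
    if is_illegal:
        return Action.REMOVE
    if severity > 0.8:
        return Action.DEMOTE
    if severity > 0.5:
        return Action.LABEL
    if severity > 0.3:
        return Action.FRICTION
    return Action.NOTHING
```

Only one of those five branches deletes anything – which is the point: most of what gets called “censorship” in this debate lives in the middle three.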

 

I agree with Zuckerberg that Meta has previously made mistakes and that changes may be good.  Republicans and their sympathisers love to point to two particular examples: over-removals of “Covid misinformation” and (brief) suppression of a story about Hunter Biden’s laptop.  Both of these were, in my view, mistakes.  But the Republican responses to this, including from Musk and Zuckerberg, have been drastically exaggerated.  Republican content, including pro-Republican misinformation, tends to dominate the most-engaged-with content on Meta – far from being “censored”.  Trump was allowed to stay despite breaking Meta’s policies multiple times – in my view giving him special treatment was probably correct, but it hardly shows censorship!  Meta also makes mistakes against more left-wing / liberal causes, including pro-LGBT+ content and content about the war in Gaza.  Also a reminder that the main free-speech cheerleader, Musk, has blatantly done much worse things than the stuff he accuses his opponents of (here, here).

 

But Meta’s changes are more helpfully analysed primarily through technical/organisational questions than through ideology.  Mistakes will happen at the scale and complexity at which Meta platforms operate.  Various people and organisations, many on “my side” of the debate, should often have been more careful about going into instant “full attack” mode.  Joe Biden’s comment during Covid that Facebook was “killing people” was ill-advised, when the company was facing a genuinely challenging task.

 

However – at times like the recent announcement, or around similar Meta decisions, such as ending their data access programme for no good reason, or pushing a shockingly poor election monitoring dashboard onto the EU – there isn’t really much constructive response one can have.

 

OK, that’s me done with Meta.  Let’s get to the bigger question: how should we think about setting the rules online?

 

 

 

Part 2: Who runs this (online) town?

 

While on a bus yesterday to the island of Schwanenwerder just outside Berlin, to listen to the inspiring former Canadian Supreme Court Justice Rosie Abella (her story is incredible), I drew the below triangle of where power can lie regarding online content:



[Image: a hand-drawn sketch in red of a triangle; at the corners are the words “Private Company”, “Users”, and “Govt”.]

 

Coincidentally, just a few hours later, on a Tech Policy Press podcast interview with Kate Klonick, I discovered the same idea had been proposed – and much more developed – in 2018 by Jack Balkin.  The article is long but interesting if you’re into this topic (if quite US-focused).  But the below are my thoughts, not Balkin’s.

 

In an idealised world, you obviously don’t want all the power concentrated at any one of those corners.  Why?

 

For government / the state, it’s kind of obvious.  The ultimate risk, of autocratic control of information backed up by the law, is very bad.  But even aside from that, government is unlikely to be resourced, “in the weeds”, or responsive enough to deal with content moderation in an appropriate way.  It’s also just a bad look for governments to get too involved in these kinds of questions.  However, governments can play a useful role, in particular by creating serious accountability for the worst excesses of other actors – as discussed below.

 

If you give platforms too much power, then this can concentrate power in the hands of a small number of largely unaccountable people.  For big online platforms, that can be a lot of power.  As one CEO put it, “I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet”.  It can also mean prioritisation of business motives at the expense of everything else.  This raises particular issues if (i) some decisions are good for business but not good for users, like stuffing a platform too full of adverts [1], or (ii) the decisions are good for users but have bad collateral impacts on non-users, like distributing harmful material about innocent people or facilitating offline violence.

 

Platforms don’t necessarily like all this decision-making.  The Meta changes, and those in favour of them (irrespective of the politics), argue Meta is making too many decisions about content at the platform level (with some pushing by governments).  So fact-checking by authority figures appointed by Meta is replaced by user-driven Community Notes.  There will be less proactive moderation of potentially harmful behaviours (e.g. harassment, hate speech), relying on users to report them instead.

 

All these changes are ostensibly pushing power, and also responsibilities, away from platforms and towards users.  And it’s less obvious why that would be a problem, compared to hoarding of power by governments or platforms.  People power, right? So let’s think about users.

 

 

 

Power to the People?

 

Let me give my view of people on social media, which informed how I thought about the “counter-misinformation” work I did for the UK government.  There are many people who are actively seeking out content that I would regard as “bad”, if still legal – conspiracy theories, vicious partisan politics, dangerous medical misinformation. The immediate responsibility of the platforms and the state there should, somehow, be to avoid that turning into off-platform harm (e.g. through radicalisation to violence).  It is not to generally stop access to such content – if it’s legal – because we’ve decided “people shouldn’t be looking at this”.  That is unlikely to be doable, unlikely to be effective, and relies on unrealistic ideas of distinguishing good from bad content.

 

At the other extreme from these very active information-seekers, there are many, many other people who are passively absorbing content.  They very rarely post, certainly not publicly and/or about politics.  They may not have strong views on the things they are seeing, nor reason or inclination to fact-check them.  They are probably not spending time curating their social media to get only highly-relevant and high-quality information, and we can’t expect them to.  This isn’t to dismiss them as dumb or lazy (as some of the bad arguments about “fake news” claim).  It’s just the nature of how social media often fits into people’s lives.  When thinking about social media, I tend to keep this group in mind, not the core of hyper-engaged and active users.

 

A key trade-off of putting more power onto users is that it also puts expectations and burdens onto them.  From some moral or political stances, this is a good trade-off.  Their argument: people should critically analyse what’s placed in front of them, not rely on authority figures; they should be able to accept unpleasant behaviours, or use available tools to protect themselves if need be.  The core of liberal democracy – the idealised “public square” that social media is sometimes compared to – relies on the ability to dissent from what authorities tell you.  (For a cogent formulation of this position, see this piece from Alex Hohlfeld – Alex is a friend and a good contributor to the field of European online regulation, even if I disagree with many of his premises.)

 

I think this is an idealised view of liberal democracy, applied to social media.  While I agree liberal democracy needs to allow dissent against authorities, it also doesn’t function when authorities just vacate the field.  Social systems often naturally incentivise bad behaviour.  In the sphere of political philosophy this is Thomas Hobbes’ view; to simplify, in a state with continual conflicts for authority, the winning strategy is violence and strongman behaviour, and life is “solitary, poor, nasty, brutish, and short.”

 

Online this manifests in many ways.  It is much easier to get attention if you are prepared to be outrageous, even to the extent of outright lying or attacking other people.  Zuckerberg himself noted in 2018 that content which is nearly, but not quite, bad enough to violate policies tended to get more engagement – which incentivises these kinds of behaviours (including from political figures).  There can be financial rewards too, whether for scammers or for the now-famous Macedonian teenagers who learned that spreading fake news during the 2016 election would earn them money via clicks on adverts.  And if people are determined to harass or intimidate someone online, or if you want to completely escape a particular topic, that can be difficult to fully accomplish with existing tools.  I’ll discuss that more in a later post.  The key point for now: when authorities vacate the space, leaving it to users, bad stuff can rush in.

 

So a fundamental philosophy of “hand control to users, platforms hand back their authority” is not, to me, as good as it may theoretically seem.  I think platforms have a right and a responsibility to impose some authority on their own spaces.  Even Wikipedia, that stellar example of collaborative user-generated work, has a strongly authoritarian core – there is a lively debate about whether Jimmy Wales is a benevolent dictator – but with, crucially, democratic processes to change who gets to wield authority.  This isn’t necessarily to patronise or disempower users.  It’s just to acknowledge that many people want to use the platforms in ways which don’t require lots of energy and engagement.  And if the platforms ignore that fact, bad actors will capitalise on it.

 

The obvious follow-up question is “OK, but how much authority?”.  I actually prefer the question of “how is that authority exercised and challenged?”.  And this is where technology should be good – it should be providing creative and interesting approaches to these questions, which may not have been possible before.  There are exciting possible developments in this field.  It’s frustrating – though not surprising – that tech companies, which present themselves as being at the forefront of clever solutions, are reverting to blunt and unnecessary trade-offs.  But this post has got long enough, and I’ve raised enough challenges, so on a (hopefully tantalising) promise of forward-looking ideas in a future post, I’ll leave you for now.  It’s my blog, so I have the authority to do that.

 

 

[1] There is an argument that platforms would always use their power to please users, as otherwise users would just go elsewhere.  But we should consider that many platforms (particularly Meta’s) have become so ubiquitous that being a non-user is often difficult, either socially or economically.  Their business models also capture networks, which we can’t easily shift between – you can’t take all your customers or friends to a new platform with you, even if you dislike a platform.  (One proposed solution is interoperable protocols, but that’s for the next post.)  Also, making money online – e.g. by pumping a platform full of ads – isn’t necessarily good for users.  Cory Doctorow’s concept of “enshittification” lays this out more.

 


 

Fun Facts About: Murdoch and Censorship


I normally give a fun fact, and normally it’s unrelated to the rest of the post.  But I’ve just finished – as part of my attempts to draw parallels between online and pre-online media – a book from 2018 called The Murdoch Method, by the economist and frequent Rupert Murdoch advisor Irwin Stelzer (~86 years old when the book was published, and still writing at age 92 – coincidentally, over Christmas I was reading the book just as my dad was reading one of his columns).  I would recommend it for anyone interested in media, business, or generally interesting reflections from a broadly centre-right perspective.

 

A few things I found interesting.  Firstly, there is a strong running theme that Murdoch is driven by a desire to unsettle “the establishment”.  In Stelzer’s words, “the greater their outrage, the louder the satisfied chuckling in [Murdoch companies’] executive dining rooms”.  This sounds to me like Musk before Musk; although Murdoch reportedly sees working with regulators as a good thing to do (following a strategy Stelzer calls “inform, complain, compromise, and cooperate”) and has some reservations about pushing things too far.

 

Stelzer also argues – without fully excusing Murdoch – that some of the excesses of Murdoch executives stem from people like Kelvin MacKenzie of the Sun believing that Murdoch wants things pushed as far as possible, behaviour which Murdoch does not effectively rein in until too late.  I suspect there’s a like-attracting-like personality element too.

 

Stelzer also, despite sharing Murdoch’s strong business-focused attitudes, clearly finds it hard to decide how he feels about the “negative externalities” of some of Murdoch’s businesses.

 

Finally, something that really struck me – particularly with regard to the circumstances discussed above – is that Stelzer describes conservatives as pro-censorship.  This manifested in business issues when Murdoch was trying to bring together the more conservative side of his business with the growing creative part:

 

My [Stelzer’s] purpose was to acquaint the politically liberal, censorship-averse creative teams at Fox [entertainment] with the limits some conservatives, in and out of government, were pressing to have imposed on their assaults on existing American cultural norms.

 

This came to a head during a 1992 cross-Murdoch-industry event, when Stephen Chao, president of Fox Television, who had been asked to speak on censorship, hired local waiter-model Marco Iacovelli to strip during his speech to illustrate points in his argument.  The audience included future vice president Dick Cheney.  Murdoch fired Chao soon after.

 

This won’t come as a surprise to anyone who’s looked at the longer picture of political views of acceptable behaviour.  But it was an interesting experience to read during these times.  Stelzer relays a story of Fiorello LaGuardia, the Republican mayor of New York, who found a “balance between individual freedom and government intervention”:

 

“Potential audiences could frequent strip joints or buy off-colour magazines – neither was banned – but would have to try a bit harder to get access to those art forms, stripper aficionados by ferrying across the Hudson to New Jersey”

 

That sounds a lot like putting warning labels over online content but without banning it to me.  I wonder what Zuckerberg would think.

 

 



Recommendations

 

Podcasts: There’s going to be a lot of hype around tech in the new Trump term.  For some more considered takes, I’ve been enjoying Lawfare and The Gradient.  The Gradient’s interview with Thi Nguyen was an excellent re-immersion for me into some of the big philosophy/sociology-of-science topics that underpin a lot of discussions about technology.  Lawfare goes beyond tech, and all their output is generally great; the archives of their sadly discontinued show Chatter include many fun gems, including this one about spy disguises.

 

Improv theatre: I forgot a thing in my end-of-2024 newsletter!  Last year, after meaning to for ages, I finally started improv theatre classes with Rob Rogers and Scratch Theatre.  If you’re in Berlin, I’d recommend them.  If you’re not in Berlin, I’d still recommend improv classes.  It is so much more structured, and less “just go and be funny”, than one would expect, and I can see how it’s successful in building confidence in people.  I’m generally a confident person, but for me it’s been wonderful having a few hours a week where I can genuinely switch off my work brain and concentrate on, for example, being a dog who’s trying to foil a bank robbery, or a guy who’s running a squirrel wig factory.

 

Funnies: I am not going to comment here on the TikTok ban, beyond stealing a recommendation from Lawfare: Michael Longfellow’s wonderfully deadpan SNL piece on the TikTok ban.  Ironically, I couldn’t get it on YouTube, so I have to link to TikTok.  It’s still not as good as my favourite SNL skit, in which the amazing Kate McKinnon gradually reduces her fellow cast members – including Ryan Gosling – to tears of laughter.  This became a series which was so popular that when Kate McKinnon left SNL, they used a twist on the format as her final appearance.

 

 


 

Thanks for reading. Please do share with others using this link, and let me know your thoughts via this short poll.

 

 

 


