
Sideways Looks #13: Can a Computer Tell You're Happy? Also, CGI

Updated: Sep 3, 2021


I hope you have had a good week with minimal trouble from weather or viruses. This week, I’ll be talking about ‘sentiment analysis’ – using computers to detect how positively people feel about things, and some of my concerns about that.

There will be no Sideways Look next week. I’ll be enjoying a holiday in exotic Grantham, where I believe I shall be visiting a water park and playing an elderly earl in a murder mystery. I presume not simultaneously.

Please do keep your thoughts coming via the poll, and if you know anyone who might find these newsletters interesting please do share with them via this link.



Thought for the Week: Sentiment Analysis


There’s a good anecdote in one of the various accounts of New Labour (I think Servants of the People), when No10 staff were analysing press responses to a major news story. They’d gone through with highlighters marking positive and negative responses. They were pleasantly surprised by the results – until they saw that Jonathan Powell, Blair’s chief of staff, had highlighted pretty much all his cuttings as negative. Had they been overly optimistic? Had this experienced operator noticed negativity that they’d missed?

Turns out, of course, he’d just got the highlighter colours mixed up.

This is a pre-algorithm version of sentiment analysis: look at some text, decide if it’s positive or negative. Or possibly if it’s happy, sad, angry, or similar. In the old days it would be done by humans, who are (generally) good at picking up nuance but also have a habit of disagreeing on interpretations. Nowadays, it’s increasingly done by computers.

Whenever you say something like ‘computers are being used to interpret feelings’, one can crudely predict two responses. One is can they really do that? The other is wow isn’t it cool that they can do that? So I’ll respond to both of those.

🤔 Can they really do that?

On the surface, it seems hard. Humans don’t just say straightforward things like ‘I love the Sideways Looks newsletter!’ (to pick a random example). We use sarcasm and irony. We give mixed and qualified opinions. And, crucially, we express positivity/negativity about particular things. Take the sentence ‘I hate using the internet, but the Sideways Looks newsletter makes it worthwhile’ – is that positive or negative?

Very quick-and-dirty examples of sentiment analysis take a ‘bag of words’ approach, where words are given positive/negative scores and these are added up. It’s obvious how that can get things wrong. But there are much cleverer approaches – for example programs which (i) separate fact-statements from opinion-statements then (ii) check what thing the opinion-statement is actually about and finally (iii) see what words are being applied to that thing.
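To make that weakness concrete, here’s a minimal sketch of the bag-of-words idea in Python. The lexicon and its scores are invented for illustration – real lexicons run to thousands of weighted words:

```python
# Minimal bag-of-words sentiment scorer: each known word carries a fixed
# positive/negative weight, and a sentence's score is simply the sum.
LEXICON = {
    "love": 2, "worthwhile": 1, "great": 1,
    "hate": -2, "terrible": -2, "awful": -2,
}

def bag_of_words_score(text):
    # Crude tokenisation: lowercase, strip basic punctuation, split on spaces.
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(word, 0) for word in words)

# The mixed sentence from above: 'hate' (-2) outweighs 'worthwhile' (+1),
# so a compliment comes out as negative -- exactly the failure mode described.
print(bag_of_words_score(
    "I hate using the internet, but the Sideways Looks newsletter makes it worthwhile"
))  # -> -1
```

The score adds up to -1 because nothing tells the machine that ‘hate’ is aimed at the internet rather than the newsletter – which is what the cleverer, opinion-target-aware approaches try to fix.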

And there are even more sophisticated approaches, based on machine learning. In short: if a human can learn to distinguish between things, a computer probably can too. The human just needs to give the computer enough labelled examples. If you ever wonder why you sometimes have to click pictures of bikes or traffic lights to get into a website – you’re training systems that will be used in self-driving cars. In a recurring theme of these newsletters (see here, here) the computer need not understand why something is or isn’t a bike; it just learns to recognise the features. Which, in some ways, isn’t so different to a human. I personally find recognising irony much easier than explaining why something is ironic.
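As a toy illustration of learning from labelled examples (the training sentences and labels here are invented, and real systems use far larger datasets and proper statistical models), here’s a classifier that is never told any rules – it just counts which words turned up in positive versus negative training sentences:

```python
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) training pairs."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label new text by which side's training vocabulary it shares more of."""
    words = text.lower().split()
    pos_score = sum(counts["pos"][w] for w in words)  # Counter returns 0 for unseen words
    neg_score = sum(counts["neg"][w] for w in words)
    return "pos" if pos_score >= neg_score else "neg"

# Invented training data -- the 'teaching by examples' step.
examples = [
    ("what a lovely sunny day", "pos"),
    ("this newsletter is brilliant", "pos"),
    ("awful weather ruined the day", "neg"),
    ("this service is terrible", "neg"),
]
model = train(examples)
print(classify(model, "brilliant newsletter"))  # -> pos
```

Note the domain-specific limit described next: a model trained on these four sentences has learnt about weather and newsletters, nothing more.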

The problem here is that this sort of learning is ‘domain specific’. Teaching a computer to recognise a bike won’t help it pick out a cyclist in a line-up. This is where humans win – we can draw on broader contextual knowledge of cycling to know that the fitter-looking person with cuts on their ankles and clad in lycra is more likely to be the cyclist. So you can train a machine to recognise a frequently sarcastic statement like ‘wow isn’t this weather great’, but that may not generalise to all sarcasm ever.

(This echoes a much wider debate about domain-specific AI vs. 'artificial general intelligence').

But if you’re training a machine to recognise positivity and negativity about a specific domain, why not do better than generic terms like ‘positive’ or ‘sad’? I used to do sentiment analysis on tweets about Brexit in the Theresa May era. Asking for 'negative about Brexit' created bizarre groups combining A.C. Grayling (hated all Brexit) with Nigel Farage (liked Brexit, hated Theresa May’s Brexit). It didn’t tell you anything useful about actual opinions. That was much better summed up as ‘hard Brexit’, ‘soft(er) Brexit’ and ‘Remain’. And, as it turned out in 2019, a huge swathe of people who didn’t align strongly with any of those camps, but just wanted some Brexit to get done.

So the answer is – yes, computers can recognise something we might call ‘sentiment’ or ‘opinion’, but with limits and trade-offs between accuracy and generality. As such, it's worth asking if it's really the best use of the available tech.

🤩 Isn’t it cool they can do that?

A contrary view - even the crudest ‘bag of words’ sentiment analysis can be useful to a good analyst. If you’re looking at something which everyone is being super negative about, telling a computer to only give you positive stuff might uncover divergent or niche opinions. You may have to scroll through a long list of things the computer got wrong – e.g. ‘this story is amazing’, not meant in a complimentary way – but at least you’ve applied some filters in a bid to uncover something interesting.
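A rough sketch of that filtering use (again with an invented lexicon and invented comments): rather than reporting a percentage, just surface the positive-scoring minority for a human to read – sarcastic false positives and all:

```python
# Using a crude scorer as a filter, not a metric: pull the few
# positive-scoring items out of a mostly negative stream for human review.
LEXICON = {"amazing": 2, "great": 1, "love": 2, "awful": -2, "broken": -1}

def score(text):
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

comments = [
    "this rollout is awful",
    "everything is broken again",
    "amazing how badly this went",   # sarcasm: scores positive, isn't
    "i love the new search feature",
]

# The analyst reads this short list, discards the sarcasm by hand,
# and is left with a genuinely divergent opinion worth investigating.
divergent = [c for c in comments if score(c) > 0]
print(divergent)
```

The computer’s output is wrong in an obvious, checkable way – which is fine when it is a filter feeding a human, and dangerous when it is reported as a headline number.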

But that’s the point – tools are only as good as the users. I’m reminded of a professor who told me she’d once tried to organise an event on Twitter and it hadn’t worked, from which she confidently concluded that Twitter was a bad tool for organising events. I didn’t have the guts to point out that I have never successfully played an Elton John song on piano, and had therefore concluded the piano is a bad tool for playing Elton John songs.

But sentiment analysis looks easy. And that leads to two problems. The first is that it makes real analysis less attractive. In my experience people, whether technically minded or not, don’t naively say ‘84% positive? Sounds great!’. They’ll say things like ‘yes obviously it’s not completely accurate, but as we've got it…’. You can try to respond that the 84% number doesn’t really mean anything, that sentiment analysis is better used as a filter to identify interesting things, or that maybe we should train a bespoke machine learning model to pull apart actually useful views… but if things are busy, see how far you’ll get.

The second, deeper, problem is how a convenient-but-imperfect measure for a thing can become the thing. I’ve had debates about 'good schools' and 'neurotic people' in which my attempts to say ‘well these are quite difficult and value-laden and risky concepts’ have been met with ‘there’s literally numerical definitions’. And it’s quite hard to shift that thinking once it’s embedded, particularly in big organisations or networks where complex measures don’t always travel well.

(Related ideas about measurements-becoming-things are Campbell's Law and Goodhart's Law – though the latter's best-known phrasing is actually due to Marilyn Strathern rather than Charles Goodhart).

So the ultimate risk is that ‘can a computer tell you’re happy?’ becomes ‘the computer tells you you’re happy’. I think that particular example is probably unlikely to happen – but who knows? More importantly, this kind of quantification is happening all over the place, from job hiring to justice. It’s convenient, and it allows damaging backlogs to be unblocked at pace. But once embedded it may be pretty hard to escape. I don’t know what a computer would say, but I feel pretty negative about that.

In case you’re wondering, by the way, this piece was apparently 65.3% negative. I'll try and be more positive next time.


Fun Fact about: CGI


Remember the RandomReader from Sideways Looks #8? I’ve been trying – with mixed success – to use it habitually. This week it led me to Develop3D magazine, which in turn led me to the story of special effects technology used by Disney to create The Mandalorian series. Basically, they surrounded the set with enormous screens which projected the backdrop around the cast and crew. The footage is incredible, and it’s also amusing to see set dressers casually wandering across an alien planet. I’d be really interested to know the psychological effect on the actors; I’ve always imagined it’d be really hard to get into role when your surroundings feel more like a parking lot.

A related fact – working on the film Interstellar allowed astrophysicist Kip Thorne to develop extremely powerful new tools for visualising black holes, from which (as he told Wired magazine in 2014) he reckoned he could get at least two new scientific articles. Looks like he managed it.




Podcast on political science: I’ve recently discovered the Not Another Politics Podcast, which looks at news stories by dissecting related academic political science articles. It’s in the same podcast family as Capitalisn't, which I’d also recommend (more economics/finance focussed).

Article on housing: It’s not my political alignment, but I do follow the Conservative Home website to try and understand other points of view. This piece on different views on housebuilding within the Conservative party was a good example of that.

Eco-friendly clothing: I’ve finally decided the pandemic is easing enough that I should probably update my wardrobe. A familiar problem of sustainable shopping is the price tag. Rapanui is the most reasonably priced UK-based site I’ve found so far, particularly if you buy their bundles.

Free online learning: As someone who tries to make money from creating training courses, I probably shouldn’t reveal this. But the website Coursera has loads of really good free courses.

Dog furnishings: I recently bought some dog bookends as a present for a friend. When it arrived, the poor thing had broken at the leg. So I contacted the sellers, who unexpectedly let me keep the broken one and sent me a new one. The broken one was eminently fixable, so I now have my own dog bookends (now named Billy). So this is a recommendation for the very good service of the George Whitstable website, as well as the concept of dog furnishings in general.


Thanks for reading. Please do let me know your thoughts via this short poll.




