Hello you,

This week’s Sideways Look is coming to you from the floor of a train from London to Sussex, after I was unable to find a seat. Nature is healing. (Relatedly, due to the dodgy internet there’ll be fewer hyperlinks than usual. I’ll try to make things searchable, but if anything is particularly of interest please feel free to contact me.)

As always, very keen to hear your thoughts via this short poll. Though answers along the lines of ‘more content from train floors’ will be rejected.

Have a lovely sunny Sunday,

Oliver
Thought for the Week: Experiments and Creativity
The “Feynman Method of problem-solving” goes as follows:
1. Write down the problem.
2. Think really hard.
3. Write down the solution.
Despite being named after Richard Feynman, one of the 20th century’s most famous scientists, it’s arguably not a very scientific approach to problem-solving.* It misses that crucial step of stepping outside your own head and testing your ideas against the harsh real world. Also known as “doing experiments”.

Within science, experiments can seem to have very rigid rules. Some say an experiment has to involve (1) proposing a specific hypothesis and (2) confirming whether it is right or wrong. Some say experimental results have to pass thresholds of statistical significance (e.g. using the oft-misunderstood ‘p-value’). These are useful guides, but not black-and-white. For example, Einstein’s ideas didn’t pass all the initial experimental tests, yet turned out to be right. And the threshold at which you deem an experiment ‘complete’ might depend on the risks of getting it wrong (e.g. discovering a new star vs. making a vaccine). But the fundamental idea of experimentation – do something to expose your idea to the world – is a broad and largely useful guide.

Thinking about the experimental mindset has interesting implications beyond science. Apparent experts in politics and society have been criticised for coming up with theories without engaging with real people, and for continuing to do so even when their theories don’t seem to work. (This criticism is expressed most, erm, forcibly by Dominic Cummings, but is also argued by more moderate figures such as Philip Tetlock.)

One can also think of an election as a big experiment: a party does things it predicts will win votes, and this prediction is tested in the real world. A big problem is that elections happen so rarely, and involve so many complex factors, that it’s very hard to draw reliable and consistent findings from them. For an example at the other end of the rarity scale to elections, we can think of social media adverts.
Good social media marketers will try out loads of different adverts at the same time, each slightly different, see what performs best and rapidly adjust accordingly; this is called ‘A/B testing’. You might not be able to explain why something performed better, but you can see from the data that it did. (There’s a very long-running debate about the implications of this, centring around an article called ‘The End of Theory’ by Chris Anderson.)

One thing I find particularly interesting about experimentation is its relation to creativity. Obviously it takes a certain level of creativity to come up with ideas to test. Also, the phenomenon I raised about A/B testing – something works but you can’t explain why – happens with creative people too. But on the other hand, if the experimental mindset is a check on people going ‘yeah, that feels right to me’ – well, isn’t that exactly what creative people do? When a band strikes off in a new creative direction, they’re probably not doing it in an evaluate-and-iterate way, but because they just feel that sound works for them. And I for one am glad that happens.

I’ve written previously about technology and creativity (in a piece called “Extermination of the Author” – a fairly long one, but I’m quite proud of it). To briefly summarise: being able to A/B test at scale could be a challenge to conventional creatives. Why employ a creative person who you hope can consistently produce stuff which resonates with an audience, when a computer can produce loads of content, vary it, test it, and refine it to give the audience what they want? Could the role of creatives be reduced to coming up with a first draft and some things to vary, leaving an AI to do all the redrafting? I hope not, and I doubt I’m alone in that. Relying too much on experiments about the world we do have can be a bad guide as to the world we could have.
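To make the A/B-testing idea a bit more concrete, here’s a minimal sketch of the sums a marketer (or their software) might run to check whether advert B really outperformed advert A, rather than just getting lucky. The numbers are entirely made up for illustration – a hypothetical campaign, not data from any real advert – and the test used (a two-proportion z-test, one common choice) yields exactly the kind of p-value mentioned earlier:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B's conversion rate
    differ significantly from variant A's?"""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return rate_a, rate_b, p_value

# Hypothetical campaign: 10,000 impressions per advert variant
rate_a, rate_b, p = ab_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.4f}")
```

A p-value below the conventional 0.05 threshold suggests the difference is unlikely to be pure chance – though, as above, where you set that threshold should depend on the cost of being wrong. And note that even a very significant result only tells you what worked on the audience as it currently is.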
As Henry Ford apparently said, about whether people would use cars: “If I had asked people what they wanted, they’d have said faster horses.” Also, as is being increasingly discussed in relation to artificial intelligence, relying too much on data from the world around us risks entrenching that world’s biases – algorithms for hiring which are biased towards white men, policing algorithms which are biased against black people, and so on.

Testing thoughts is good, and (speaking very broadly and anecdotally) I think people should be encouraged to do it more often. But we should also be open to ideas and approaches which aren’t (yet) obviously ‘good’ according to the data. Experimental results shouldn’t be the destination, but experimenting should be part of the journey. And on that note of journeys, I’ll leave you with one final failed prediction:
* I mentioned how the Feynman method doesn’t seem very ‘scientific’ because it misses out experiments. I can talk at length (PhD length, to be precise) about why the word ‘scientific’ can’t be defined too tightly. I won’t do that here. But I can add an interesting historical point.
The association of science with experiments is actually relatively recent – beginning in around the 17th century. According to one of the canonical texts in the history of science, ‘Leviathan and the Air-Pump’, its origins relate to hotly debated questions of political culture: whether ‘good’ answers to problems should come from lowly human work, or only from appeals to higher authority.
Fact about: Learning from failures
I’m finally getting round, in April, to reading the Christmas present my brother gave me (a very 2020 Christmas). It’s a book called Austerlitz, by W. G. Sebald, as my brother (i) knows his books and (ii) knows I’m very interested in Germany. There’s a passage in it which I found quite relevant to this week’s theme of testing and improving:
In the twenty years that building work took, warfare technology also improved, rendering the new defences obsolete. The answer, apparently, was to build even further out.
This in turn reminded me of a historical example, now having a bit of a renaissance on social media, of trying to improve fighter planes by analysing where the planes which had been shot at had received the most damage. I won’t spoil the twist here; this blog is one of many which tells the full story.
As the great comedian Peter Cook said, in his character of Sir Arthur Streeb-Greebling: “Yes, I have learned from my mistakes, and I’m sure I could repeat them exactly.”
I’ll do fewer recommendations this week, what with my limited internet access. But I will re-recommend my new approach of subscribing to the YouTube channels of organisations which put on interesting events; a channel is an easier repository of their recordings than trying to keep up with all the events themselves.
Relatedly, I will recommend the YouTube channel of the Alan Turing Institute for some great events bringing together genuine cutting-edge discussions in technology and related philosophical and social questions. Sometimes quite technical, but I’ve found that persevering through the bits I don’t understand is still worthwhile as the conversations can change direction quite quickly.
Thanks for reading. Please do let me know your thoughts via this short poll.