There’s a lot of discussion about algorithms at the moment. Algorithms are nothing more than recipes. When people say ‘algorithm’ they normally mean the recipe for whatever they’re talking about. A mathematical algorithm for finding a solution? Think of it as the recipe for finding that solution.
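To make the analogy concrete, here’s a minimal sketch in Python – my own illustration, not anything from the book – showing that an algorithm really is just a list of steps followed in order, like a recipe:

```python
def greatest_common_divisor(a: int, b: int) -> int:
    """Euclid's algorithm: a 2,000-year-old 'recipe' for finding
    the largest number that divides both a and b."""
    # Step 1: while the second ingredient isn't zero...
    while b != 0:
        # Step 2: replace (a, b) with (b, remainder of a divided by b)
        a, b = b, a % b
    # Step 3: once the remainder hits zero, a is the answer
    return a

print(greatest_common_divisor(48, 36))  # -> 12
```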
Why do I care about algorithms and whether we should really call them recipes (the analogy isn’t perfect, don’t @me, I’m quite aware)? Mainly because the discussion about algorithms in the public sphere relates almost exclusively to social media and how these processing recipes lead users to ever more extreme and unpleasant content.
I’ve been quite preoccupied with this book over the last few days, reminded of it by a lecture from the author in which she said something I had entirely missed in my general thinking about the kind of content we’re shown online. I’m almost embarrassed to admit it, because I like to think I ruminate on economics quite a lot.
As a reminder, the book is called: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power and is written by Shoshana Zuboff. Zuboff has written a lot about this subject but this book is (despite the cover being uninspiring) a very good piece of work.
I don’t want to talk too much about the book except to draw out one key idea, because it should turn your world upside down a bit.
First though. We have been told all over the place by tech-bros, concerned citizens (I’m in this category), opinion piece writers and others that the algorithms we blame for the slow radicalisation of people as bland and formerly innocent as our grandmothers, our friends and our children are i) in need of fixing and ii) often beyond understanding.
We’re told that these algorithms are often the product of unconscious bias (such as when facial recognition software didn’t recognise PoC as human, or when Google’s image search software associated PoC with gorillas). We’re told it’s a side effect, but one which makes them money, and so they’re loath to change their ways. We’re told it’s the tail wagging the dog – unfortunate but fixable.
Zuboff dismisses this idea and reminds us that these companies have made their fortunes by learning about us. So far, so unsurprising. But Zuboff then reminds the economically literate among us what that learning is actually good for. It isn’t good for knowing what we did in the past, because no money can be made from that. Nor is it good for knowing what we’re doing now – sure, there’s money in what you’re interested in NOW, but that’s not the prize. The real prize from this learning is knowing what you’re trending towards tomorrow, because real money can be made from knowing your future tastes and preferences.
Zuboff then reminds us of the point of advertising – not simply to let us know a product is available, but to create a felt need we didn’t know we had and then sell us the solution to that sudden, new-found desire.
In short, these algorithms are designed to do two things.
- They’re designed to predict what we’ll want to buy tomorrow
- They’re designed to push us into buying products we didn’t know we needed today.
Algorithmic drift into showing you ever more extreme material – racist content, anti-vax nonsense, anti-elite conspiracies – serves the two goals above. Why? Because these drifts don’t exist in a vacuum: social media companies (and let’s be honest, in liberal societies we’re only really talking about Google and FB) are selling these predictions to companies, telling them they can guarantee purchases and eyeballs on adverts. Deliberate drift towards extreme material has proven to guarantee both of those things.
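As a thought experiment only – this is my own toy sketch, not Zuboff’s and not any real platform’s code, and the post names, weights and prediction numbers are all made up – here is roughly what a feed-ranking ‘recipe’ optimised purely for predicted engagement and purchase intent looks like. Notice that nothing in it scores content for accuracy, balance or harm, so whatever is predicted to hook you hardest floats to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float     # model's guess at future clicks / watch time
    predicted_purchase_lift: float  # model's guess at how much it primes buying

def rank_feed(posts: list[Post]) -> list[Post]:
    """Toy ranking recipe: order the feed purely by what the platform
    predicts will keep you engaged and primed to buy tomorrow."""
    score = lambda p: 0.7 * p.predicted_engagement + 0.3 * p.predicted_purchase_lift
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("Local gardening tips", 0.20, 0.10),
    Post("Outrage-bait conspiracy clip", 0.95, 0.40),  # scores highest under this recipe
    Post("Calm policy explainer", 0.35, 0.05),
]
for post in rank_feed(feed):
    print(post.title)  # the outrage-bait clip comes out on top
```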
Furthermore, there is an argument which goes like this: social media companies could see that extremist material was both attractive to many people and a direction society was moving in – in part because of the exposure those same companies were providing – and they had a choice:
i) do they change their business model to avoid these excesses, or
ii) do they lean into extremism, knowing their activity will appreciably shift society that way and thereby increase their revenue?
Zuboff, among others, suggests that without regulation only the second of those two options can be true.
So in the discussion around free speech this week (and possibly next?) you’ll see lots of back and forth over whether private companies have the right yadda yadda yadda. What you won’t see (yet) is much on whether these companies deliberately created these environments exactly with the intent of fostering extreme content to increase revenues.
My proposition is this: the tail never wagged the dog. The algorithms we’ve seen were designed explicitly to monetise user data by predicting users’ behaviour and nudging them towards it, in order to create opportunities for the companies being pitched these services. This has always been the dog wagging its tail.
Over the next few months, as regulation becomes a more central concern of liberal governments (with the possible exception of the current far-right UK Conservative government), one key plank of these companies’ defence will be that it wasn’t their fault – that they were, at worst, as surprised as we were by the outcomes. Do not believe them. This isn’t about free speech – that is a distraction, and a different argument. This is about whether companies with our personal lives stored on their servers should be required to treat that data not as a never-ending gold mine, but as something to which privacy standards, and political standards around propaganda and manipulation, should be applied.