The Chase

As everyone knows, George Orwell was one of a few writers (including Aldous Huxley) who, almost 80 years ago, foresaw the potential use of tech as a way that we humans might – either through our own behaviours or through those of others – constrict our potential under the cloak of believing we are increasing it.

1984 is often cited as the ‘go to’ novel that best outlines his thinking. However, his book ‘The Road to Wigan Pier’ – written just before the Second World War – was maybe more profound on the social changes that were prevalent then, as now (and always have been, I suppose): global uncertainty, technical ‘progress’ and society beginning to turn against itself.

Here’s a very brief extract, provided via the wonderful blog of John Naughton:

”The logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle. That is the goal towards which we are already moving, though, of course, we have no intention of getting there; just as a man who drinks a bottle of whisky a day does not actually intend to get cirrhosis of the liver.”

Nice, hey? He was writing about the factories driving manufacturing broadly in the North of England – but you can see the connection to today.

·       Discovery

·       Education

·       Knowledge

·       Self-fulfilment

·       Individualism

·       Craft and Integrity (in the product)

·       Respect (of the customer and user)


·       Dullness

·       Reduction to the mean

·       Personalisation

·       Stagnation

·       Discrimination

·       Self-doubt

·       Sameness

·       Indignity


Which group do you think encompasses the inevitable outcomes of the wide adoption and use of AI technologies presented by large tech companies?

Come to think of it, which group was also broadly used to describe the outcomes of the adoption of Social Media? Of content platforms? Of even (and I’m not making this up – I have the investment decks) ‘supply to demand’ platforms such as JustEat and Deliveroo?

You know the answer. Despite reading my nonsense you are not an idiot.

We’re told and sold one set of outcomes, improvements and deliverables and, whilst we may not end up completely with the other (because although digital is binary, life isn’t – if you get what I mean), a counter set of outcomes ends up trickling into our personal and professional lives (they used to be called unintended consequences – but that line is wearing a bit thin now).

As one example, as a society we’re vainly trying to deal with a broken social media landscape and its impact on reasonable discourse, mental health, connection, even democracy. But the constant evolution of new platforms, or new functionality on existing ones, keeps us coming back for more – eating and drinking when we know we are damaging ourselves.

As another, we’re ignoring the impact of supply-to-demand platforms on individual integrity, on the decency that used to be found in work, all in a desire to feed the need of having a drink picked up and delivered from a Starbucks five minutes away – after spending hundreds on a new coffee machine, having seen a TikTok video created by someone who only the week before was producing videos for pile cream.

We are at risk of reducing technical advancement to nothing more than filling the gaps created by our use of the same technical innovation.

It’s not our fault really; it’s the absolute ‘genius’ not of those who created the tech in the first place, but of those focused on its monetisation. So they (BTW, the below is not a completely considered list or flow – more a work in progress for another piece):

·       Encourage use of the tech to save time and money, or to increase one’s reach and visibility.

·       Fill those saved hours with functionality, content and services that create demand and need (aka the addiction of attention).

·       Therefore, train users in how they can benefit from using the tech.

·       Create a core pull to return (discounting, personalisation, connections).

·       Normalise spending hours and money, without thinking, across and on these platforms.

You could say “Well, it was ever thus and we’ve always dealt with it”, and that is true – albeit it ignores the widening gaps in wealth, life chances and global growth. But we’re in a different space now. What’s to stop the Daddy of all tech developments since the conception of the web – AI – replicating and magnifying a hundredfold our Social Media and demand-platform experiences and, to an extent, our compliance?

Side note for self – BTW, looking at my own experiences of working with the team to grow the Holidaymaker platform and business, it’s very easy for me to say – and for us to be focused on – ensuring that we utilise tech and AI in a responsible manner and will not create a ‘locked in’, ‘human ignorant’ system. But what if the competition think less about this and smash away – how will we react? How will I react?

Not too much, by the look of it. But I do wonder if the human condition – which admittedly has both created and succumbed to the tech landscape and social environment we see today – may have one final card to play.

Self-Protection. It’s a human condition.

“Californians are more concerned than excited about advancements in AI,” Catherine Bracy, the CEO and founder of TechEquity told me. “Many feel it is advancing too fast, and are concerned about AI-fueled job loss, wage stagnation, privacy violations, and discrimination.” (Nearly half of Californians think that AI is advancing too fast, according to the poll, compared to just a third that think the current pace is acceptable.) “Both Democrats and Republicans agree that AI will most likely benefit the wealthiest households and corporations but not working people and the middle class.”

“Our polling finds Californians echoing what we are seeing in poll after poll from across the country: voters are telling their representatives not to trust tech companies to self-govern,” Bracy says. “And this is not because they are anti-technology. It’s because they want companies to be held accountable, and aren’t willing to sacrifice safety and fairness for innovation.”

The two quotes above are extracted from a report based on a survey conducted by TechEquity. The survey was carried out in the core of tech – California and Silicon Valley – so you would be hard pressed, with some exceptions in China, to find a more tech-dependent economic region.

But even taking into account the natural restrictions and the level of implicit bias that come with all surveys (especially one conducted by an organisation called TechEquity), in California – of all places – there are considerable fears about the advancement of AI and its impact on individuals and society.

So what – the capitalism genie escaped the bottle in the 1920s? What’s to be done? Regulation? The splitting up of tech companies? A slowing down of AI?

All very unlikely – too much money, too much investment, ego, pride, ‘achievement’ etc. etc. at stake.

Rambling through researching and putting this piece down on paper, I’m beginning to wonder whether humans are telling those of us in tech what we need to do – all under the banner of self-protection.

·       They are worried about AI, but they are addicted (and have been trained to be so) to using existing tech and platforms in a certain manner.

·       They rely on tech in all aspects of their life, but don’t really understand it.

·       They are fed stories (be it in press releases, posts, streams etc.) that tell them how wonderful tech products are – but they can’t really feel it. In fact, they feel somewhat demeaned in using many of them.

·       The majority of people know AI will impact their ability to work and earn in the future – that is a humbling thought and requires significant understanding and empathy when building these solutions. Context, Understanding and Consideration.  

Another human condition is that it’s rare we even think about a situation or cause until we are directly affected by it – and even then, only if we are affected daily or financially. BTW, this is kind of the way fascism starts. That’s why keeping an eye on Reform UK is wise…

Anyway, the fact that everyone in my personal network asks me about, and also has a view on, AI is telling. Irrespective of profession or life stage, people have an awareness of its impact and instinctively ‘feel’ it’s going to affect them.

·       So, what would be wrong in us outlining how and why AI is being used in our products? And being honest about it? A bit like the calorie content on food packaging?

·       What would be wrong if we created more friction in our products to allow people to stop and be aware of what they were doing, what AI was doing and where the crossover was?

·       What would be wrong if we educated people on how they themselves can use AI driven tech to support their own personal and professional endeavours? To allow them to make up their own mind and decide how they can best protect themselves from the impact of AI.

Of course it wouldn’t be wrong to do any of these things. But when the train is already running away, and all the ‘data’ suggests that (much like social media in its early days) everyone is using it and loving it – despite evidence to the contrary – it seems much easier, for now, to go along for the ride.

”The logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle. That is the goal towards which we are already moving, though, of course, we have no intention of getting there; just as a man who drinks a bottle of whisky a day does not actually intend to get cirrhosis of the liver.”

Dave McRobbie