digitalwellbeing.org – How to thrive in our hyper-connected world

Humane: A New Agenda for Tech (summary and video)

The Center for Humane Technology has just unveiled its new vision for technology to an audience of tech luminaries at San Francisco’s SFJAZZ Center. The culmination of six years of work, the ambitious agenda is to realign technology with human wellbeing. The event was presented by the Center’s two co-founders, Tristan Harris and Aza Raskin. There’s a full video archive below – worth viewing – but here’s a summary of the key points made in the 75-minute presentation.

Tech’s One Big Problem

If in doubt, blame tech. The current “tech-lash” against all-pervasive technology in our lives is noisy, disparate and disjointed. As a result, we are not making progress in solving the problems that tech creates – or is accused of creating. What we need is a common language, and a common understanding of the actual problem to solve. Harris, Raskin and their team believe they’ve found what we need.

Technology is downgrading humans.

Technology is downgrading our attention spans, our relationships, civility, community, habits, nuance, critical thinking, breathing, mental health, creativity, romantic intimacy, self-esteem, productivity, common ground, shared truth, mindfulness, governance, nations and values. Tech’s one big problem is that it is downgrading humans.

Sure, the rap sheet of accusations against tech is long, from propaganda bots, misinformation, manipulation and fake news, to tech addiction and mental health issues, to information overload, polarisation, radicalisation, “outrage-ification” and “vanity-ification”. But all these hazards and accusations – more or less substantiated with evidence – fit under one umbrella idea: tech is downgrading humans. For Harris and Raskin, this sets up a common mission and a common language for the industry to rally around – to reverse tech-driven human downgrading.

Attention Economy is to Blame

Of course, technology, like any other tool, can be a benefit or a hazard. The problem, according to Harris, is that hazardous technology is rewarded in a toxic “attention economy”, where companies compete to extract human attention to monetise, modify or manipulate. He who dies with the most eyeballs wins. Harris calls this the extractive attention economy, and he believes it is tearing apart our shared social fabric. To steal your attention, tech plays to your attentional biases and weaknesses, from clickbait to fake news to the slot-machine psychology built into devices. Why? Because the fundamental currency of tech in an ad-supported ecosystem is your attention. The result is human downgrading, as we are stripped of agency, autonomy and attention.

Preying on Human Weakness

Harris suggests that tech industry gurus have been preoccupied with the wrong thing: an impending “singularity” – that future moment when technology overwhelms human strengths. Instead, he suggests we should be more worried about a real moment that has already passed – when technology overwhelmed human weaknesses. For example, Harris notes how social media and recommendation systems exploit human weaknesses in the form of anxieties, biases and insecurities around social validation, social comparison and social status. AI now exacerbates this, using profiling and predictive modelling to exploit predictable biases in what we want to see, hear and do. The imminent arrival of deep fakes will only compound the problem. The net result is that we get hooked on social media, hooked on YouTube or Netflix, and hooked on our devices.

Flat-Earther Kyrie Irving

Harris illustrated how technology can prey on human weaknesses by downgrading our beliefs and critical thinking, calling out NBA star Kyrie Irving, who recently apologised for suggesting the Earth is flat – blaming YouTube. This is unsurprising. According to Harris, of the 1 billion hours of YouTube content viewed daily, 70% comes from recommendations, and when it comes to the flat vs. round Earth “debate”, 90% of YouTube recommendations promote a flat Earth. Flat Earth beliefs may be relatively harmless, but other beliefs are less so. Until recently, online recommendation systems promoted anti-vaccination content, despite the World Health Organization warning that anti-vaxxer beliefs are now a top 10 global health threat to humanity. Add in the finding from MIT that fake news can spread faster and wider online than facts, and you have a systemic problem with a system designed to hijack attention by hook or by crook.

Building on Human Strengths

So if tech’s big problem is that it downgrades humans by exploiting our weaknesses, then what’s the solution? It’s certainly not as simple as switching your screen to greyscale.

As you’d guess, it’s about upgrading humans by building on our strengths.

Fellow co-founder of the Center, Aza Raskin, likens technology to the cello: a tool that liberates and extends human creativity, talent and imagination. The purpose of technology is about “taking the things about us that are most brilliant and extending that”.

But how can tech build on what’s brilliant about humans? Harris suggests we need what he calls a “full-stack socio-ergonomic model of human nature”. In other words, we need to promote psychological literacy in the tech industry – not to exploit weaknesses, but to better understand and build on human strengths and values, inspiring a new race to the top that realigns technology with human strengths, virtues and values. This could include a (Re)Generative Attention Economy that saves human attention rather than steals it, humane social systems that build on the positive things that bind us together rather than the divisions that tear us apart, and humane AI that assists and augments human intelligence rather than preying on its weaknesses.

The Trust Economy

Perhaps the most insightful and useful contribution from the presentation is the call to switch the technology race from competing in the attention economy to winning in the trust economy.

Rather than compete for people’s attention, technology should compete to earn people’s trust.

It’s a simple idea, but it could, I believe, shift the debate meaningfully towards a more positive agenda for tech (albeit with non-negligible consequences for ad-funded tech). If tech companies were rewarded for competing for our trust rather than our attention, then good things would happen. Why? Because of the basic psychology of trust. We trust others when we think they can help us (the competence dimension) and when we think they want to help us (the warmth dimension).

If we switch from the current race to the bottom in stealing attention to a new race to the top in earning trust, then we create a world where companies both want to help us and can help us. That is a good thing. Right now, the tech industry may excel in perceived competence, but it has work to do on the warmth (wants to help) dimension. In shifting to the trust economy, we could build an ecosystem that allows tech companies to thrive by demonstrating their competence and willingness to help, not harm, humans. This is an exciting idea, and one that my friend and colleague Ron Hofer is actively pursuing in his trust patterns project.

Secondly, tech could build trust by walking the build-on-what’s-brilliant-about-humans talk. It could start by adopting an established and validated framework for what is brilliant about humans. The psychological VIA (Values in Action) framework of 24 human strengths across six dimensions would be a useful contender here. Widely used in psychology, from coaching to counselling and therapy, the VIA framework could be adopted by technology with a positive mission to help people exercise their character strengths.

Overall, many may dismiss this proposal for a new tech agenda as a buzzword-infused mission to solve tech overload with yet more tech. But I think it could be more than this – if we make building trust a foundational principle.

Video – Humane: A New Agenda for Tech

Written by
Dr Paul Marsden
Join the discussion

  • I’m a WhatsApp addict… after watching The Social Dilemma, I’m thinking: am I a fool for letting someone control me? I want to be awake…

  • I watched The Social Dilemma and will be presenting a lot of its main points in my rhetoric class. I’m hoping to continue to change the conversation and help with efforts of awareness. Thank you for your efforts to help reform and change technology to be more humane.

  • Detach yourself a little from social networks. Do it gradually: find your real-life passions, detox from groups and people you don’t even know, and instead make friends in real life.

Digital wellbeing covers the latest scientific research on the impact of digital technology on human wellbeing. Curated by psychologist Dr. Paul Marsden (@marsattacks). Sponsored by WPP agency SYZYGY.