Updated: Oct 19, 2020
Recently, I was chatting with one of the AI experts on the Untapped team. We were talking about the potential of AI to empower, to help people achieve their goals, even to increase social mobility - everything about AI that we endorse at Untapped.
As we reached the end of our conversation, he revealed a shocking counterpoint.
In China, the government is rolling out a scoring system for citizens based on mass surveillance combined with machine learning. Individuals’ behaviour can be constantly monitored and assigned scores for ‘social good’: buy nappies and you earn points; buy alcohol and your score goes down.
Yes, it sounds like the stuff of Black Mirror, but it’s here and it’s real.
The darker side of AI isn’t confined to China. Fears have grown about the potential for AI to be used by hackers for sophisticated cyber-crimes, and a report earlier this year explored the prevention of the malicious use of AI.
New technologies always provoke understandable anxieties. Many Victorians infamously feared the railways when they first arrived (‘railway madness’ was a real condition). Such terrors are stoked by easy media stories, and there have been some high-profile mis-steps in the development of AI that were all too easy to click ‘share’ on: not least Microsoft’s infamous AI chatbot, Tay, released onto Twitter only to be removed 16 hours later as the bot’s conversation devolved into a storm of racism, sexism and general unpleasantness.
At Untapped we are passionate about using AI to improve lives, not monitor or mimic behaviour. I work on the ‘human’ side of the company as a UA (Untapped Accelerator), but I also get to use AI as a powerful tool, and the AI I share with users frequently throws up deep and fascinating insights into character. It’s an amazing tool to work with, facilitating reflection and change.
But like any tool, it’s all about our motives when we use it. Although I believe strongly in the power of AI to help us grow and learn about ourselves, I share an innate human fear of the new, particularly of what I don’t entirely understand (because, as I said, I’m no techie). It makes me think of earlier sci-fi, from H.G. Wells to Asimov, which articulated the anxieties of its time. After all, 1984 saw the first untethered spacewalk and the release of the Apple Macintosh, rather than the rise of an Orwellian dystopian society.
In our age of fractured human connections, it is easy to see technology as the problem: the thing to fear, the barrier to greater understanding and deeper human relationships. But as contemporary films like Ex Machina artfully explore, the technology we create is a reflection of our own desires, impulses and fears, and its use - for good or ill - will depend on the choices of us, its makers.
Rather than fear some inevitable AI-powered digital dictatorship, we need to understand that how we use AI is a reflection of us and our society. Its use in China reflects the central control at the heart of the political system in place. In other societies, the same data and AI can be used to give us a greater sense of who we are. Rather than spying on one another in a culture of distrust, it can help us to connect, grow, and trust one another.
Written by Claire Lamont