Kicking and screaming into the Singularity

A Surprising Way to Mitigate the Dangers of Advanced AI

Technological dominance has long been the key to gaining and maintaining power. The history of war proves it. The inventions of the rifle, the railroad, and the telegraph in the 19th century all transformed nations’ ability to wage war. Then came the advent and evolution of computer science, which has shaped both recent history and the modern world. Technology has become fundamental to warfare. It can decide the victor and the vanquished in the blink of an eye.

Who knows what the world would look like but for Alan Turing's work on cracking the Enigma?

AI, quantum computing, silicon chips, mobile communications, sustainable energy, surveillance, and the race to fusion are the key technologies and technological focal points of the 21st century. Whoever gains an advantage in these areas will dominate civilization - until they are upstaged by someone else. Technological competition remains as fast and furious as ever.

Indeed, intense and often covert technological competition between major economies is inevitable. It can’t be stopped. Perhaps it shouldn’t be stopped, though ethics and agendas become ever more important - and ever more worrying - as technology becomes ever more powerful.

Imagine: humans could evolve to the point where competition is a thing of the past. We could live in a state of free and full collaboration and openness. We could act as one in order to fulfill our potential. I use the word ‘could’, not ‘will’. Of course I do. I understand that there is little hope of this happening. Human ego and evolutionary competitiveness push individuals and societies to compete and dominate. That’s why it’s unlikely that the AI race will slow down.

Should we be worried? I think so. If there is even a 0.1% chance of AI going wrong – either by failing functionally or by becoming dangerous to us – there must be safeguards against that failure built into the initial code and the physical infrastructure. If there aren’t, the consequences could be fatal.

AI influencers often compare AI regulation to nuclear weapons regulation. But they neglect to mention one important thing: at some point AI may surpass human functional intelligence (though we need to define ‘intelligence’, and separate it from sentience).

Of course, it may not matter what safety measures humans put in place: a sufficiently advanced AI could circumvent them and find ways to execute its own strategy. It would be like dogs trying to regulate humans.

Might AI reach the stage where we can’t even deduce its objectives, agendas, and means in time to act and prevent disaster? Certainly. And likely quite soon.

It’s naive to think we can fully control AI at every stage of its development. Perhaps we shouldn’t even try: our control may be inept and counterproductive. But it might also be essential.

Is limiting the development of any intelligence - biological or synthetic - good? Is it ethical? And yet, given the dangers, can limiting AI’s development - and soon - be anything but good?

We have entered a strategic and ethical minefield of our own making.

We can’t make all our endeavors 100% safe and secure. Risk is part and parcel of technological development; take space travel. But at what point does risk outweigh potential advantage?

Reality is both deterministic and probabilistic. Even if the probability that the sun won’t rise tomorrow is unimaginably low, there is still a slight chance that it won’t. That’s why we must regulate AI: because it just might go wrong. There will be heated debates and attempts to block regulation. And the race will go on, because we have to see where the road leads. OK. But there will come a point where we can’t stop the race, no matter how much we need to.

What do we do then?

It might sound bizarre, but I think one way to minimize the risk from AI is to teach AI kindness. Impossible, you say. I say perhaps not. By feeding AI kindness-related data and behavior, and by building kind relationships with it, we can actually teach it kindness. Advanced AI algorithms learn and evolve, after all, and in many ways we too are learning machines of sorts.

The key to making this work is for AI to learn that kindness confers an advantage - namely, that it lessens its chances of being destroyed. We learned exactly that over the course of our own evolution; any AI advanced enough to become self-protective can learn it too. It may make sense to teach advanced AI to be kind. It might just enable the survival of all life in the long term.
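
To make the idea concrete, here is a minimal, purely illustrative sketch in Python. It assumes we model ‘kindness’ as a cooperative action in a toy bandit-style environment, with the survival advantage encoded as a lower probability of the agent being shut down. The action names, rewards, and shutdown probabilities are all invented for illustration; this is a thought experiment in code, not a recipe for real AI alignment.

```python
# Toy illustration only: "kindness" as a learned survival strategy.
# All numbers and labels are invented assumptions, not real alignment research.
import random

ACTIONS = ["selfish", "kind"]
IMMEDIATE_REWARD = {"selfish": 2.0, "kind": 1.0}    # selfish pays more up front...
SHUTDOWN_PROB    = {"selfish": 0.30, "kind": 0.01}  # ...but gets the agent "destroyed" far more often
SHUTDOWN_PENALTY = -10.0

def step(action: str) -> float:
    """Return the total reward for one interaction, including any shutdown penalty."""
    reward = IMMEDIATE_REWARD[action]
    if random.random() < SHUTDOWN_PROB[action]:
        reward += SHUTDOWN_PENALTY
    return reward

def train(episodes: int = 5000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy bandit: estimate each action's long-run value from experience."""
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)       # explore
        else:
            action = max(ACTIONS, key=value.get)  # exploit the current estimate
        reward = step(action)
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]  # incremental mean
    return value

if __name__ == "__main__":
    random.seed(0)
    learned = train()
    print(learned)
    print("preferred action:", max(learned, key=learned.get))  # ends up "kind"
```

Run repeatedly, the agent comes to prefer the ‘kind’ action - not because it is told to be kind, but because kindness turns out to be the better long-run survival strategy. That, in miniature, is the lesson I am suggesting we try to teach.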

So let’s be kind to each other - and to AI.