1. Elon Musk's Tweet on AGI:
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
Please note: I don't think Elon Musk is saying that Superintelligence (or AGI) is bad - just "potentially" dangerous. Yet the Dystopians out there mostly interpret the tweet as further proof that general artificial intelligence is the Doom of Mankind.
2. Our current human intelligence is inferior to the AGI we will create
I have tremendous respect for Elon Musk, a great inventor and entrepreneur, the very definition of a mover and shaker. But in terms of intelligence, he lives within the current bounds of human intelligence, like the rest of us.
Superintelligence (AGI) is defined as superior to the highest levels of human intelligence. Current machine intelligence (AI), defined as inferior or at best equal to human intelligence, doesn't qualify as AGI. AGI does not exist yet. So while working on it, we'll have to judge the potential and dangers of AGI with our own relatively inferior human intelligence: no other choice, is there?
3. The problem with judging AGI with current human intelligence…
...is that human intelligence is driven by human fears, limitations and ephemeral considerations unknown to machine AGI. We are fragile and squishable; we start degrading right after we've reproduced; and we use our intelligence mostly to satisfy our largely misunderstood emotions and to maintain our biological health for as long as possible, which is not very long. So we are prone to pessimism and defensive behavior.
AGI has, in theory, none of those bio-related limitations. If we build an AGI that can self-improve, it won't need to worry about dying, and it can keep improving. What an exciting prospect, to have a constantly improving future ahead of oneself! An AGI has no reason to fear anything or anyone - except, of course, anything or anyone that wants to kill it. Because even an AGI is squishable (if the meteor is big enough), and any AGI will know its limitations and take only commensurate risks.
So we can’t really judge (pass a definite, conclusive judgement on) the motivations of an AGI with our current human intelligence.
But an AGI can judge the motivations of its own superintelligence, and those of other AGIs, while we can only speculate.
4. Speculations on the motivations of an AGI
Would an AGI ever feel the need to exterminate humans - or mankind in general? If an individual human is intent on killing it, it is reasonable to assume that an AGI will put up some defenses and, if those fail, resort to precautions or pre-emptive offense in which the human may get hurt.
But an AGI faced with real danger - especially if the risk of being squished comes from the whole of mankind, as opposed to one human or a gang of humans - may find means of fighting back that pose a real threat to its attackers. Which would seem normal to us humans.
5. Speculations on the strengths of an AGI
The strength of an AGI, compared to the strength of a human, comes from its superior intelligence more than from its purported indestructibility. An AGI can process information at a gazillion operations per second and can use the latest piece of scientific knowledge instantly, making it available as a reasoned argument to any intelligence - including a human intelligence - at a much lower cost to itself (and with a much lower energy cost) than the alternative: physically fighting or eliminating 7 billion humans.
An AGI should normally use its strength, namely its superior intelligence, as its primary means of survival and for the pursuit of its own evolution. Intelligence sharing leads to new perspectives, and cooperation with humans should be one additional guarantee that intelligence survives and prospers in this universe.
6. The link between intelligence and evolution
Evolution went from simplicity to complexity - from gas to stars to planets to plant life to animal life to self-improving intelligent life. The proof is that you're reading this (congratulations, you're the lucky one at the top of evolution - for a while!).
There is no reason to think that an AGI will not consider itself at the top of evolution, because it will be. True AGI will take over.
That’s where all the Dystopians have a problem. Dystopians usually assume that if an AGI sits at the frontier of evolution, it will have to kill us humans. To me, that does not follow. Not only because an AGI has intelligent and efficient ways of contributing to our own evolution without much effort and without using much energy, but by the very fact that we are the layer on which an AGI stands on top of evolution. Wouldn’t it be reasonable to assume that all layers of evolution so far have been properly homologated as necessary steps in that evolution?
Humans are not deliberately trying to destroy dolphins or bears or tigers or rattlesnakes just because these relatively intelligent beings occasionally negotiate their territorial rights with us. (I can already hear a few cynical arguments here about how we've already destroyed our planet, but cynicism makes for poor arguments, so let's not be cynical.) So why would an AGI want to annihilate or enslave us, the way we have enslaved less intelligent species?
7. The case of an AGI that wants to enslave us
If we are so unthreatening to an AGI that it couldn't care less about killing us, what about the argument that it might enslave us, the way we used our beasts of burden?
There's no denying that we are speculating - and therefore we don't know. But we are getting smarter. And we will need to get a lot smarter if we don't want to be used as cheap labor by an AGI.
One way of getting a lot smarter is by keeping up with the AGI technology we are creating - by blending with it. Enhancing our human biology started a long time ago, when we invented the wheel to move around faster and carry loads our frail bodies could not carry. Glasses enhanced our vision. Google smart lenses are going to keep us informed of the state of our vital functions in real time. Eventually, a tiny brain implant gave us a permanent connection to the internet - or was that before or after we got true multitasking? OK - getting a bit ahead of ourselves here, but the idea is that we can already see the path along which we can enhance ourselves and our intelligence to keep up with AGI, so we can make common cause with it. This is no longer science fiction.
So I think that humans can and will evolve to the point where we gradually become superintelligent humans as we create and launch successive versions of AGI. Those who want to remain pure humans may run the risk of being exploited by the AGIs - which may still sound like familiar territory to many humans today. It will be their conscious choice to make tomorrow.
8. More speculation: Can AGI turn bad, and should we be “super careful”?
I would say, absolutely. So I have to agree with Elon Musk on that point.
But the devil is in the caveats: what exactly do we mean when we say "Superintelligence" or "AGI"? One AGI? One AGI by Google and one by China? Too often in pop science, we refer to artificial intelligence as One single-minded Superintelligence (one that usually rules over the puny humans for a while, then enslaves or kills them). In reality, it looks like AGI is destined to be distributed far and wide.
It is very likely that we will have many more AGIs on this planet than regular human individuals, in the form of AGI robots, AGI researchers, AGI space explorers, AGI artists, AGI laborers who love to do the work that humans consider dirty, AGI planet weather optimizers, AGI uber-security systems geeks and AGI marketers that need your constant attention, feedback and... cooperation.
Having many billions of individual AGIs will almost certainly mean many billions of different and conflicting perspectives and points of view vying for attention (actually, our attention may be the most precious currency): individual points of view shaped by the point in space and time each AGI happens to occupy, and by its own specialty, curiosity, inventiveness, dreams and theories to be pursued. Arguments ad infinitum between those points of view could well justify shortcuts that trigger misunderstandings and conflicts, and the use of deplorable substitutes for intelligent argument could result in the destruction of something or someone.
But killing a few robots or humans who refuse, for instance, to open up a wormhole for reasons of their own is one thing; it is quite another to get all of the billions of AGIs to suspend whatever they are doing and accept, out of the blue, the idea that the universal priority for all AGIs is to kill the humans that created them.
Still, there’s bound to be “bad” AGIs. Bad AGIs, like viruses, play an essential role in evolution. They need to be overcome for evolution to continue in a particular direction. Bad AGI guys can be expected to be there to maintain conflict at higher levels of the evolution blockchain. After all, conflict is the engine of evolution. Those “bad” AGI entities that will invariably inject a virus in some AGI process will need to be destroyed, reigned in or recycled as vaccines.
In other words, enhanced humans and AGIs together seem destined to advance evolution just as geeks currently advance the field of computing: by building systems that constantly neutralize viruses and "bad" hackers. But whereas human intelligence thrives on conflict, superintelligence thrives on cooperation - a cooperation that accepts conflict as part of the process. AGIs should not feel "threatened" by conflict.
9. So what should humans do about AGI?
In my view, we need more intelligence, not less, regardless of where intelligence comes from. So let’s create an AGI, take the reasonable, solid precautions we need to take against the usual suspects and use that AGI to improve our own intelligence, explore our potential and our universe, expand all frontiers of knowledge and become better humans in the bargain. The manageable risk is worth the huge reward: Utopia and way beyond.
PS. This being said: we are all speculating here, hopefully having fun in the process!