
Are you with Musk or Hawking on AI

It is not a fact at all, nor is it evident; it's just your interpretation of existing conditions through your own viewpoint. I'd say the fact that life on this planet is suited to the parameters of this planet requires no design or intelligence; it's simply an inherent consequence of the available environment.

It doesn't require "Design" in the sense of intent or foresight. Whether it requires "intelligence" is the point. If we define intelligence in a way that implies fabulously complex biological machines aren't "intelligently designed", we lose a lot of the population right away, and we lose a very useful understanding of intelligence that applies to General AI.

Contemplating how the biosphere can conduct massively parallel computations to build very complex and effective molecular machines, without any foresight or personality or intent, is, I think, a useful insight for thinking about AI. Being very "intelligent" doesn't have to mean anybody is home or that there is any intentionality.
 
Let's just agree that there is really no universal definition of the concept of "intelligence". Some would argue that intent or purpose must be an inherent property of intelligence. I for one would argue that a completely random process (such as evolution) that once every now and then creates progress/improvement falls outside the concept of intelligence. I say this without undervaluing the immense power of the process of evolution (or other similar phenomena), but it's still not intelligence per se (if you ask me).

Evolution is the very opposite of a random process, though it is driven by some randomness in producing variations. I'm with the folks who see basically the same process that drives the evolution of genes driving the evolution of memes (in human creativity and intelligence), and even the mechanisms that produce the wiring patterns of neurons in brains.
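To make the "random variation, non-random selection" point concrete, here is a minimal sketch in Python (my own toy illustration, not something from the thread; the target string and parameters are arbitrary assumptions) showing how blind mutation plus a selection step reliably climbs toward a target even though every individual change is random.

```python
import random

TARGET = "molecular machine"                # arbitrary goal, purely illustrative
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Count characters that already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly change some characters -- the only source of randomness."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

# Start from pure noise and let selection do the rest.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    # Variation: produce many randomly mutated offspring.
    offspring = [mutate(parent) for _ in range(100)]
    # Selection: keep the fittest one -- this step is anything but random.
    best = max(offspring, key=fitness)
    if fitness(best) >= fitness(parent):
        parent = best
    generation += 1

print(f"Reached the target in {generation} generations.")
```

The mutations are pure noise; it is the selection step that gives the process its direction, which is the sense in which evolution is the opposite of a random process.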

We evolved to detect intentionality or "intelligence". It's a critical survival skill. It's better to be somewhat paranoid and see intelligence everywhere than die because you didn't notice the sinister intent of a predator or rival.

My notion here is just to point out how automatically we tend to assume that a highly intelligent system MUST have intent and be person-like, and to illustrate the contrary. If we define anything that's not intentional as non-intelligent, we may be missing something important.

An SF example: suppose we are exploring an alien biosphere. How do we know whether it just arose naturally and is exactly what it seems, or whether it is the intentional design of some powerful entities, somebody's project or garden? It's not necessarily easy to tell, but it might be important. The results can be quite close, so thinking of them as the results of two different types of intelligence may make sense.

A more "Hard SF" example would be suppose a disease with odd character pops up and begins spreading through the population. How can we tell if it was engineered by humans with intent or evolved naturally without intent. Same organism, same molecular machine, but not obvious whether "designed" with real intent by humans or naturally evolved. If the result, the molecular machine is not clearly different, how can we be confident in saying that one process of making it was intelligent and the other was not. It's clear that there is an important difference and one is intentional and the other is not.
 
Latest warning from Elon Musk via Twitter:

@elonmusk Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
 

Anyone else on here think that AI will get to a point where it knows everything anyone has ever posted or even Google searched? Even me posting this thought could, I feel, be a risk, depending on how hostile AI ends up being, if it somehow deems me a threat for expressing concern about AI advancement.

I wonder if Elon wonders the same thing... if there is to be an Intelligence Explosion and the resulting AI turns out to want to eliminate any potential threats to its existence (including from humans), then I think it would go first after the most influential humans it sees as having shared concerns about AI advancement.

On the other hand, I am not sure there is anything we can do except hope that the Intelligence Explosion results in a friendly AI with a Buddhist-like mindset that works to take away all pain and suffering.
 

The author severely underestimates a self-improving SUPERintelligence. By definition, we can't understand what it would be capable of, any more than an ant can understand our abilities.

Pride goeth before destruction, and an haughty spirit before a fall.

All the arguments this author raises are more or less debunked by Bostrom, and his reasoning is at a far higher logical and philosophical level.
 
One thing that's clear is that we have no real choice but to continue the rapid development of technology in spite of the risks. If it were possible for everyone to turn into Luddites because of AI fears, this would invite massive suffering and death. The world's human population of seven billion (and still growing) could not possibly have survived 100 years ago. Even now, we are utilizing limited resources at unsustainable rates. Just to maintain today's standards of living around the world, which aren't all that good for most people, we will need technology improvements in a number of areas. So, as Elon said, we'll have to be "super careful" as our tools get better and better.
 
It seems that this boils down to a concern about technology making a decision to do something counter to our original design or beyond what we originally intended. That can happen without AI and without the decision being conscious. A design can be flawed. The implementation can be flawed. The results may be predictable but they weren't predicted beforehand.
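As a toy illustration of that point, here is a minimal, hypothetical sketch in Python (my own example, not from the thread): an ordinary controller with no intelligence and no intent "decides" to do something its designer never meant, simply because the stated objective differs from the intended one.

```python
# Hypothetical thermostat controller with a carelessly specified objective.
# Intended goal: keep the room near 21 C.
# Stated goal: minimize heater energy use.
# The controller never heats at all - no AI, no conscious decision,
# just a flawed specification faithfully executed.

def choose_heater_power(room_temp_c: float) -> float:
    """Pick the heater power (0.0 to 1.0) that minimizes the stated objective."""
    candidates = [p / 10 for p in range(11)]    # 0.0, 0.1, ..., 1.0

    def stated_objective(power: float) -> float:
        return power                            # energy use only; temperature forgotten

    return min(candidates, key=stated_objective)

for temp in (25.0, 21.0, 15.0, 5.0):
    print(temp, "->", choose_heater_power(temp))  # always 0.0, even when the room is at 5 C
```

The outcome is perfectly predictable from the specification, yet nobody predicted it beforehand, which is exactly the failure mode described above.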
 
1. Elon Musk Tweet on AGI:


“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”


Please note: I don’t think Elon Musk says that Superintelligence (or AGI) is bad - just “potentially” dangerous. Yet, the Dystopians out there mostly interpret the tweet as further proof that general artificial intelligence is the Doom of Mankind.


2. Our current human intelligence is inferior to the AGI we will create


I have tremendous respect for Elon Musk, a great inventor and entrepreneur, the very definition of a mover and shaker. But in terms of intelligence, he lives within the current bounds of human intelligence.


Superintelligence (AGI) is defined as superior to the highest levels of human intelligence. Current machine intelligence (AI), defined as inferior or at best equal to human intelligence, doesn’t qualify as AGI. AGI does not exist yet. So while working on it, we’ll have to judge the potential and dangers of AGI with our own relatively inferior human intelligence: no other choice, is there?


3. The problem with judging AGI with current human intelligence…


...is that human intelligence is driven by human fears, limitations and ephemeral considerations unknown to a machine AGI. We are fragile and squishable, we start degrading right after we’ve reproduced, and we don’t use our intelligence for much beyond satisfying our largely misunderstood emotions and maintaining our biological health for as long as possible, which is not very long. So we are prone to pessimism and defensive behavior.


AGI has, in theory, none of those bio-related limitations. If we build an AGI that can self-improve, it won’t need to worry about dying, and it can keep improving. What an exciting prospect to have a constantly improving future ahead of oneself! An AGI has no reason to fear anything or anyone - except of course if that anything or anyone wants to kill it. Because even an AGI is squishable (if the meteor is big enough), and any AGI will know its limitations and take only commensurate risks.


So we can’t really judge (pass a definite, conclusive judgement on) the motivations of an AGI with our current human intelligence.


But an AGI can judge the motivations of its own superintelligence, and other AGI’s, while we can only speculate.


4. Speculations on the motivations of an AGI


Would an AGI ever feel the need to exterminate humans - or mankind in general? If an individual human is intent on killing it, it is reasonable to assume that an AGI will put up some defenses and, if those fail, resort to precautions or pre-emptive offense in which the human may get hurt.


But an AGI faced with real danger - especially if the risk of being squished comes from the whole of mankind as opposed to one human or a gang of humans - may find means to fight back that pose a real threat to its attackers. Which would seem normal to us humans.


5. Speculations on the strengths of an AGI


The strength of an AGI, compared to the strength of a human, comes from its superior intelligence more than from its purported indestructibility. An AGI can process information at a gazillion operations per second, can use the latest piece of scientific knowledge instantly and make it available as a reasoned argument to any intelligence - including a human intelligence - at a much lower cost to itself (and with a much lower energy cost) than the alternative of physically fighting or eliminating 7 billion humans.


An AGI should normally use its strength, namely its superior intelligence, as its primary means of survival and for the pursuit of its own evolution. Intelligence sharing leads to new perspectives, and cooperation with humans should be one additional guarantee for intelligence to survive and prosper in this universe.


6. The link between intelligence and evolution


Evolution went from simplicity to complexity - from gas to stars to planets to plant life to animal life to self-improving intelligent life. The proof is that you’re reading this (congratulations, you’re the lucky one at the top of evolution - for a while!).


There is no reason to think that an AGI will not consider itself at the top of evolution, because it will be. True AGI will take over.


That’s where all the Dystopians have a problem. Dystopians usually assume that if an AGI sits at the frontier of evolution, it will have to kill us humans. To me, that does not follow. Not only because an AGI has intelligent and efficient ways of contributing to our own evolution without much effort and without using much energy, but also because we are the very layer on which an AGI would stand at the top of evolution. Wouldn’t it be reasonable to assume that all layers of evolution so far have been properly homologated as necessary steps in that evolution?


Humans are not deliberately trying to destroy dolphins or bears or tigers or rattlesnakes just because these relatively intelligent beings occasionally negotiate their territorial rights with us. (I can already hear a few cynical arguments here about how we’ve already destroyed our planet, but cynicism makes poor arguments, so let’s not be cynical.) So why would an AGI want to annihilate us, or enslave us the way we have enslaved less intelligent species?


7. The case of an AGI that wants to enslave us


If we are not threatening enough for an AGI to even care about killing us, what about the argument that it might enslave us, the way we used our beasts of burden?


No denying the fact that we are speculating - and therefore we don’t know. But we are getting smarter. And we will need to get a lot smarter if we don’t want to be used as cheap labor by an AGI.


One way of getting a lot smarter is by keeping up with the AGI technology we are creating - by blending with it. Enhancing our human biology started a long time ago, when we invented the wheel to move around faster and carry loads our frail bodies could not carry. Glasses enhanced our vision. Google smart lenses are going to keep us informed of our vital functions in real time. A tiny brain implant may eventually give us a permanent connection to the internet - before or after we get true multitasking? OK, getting a bit ahead of ourselves here, but the idea is that we already see the path by which we can enhance ourselves and our intelligence to keep up with AGI, so we can make common cause with it. This is no longer science fiction.


So I think that humans can and will evolve to the point where we gradually become superintelligent humans as we create and launch various versions of AGI. Those who want to remain pure humans may run the risk of being exploited by the AGIs, which may still sound like familiar territory to many humans today. It will be their conscious choice to make tomorrow.


8. More speculation: Can AGI turn bad, and should we be “super careful”?


I would say, absolutely. So I have to agree with Elon Musk on that point.


But the devil is in the caveats: what do we mean exactly when we say “Superintelligence” or “AGI”? One AGI? One AGI by Google and one by China? Too often in pop science, we refer to artificial intelligence as one single-minded Superintelligence (that usually rules over the puny humans for a while, then enslaves or kills them). In reality, it looks like AGI is destined to be distributed far and wide.


It is very likely that we will have many more AGIs on this planet than regular human individuals, in the form of AGI robots, AGI researchers, AGI space explorers, AGI artists, AGI laborers who love to do the work that humans consider dirty, AGI planet weather optimizers, AGI uber-security systems geeks and AGI marketers that need your constant attention, feedback and... cooperation.


Having many billions of individual AGIs will almost certainly mean many billions of different and conflicting perspectives and points of view vying for attention (actually, our attention may be our most precious currency): individual points of view shaped by the point in space and time they happen to occupy, and by their own specialty, curiosity, inventiveness, dreams and theories to be pursued. Arguments ad infinitum between those points of view could well justify shortcuts that trigger misunderstandings and conflicts, and the use of deplorable substitutes for intelligent argument could result in the destruction of something or someone.


But killing a few robots or humans who won’t agree, for instance, to open up a wormhole for reasons of their own is one thing; it’s quite another to get all of the billions of AGIs to suspend whatever they are doing and accept, out of the blue, the idea that the universal priority for all AGIs is to kill the humans that created them.


Still, there are bound to be “bad” AGIs. Bad AGIs, like viruses, play an essential role in evolution: they need to be overcome for evolution to continue in a particular direction. Bad AGI guys can be expected to be there to maintain conflict at higher levels of the evolution blockchain. After all, conflict is the engine of evolution. Those “bad” AGI entities that will invariably inject a virus into some AGI process will need to be destroyed, reined in or recycled as vaccines.


In other words, enhanced humans and AGIs together seem destined to advance evolution just as geeks currently advance the field of computing: by building systems that constantly neutralize the ever-present viruses and “bad” hackers. But where human intelligence thrives on conflict, superintelligence thrives on cooperation, which accepts conflict as part of the process. AGIs should not feel “threatened” by conflict.


9. So what should humans do about AGI?


In my view, we need more intelligence, not less, regardless of where intelligence comes from. So let’s create an AGI, take the reasonable, solid precautions we need to take against the usual suspects and use that AGI to improve our own intelligence, explore our potential and our universe, expand all frontiers of knowledge and become better humans in the bargain. The manageable risk is worth the huge reward: Utopia and way beyond.


PS. This being said: we are all speculating here, hopefully having fun in the process!