adiggs
Well-Known Member
We are at a critical time in history. If left unchecked, AI will quickly surpass humans in terms of brain power, and right after that point, AI will become thousands or millions of times more powerful than the human brain; this vertical phase may take only weeks or months. At that point AI will work on AI, and we become ants.
SoftBank CEO Son thinks robots will never surpass humans in imagination. That is totally wrong. There is nothing so special about imagination that AI cannot do it. Just wait until their IQ reaches 800.
Elon fully understands the risk of AI.
I work in a highly related field (though far from claiming expertise - only greater than complete ignorance) - and I haven't yet seen evidence of a computer program / AI that can solve the problem of "what is the problem that needs to be solved?" Or, relatedly, "what is the opportunity that should be taken advantage of?"
When there is a defined objective or winning condition (chess, Go, sabermetrics), we've seen computer programs of various kinds that can do a better job of solving that problem than humans can. But I haven't seen even signs of life of a program that could successfully pick chess as a problem to solve, much less define what "winning" or "success" in chess would look like, in order to then go off and solve chess better than humans.
In the case of autonomous driving, we're seeing evidence of computer programs that can drive cars as well as or better than humans. However, I haven't yet seen evidence of a computer program that can pick "autonomous driving" as a problem to be solved, much less define the objective or winning condition for autonomous driving to solve for, so that the program could write the program that drives the car as well as or better than a human.
I'm not saying it's impossible for a computer program / AI to reach the point where the program is the one identifying the problem / opportunity to be solved and defining success in solving it, so that the AI can then go solve the problem. Only that I have never seen or heard signs of life, anywhere, of an AI / computer program being able to do so.
I also have no personal evidence, signs of life, or even a functional mental model of how that would work or what it would look like.
I'm also not holding my breath waiting for the day when a computer / program or AI is able to identify the problem to be solved. I use AI and related techniques daily to help me sift through big piles of data in order to inform and improve the decision making I'm involved in, and I expect AI and related techniques to continue helping humans make a wider and wider variety of such decisions.
What's interesting to me in the chess and sabermetrics examples (Nate Silver discusses these, among others, in his book The Signal and the Noise) is that on the other side of the AI / computer program getting good enough to beat humans, the next evolution in the relationship - the best solution to the problem - turns out to be neither human nor AI / computer program on their own. Instead it's some sort of combination / hybrid of the two forms of input.
In the case of Go, my guess based on the prior art is that over the next few years, we'll see an evolution within Go to where the very best Go "player" will be some sort of team made up of a mix of human and AI / computer program. I could, of course, be wrong about that.
Either way, as complex as Go is, it's trivial next to autonomous driving, which is itself trivial next to the sorts of problems AIs will need to start solving in order for AI to have "imagination" or anything like brain power. The central problem being "what is the problem / challenge / opportunity that needs to be solved?", and then the follow-up question, "what constitutes success / what do we solve for?"
I'm not worried about AI solving that problem, based on any work I'm aware of going on.