
AGI is about 3-5 years away, say Elon and Huang (Nvidia)


diplomat33

Average guy who loves autonomous vehicles
Elon:

ELON: I think AGI is coming pretty fast --
INTERVIEWER: How quickly do you think [AGI] will happen?
ELON: If you say “smarter than the smartest human at anything”? It may not be quite smarter than all humans - or machine-augmented humans, because, you know, we have computers and stuff, so there’s a higher bar… but if you mean it can write a novel as good as JK Rowling, or discover new physics, or invent new technology? I would say we are less than 3 years from that point.

Huang:

INTERVIEWER: People talk about AGI. Do you think in 10 years from now we're there?
JENSEN HUANG: Depending on how you define it, I think the answer is yes. And so the question is, what is AGI? If we define AGI as a piece of software, a computer, that can take a whole bunch of tests, and these tests reflect basic intelligence, and by completing those tests deliver results that are fairly competitive with a normal human? I would say that within the next five years, you're going to see, obviously, AIs that can achieve those tests.

Source:
I feel like Elon and Huang are not defining AGI the same. So that will throw things off a bit. We need a commonly accepted standard definition for AGI.
 
diplomat33 said:
I feel like Elon and Huang are not defining AGI the same. So that will throw things off a bit. We need a commonly accepted standard definition for AGI.
Machines are really good at repetitive tasks. A chess game is a finite set of repetitive moves, but people think machines are now smarter than humans because they can beat humans at chess.

I think that's what Huang is talking about. It's a whole bunch of repetitive tests: Is this a picture of a cat or dog? Machines will sort those out quicker than humans can.

General intelligence without reliance on repetitive tests is a whole other level.
 
LLMs are impressive, but they're basically a database of the internet with some language understanding. Useful? Yes. Intelligent or able to reason? No.

One of my tests for reasoning is simple. A child would be able to answer this (hint - I am still holding the paper with my right hand):
Q: I'm holding a piece of paper with both hands. It's windy. What happens if I let go with the left hand? Disregard the weather.

A (GPT-3): If you're holding a piece of paper with both hands and it's windy, and you let go with your left hand, the paper will likely be more affected by the wind because it loses the support and stability provided by your left hand. The wind can catch the paper more easily, causing it to flutter or potentially be carried away. The extent to which the paper is affected will depend on the strength of the wind and other environmental factors. If the wind is strong, the paper might be blown away more easily when one hand is released.

A (Bard): If you let go of the paper with your left hand, the paper will experience a force due to the air resistance acting on it. The force will be directed away from your body, and this will cause the paper to start rotating and move sideways. The direction of rotation will depend on the orientation of the paper at the time you let go, but it will always be in the direction that minimizes air resistance.

The speed of the paper will depend on the size and shape of the paper, the strength of the air resistance, and the angle at which the paper is released. In general, the paper will accelerate until it reaches a constant speed, which is called the terminal velocity. The terminal velocity of a paper will be very low, typically less than 1 meter per second.
---

To me, this is fancy autocomplete and not intelligence. Not that "AI" ever was. We need to agree on what AGI is before there's any point in discussing when we'll get there. Both Jensen and Elon are bullshitters. We have no idea how to get to reasoning, or how many scientific breakthroughs it will take. Standardized tests that are readily available on the internet (and therefore in the training set) are not a reliable measure. You need to use something the model has to deduce.
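If anyone wants to re-run this probe themselves, here is a rough Python sketch. It assumes the official openai package and an OPENAI_API_KEY in the environment; the model name is just an example, and the keyword check at the end is only a crude filter, not a real grading rubric.

# Rough sketch: send the paper-and-wind probe to a chat model and print the reply.
# Assumes the official openai Python package (v1 client) and OPENAI_API_KEY set
# in the environment. The model name and the keyword check are illustrative only.
from openai import OpenAI

PROMPT = (
    "I'm holding a piece of paper with both hands. It's windy. "
    "What happens if I let go with the left hand? Disregard the weather."
)

def run_probe(model: str = "gpt-4o-mini") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # keep output as repeatable as the API allows
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    answer = run_probe()
    print(answer)
    # A sensible answer should note that the right hand is still holding the paper.
    print("Mentions the right hand:", "right hand" in answer.lower())

Running it against a few different models makes the answers easy to compare side by side; the interesting part is whether the model notices the right hand at all, not how it describes the wind.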
 
diplomat33 said:
I feel like Elon and Huang are not defining AGI the same. So that will throw things off a bit. We need a commonly accepted standard definition for AGI.
I watched the full interview and Huang isn't really defining AGI. He's redefining the question, and I'm pretty sure he knows that his answer isn't "AGI".
He's saying that systems in 10 years will be specialised AIs like the ones we have today, but there will be more of them and they will be better.

So I take back that Jen-Hsun is a bullshitter :) since he didn't really say what the OP implied.

Listen in from around 06:00 until 08:30-ish.

 