AlphaGo is predicting moves in a game with well-defined rules, limited inputs, and a clear outcome. It was able to play against itself to quickly learn the best strategies. FSD, or LLMs for that matter, don't work that way.
Limited inputs? Go was explicitly chosen because it is a complicated game with FAR more possible plays than chess.
From Scientific American: “Go's complexity is bigger, much bigger. With its breadth of 250 possible moves each turn (Go is played on a 19 by 19 board compared to the much smaller eight by eight chess field) and a typical game depth of 150 moves, there are about 250^150, or 10^360 possible moves.”
That’s a lot for “limited inputs”. And that’s using technology from over a decade ago.
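The arithmetic in that quote is easy to sanity-check yourself. A minimal sketch (the chess figures of ~35 moves per turn over ~80 plies are a commonly cited estimate I'm assuming here, not from the quote):

```python
import math

# Rough game-tree size: branching factor ^ typical game depth.
go_branching, go_depth = 250, 150        # figures from the Scientific American quote
chess_branching, chess_depth = 35, 80    # commonly cited chess estimates (assumption)

# Work in log10 so the numbers stay manageable: log10(b^d) = d * log10(b).
go_exponent = go_depth * math.log10(go_branching)
chess_exponent = chess_depth * math.log10(chess_branching)

print(f"Go:    ~10^{round(go_exponent)} possible games")     # ~10^360, matching the quote
print(f"Chess: ~10^{round(chess_exponent)} possible games")  # a few hundred orders of magnitude smaller
```

So even against chess, itself an enormous search space, Go is larger by hundreds of orders of magnitude, which is exactly why brute-force search wasn't enough and AlphaGo needed learned evaluation.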
And if you had asked most people whether you could just show a neural network videos of cars driving and have it drive pretty darn well with almost no hand-written control code, they wouldn’t have believed you either.
And if I had told you, just two decades ago, that I could show a computer a random picture of a cat and it could identify that there’s a cat in the picture, that would have been dismissed as impossible too.
The car can already identify people of all shapes, sizes, and clothing with almost 100% reliability. Why do you think the capability stops there?
How is it that a human brain can drive in an infinite world of infinite possibilities? Yes, we have more neurons, but we also have to do a lot more than just drive.
Neural nets don’t have to be perfect drivers. No human is a perfect driver either.