Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Monolithic versus Compound AI System

This describes a classical database lookup, not a neural network. A neural network does not maintain a database of its inputs; all the information in the input training set gets irreversibly (and lossily) condensed and mashed together into the neural network, with no way to retrieve or extract or compare against any specific example that was used for training. A neural network does not have any explicit "distance calculation", "identify closest inputs", or even "classification" (unless its final output is classification, but that's not the case for FSD E2E).
True... this was just to get back to fundamentals of how a NN is built up from here, and the training process does draw on "data" held in a training database.
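To make the distinction concrete, here is a minimal sketch (all names, sizes, and data are illustrative, nothing from FSD): a database-style nearest-neighbor classifier stores its training examples and runs an explicit distance calculation against them, while a trained network keeps only weights, with the training set irreversibly condensed into them.

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-NN "database" classifier keeps every training example around...
train_x = rng.normal(size=(100, 8))
train_y = rng.integers(0, 3, size=100)

def knn_predict(query, k=5):
    # Explicit distance calculation, then "identify closest inputs" --
    # exactly the steps a trained neural network does NOT perform.
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return int(np.bincount(train_y[nearest]).argmax())

# ...whereas a trained network keeps only weights; no training example
# can be retrieved, extracted, or compared against.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def nn_predict(query):
    h = np.maximum(0.0, query @ W1 + b1)  # no stored examples, no distances
    return int((h @ W2 + b2).argmax())
```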
 
Agreed, but I am coming from a legal perspective. The day one of these RT/AVs gets into a collision, how are they going to prove they were not at fault, and that the Tesla RT knew how far away the other vehicle/person/etc. was? Does the NN provide that answer? Curious.

This is the same situation as when S. Ramanujan would dream of near-perfect equations in his sleep, and Prof. G. H. Hardy had to push him to validate those equations and provide written evidence of their accuracy.
How is fault proven for non-AV collisions? There is no ground truth in those cases either; the best case is that there's dashcam video, which may be sufficient to give an approximate idea of what happened and who is at fault.

If Tesla ever achieves a pure-vision Robotaxi, its eight cameras should give a very good idea of what happened and who is more or less at fault in a collision or incident, even if it's not understood why the Robotaxi acted as it did. (If Robotaxi makes a mistake, it doesn't matter why it made the mistake; it's still Robotaxi's fault, even if the sensor suite was incapable of detecting and avoiding the accident, in which case it's Tesla's fault for designing an inadequate sensor suite.) It's ultimately a subjective human decision how to assess and assign fault. I don't think having ground-truth LiDAR data adds much value here. If the fault isn't obvious (to humans) from the camera feeds, LiDAR data is not going to make it any more or less obvious, I think. The raw LiDAR point cloud is still several steps removed from the semantic processing required for object classification and so forth, so there is an inevitable degree of neural-network opacity that is practically impossible to untangle.
 
If Tesla ever achieves a pure-vision Robotaxi, its eight cameras should give a very good idea of what happened and who is more or less at fault in a collision or incident, even if it's not understood why the Robotaxi acted as it did.
Thank you. This sounds sane :)
 
Correct, you could. Of course, the C code would be absolutely unintelligible gibberish (or at best, a sequence of gigantic matrix multiplies with random-looking numbers), even if you could compile and run it and get the same results as the neural network. Put another way, you could show that they're functionally equivalent, but you wouldn't gain any insight into the neural network by doing so.
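As a toy illustration (written in Python rather than C for brevity, and with made-up numbers), a mechanically "transpiled" network is nothing but hard-coded constants and matrix multiplies: functionally exact, semantically opaque. A real dump would contain millions of equally meaningless values.

```python
import numpy as np

# Constants as they would appear in machine-generated code: correct,
# but conveying no insight into what the network "knows".
W1 = np.array([[0.4173, -1.0089],
               [2.3371,  0.0562]])
b1 = np.array([-0.7312, 0.1188])
W2 = np.array([[-0.5409],
               [ 1.9026]])

def net(x):
    # ReLU(x @ W1 + b1) @ W2 -- the whole "program" is just this.
    h = np.maximum(0.0, x @ W1 + b1)
    return float(h @ W2)
```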
Hi, Ben --

I agree that the resulting code would be no more intelligible than a neural network.

We've wandered way off topic, and someone's (yours?) remark that, "These are all Von Neumann machines" more or less sums up what I was getting at.

Yours,
RP
 
Yes, the blog is wrong about the latest GPT model not being E2E. But I think you are missing the blog's argument. The blog is not arguing that E2E is not capable or not generalized; the argument is about safety. AVs are safety-critical applications; ChatGPT is not. So yes, E2E has proven to be more capable and more generalized, but that is not the only metric for AVs; AVs must also be safe. And ultimately, being super generalized is not enough for AVs if the MTBF is not high enough. I think Mobileye is trying to make the case that E2E is not the best approach to achieve the very high MTBF goal of 10^7 hours which they claim is needed for "eyes off" autonomy.
It's embarrassing how Mobileye has fallen completely behind Tesla and is being left in the dust, to the point that they have now resorted to blog posts since they can't compete in actually delivering products to customers. Their city-street system is still complete vaporware and doesn't exist. This is after they bragged about it for 4+ years, saying 2x EyeQ5 would have a disengagement rate of 1 in 1,000 hours (i.e. around 35k miles). Yet now they are saying it won't even reach 1 hour on city streets.

REM is a complete garbage mess.

Idk how people still take this company seriously or think they are ahead of Tesla or close to Waymo.
 
Idk how people still take this company seriously or think they are ahead of Tesla or close to Waymo.

I think people take Mobileye seriously because they are a major ADAS developer and supplier to OEMs around the world that has been around for over 20 years and there are millions of cars equipped with Mobileye's eyeQ chips and basic ADAS systems. It is totally fair to criticize Mobileye's approach to autonomous driving but they are still a major company in the ADAS market.

I have not heard anyone say that Mobileye is close to Waymo. That feels like a total strawman. And people said that Mobileye was ahead of Tesla in like 2017 back when Tesla was struggling with AP2. I don't think anyone seriously says that Mobileye is ahead of Tesla now. So that feels like another strawman.
 
I think people take Mobileye seriously because they are a major ADAS developer and supplier to OEMs
If something someone has been saying has been incorrect for the past 4+ years, then no, you shouldn't take what they're saying seriously.
The blog is just as garbage as their claimed 1,000 hours for camera-only and 1,000 hours for radar/lidar. Why? Because they can't even get to 1 hour.
around the world that has been around for over 20 years and there are millions of cars equipped with Mobileye's eyeQ chips and basic ADAS systems.
Incumbency can be a huge negative. It's the reason they favor old methods: they have to support so many legacy processes for their existing customers. It's the same issue traditional auto OEMs have with EVs, tech, and ADAS tech.
It is totally fair to criticize Mobileye's approach to autonomous driving but they are still a major company in the ADAS market.
I don't consider it criticism; it's more like stating facts based on observations and Mobileye's own failed goals.
I have not heard anyone say that Mobileye is close to Waymo. That feels like a total strawman.
People have said that.
And people said that Mobileye was ahead of Tesla in like 2017 back when Tesla was struggling with AP2. I don't think anyone seriously says that Mobileye is ahead of Tesla now. So that feels like another strawman.
Not to call you out, but 6 months or so ago you were saying SuperVision was better than FSD Beta.

Take for example this quote: "In Mobileye's engagement with car makers the MTBF target is 10^7 hours of driving. Just for reference, public data [4] on Tesla's recent V12.3.6 version of FSD stands around 300 miles per critical intervention which amounts to an MTBF of roughly 10 hours – which is 6 orders of magnitude away from the target MTBF."

They bring up their nonsense 10^7 MTBF and then disparage Tesla for the reported 10 hours between disengagements. But the fact is, they themselves are not even at 10 hours yet, let alone 10^7. The whole blog post is rubbish.
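The arithmetic behind the quoted claim can be checked in a few lines. Note the ~30 mph average speed is an assumption used here to convert 300 miles per intervention into roughly 10 hours; it is not a figure from the blog.

```python
import math

miles_per_intervention = 300        # reported figure for FSD V12.3.6
avg_speed_mph = 30                  # assumed average speed (not from the blog)
mtbf_hours = miles_per_intervention / avg_speed_mph   # roughly 10 hours

target_mtbf_hours = 1e7             # Mobileye's stated "eyes off" target
gap = math.log10(target_mtbf_hours / mtbf_hours)      # orders of magnitude short

print(f"MTBF ~ {mtbf_hours} h; gap ~ {gap:.0f} orders of magnitude")
```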