Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Tesla announced it will offer FSD to competitors

My feeling,

Elon saw he needed more experts to attempt to get FSD delivered and assembled the people into xAI. Now xAI will jump in and evaluate FSD and next steps. These experts will either make it happen or fail. There is tremendous exposure for Elon if he cannot deliver FSD. If he fails, Tesla drops in value and ends up as only an EV and energy company. By including another company as a potential FSD customer, such as Ford, there is even more exposure for Elon, because FSD failure will raise questions about Tesla as even an EV company.

It’s industry call out time on Elon. He’s delivered before, can he do it again?
I have faith that he will for he is a 4D chess master thinker and already sees the future.
In his mind, he’s already past FSD success and planning the next moves for Tesla, xAI, and the other companies.

Let’s watch how this plays out.
Fascinating. You have to agree.
 
Elon saw he needed more experts to attempt to get FSD delivered and assembled the people into xAI. Now xAI will jump in and evaluate FSD and next steps. These experts will either make it happen or fail. There is tremendous exposure for Elon if he cannot deliver FSD. If he fails, Tesla drops in value and ends up as only an EV and energy company. By including another company as a potential FSD customer, such as Ford, there is even more exposure for Elon, because FSD failure will raise questions about Tesla as even an EV company.
This makes no sense whatsoever. Are Elon and the other investors (if any) of x.ai going to give away their time and/or expertise to Tesla for free? This is wishful thinking if you ask me. Regardless, there is no team on earth that can make current Teslas become driverless autonomous in a meaningful ODD. That has nothing to do with the size/skills of the team, the amount of data, nor the size of the computer. It's just where machine learning and computer vision are today.

It would be a feat if they could remove the drivers in the LVCC loop within 3 years, but I honestly don't think that's going to happen.

It’s industry call out time on Elon. He’s delivered before, can he do it again?
I have faith that he will for he is a 4D chess master thinker and already sees the future.
In his mind, he’s already past FSD success and planning the next moves for Tesla, xAI, and the other companies.
It's been call out time on Elon since 2018, yet people keep believing. They should read up on computer vision and look at the competition.

Elon has made cars and even rockets in the past. Now he needs to push science a couple of steps forward. That's hardly the same thing. Tip: Never put unsolved research problems in the middle of your roadmap.
 
There are very few industry mavens such as Elon
His ability to see, plan, assemble and execute is leading the sustainability, EV, and now AI industries
Not asking you to be an Elon fanboy, just tell me who else is this effective as a maven?

FSD has these headwinds/issues to conquer:

AI talent, AI hardware, energy powering the AI infrastructure

-xAI will charge behind the scenes for all of the consult expertise Tesla uses to fix FSD
I think that as xAI works on the meaning of the universe, fixing FSD is on the list as an initial deliverable proving the sub-company
Do you deny xAI has top AI industry talent on board and is attracting others? xAI will be a magnet, addressing the talent issue
-It was said Nvidia is prioritizing GPU deliveries to Tesla, helping with the hardware supply issue
-Tesla Energy, more than anyone, will supply the Tesla AI power needs, conquering the power issue stated

I am confident in Tesla FSD and so is “Ford”
 
It's also at 12 miles per disengagement. For eyes-off autonomy you need 100,000 miles or more between disengagements. I wouldn't call it awesome outside a tech-demo context.
Okay, so you are a naysayer.

I said what I said. I said what I believe; there are MANY others here who feel the same way.

And yes, there are a select group on the forum who think that FSD is a death trap.

And read my post again. I took extreme care in writing it. (Because I expected the pushback that you gave.) Even if it does have an average of 12 miles per disengagement, that means that for 10-mile trips, it may not need disengaging. It also means that for some 30-mile trips, it may not need disengaging.
And honestly, there are a LOT of people who disengage it when it will perform the maneuver correctly.
Just recently my wife came up on a biker in the right lane. She disengaged. I told her that the car can now do it and she reengaged. And the car indeed did it perfectly. It slowed for the biker, waited until there was clear visibility and no cars and then moved out in the road and passed. (and this was indeed going into a curve with oncoming cars)

So for me, the general disengagement numbers don't adequately represent what the current software can do. I know that there are a number of times that I come up to an intersection that I know the car will do, but I just don't want to wait on the car. I disengage, but I know the car can do it.
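The trip-length intuition above can be sketched with a simple model. Assuming disengagements arrive independently at a constant per-mile rate (a rough first approximation for illustration, not how anyone formally measures this), the chance of a clean trip falls off exponentially with distance:

```python
import math

def p_clean_trip(trip_miles: float, miles_per_disengagement: float) -> float:
    """Probability of zero disengagements on a trip, modeling
    disengagements as a Poisson process with a constant per-mile rate."""
    return math.exp(-trip_miles / miles_per_disengagement)

# Using the 12-miles-per-disengagement average discussed in this thread:
print(round(p_clean_trip(10, 12), 2))  # ~0.43: many 10-mile trips finish clean
print(round(p_clean_trip(30, 12), 2))  # ~0.08: a few 30-mile trips do too
```

So both claims can be true at once: plenty of short trips need no disengagement even though the average rate is low.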
 
Just reviewing the levels chart.


I guess if the NHTSA or NTSB ever finds an instance where FSD is at fault, that automatically puts the car at Level 3 (human and car are responsible)

And as I implied, FSD is performing somewhere between Level 4 and Level 5 on the Automation and Conditions lines. All it takes is changing the responsibility to make it a Level 4.

Traffic Aware Cruise Control with lane following is all that's required for Level 2. The chart makes it seem that it's a small jump between Level 2 and Level 3. Tesla was well beyond the Automation and Condition requirements 4 years ago.

So it comes down to disengagement and safety. Why do you think that Tesla reports that information? Keep the safety as good as a driver and the disengagements down and that starts the path to removing the steering wheel.

Okay, so you are a naysayer.

I said what I said. I said what I believe; there are MANY others here who feel the same way.

And yes, there are a select group on the forum who think that FSD is a death trap.

And read my post again. I took extreme care in writing it. (Because I expected the pushback that you gave.) Even if it does have an average of 12 miles per disengagement, that means that for 10-mile trips, it may not need disengaging. It also means that for some 30-mile trips, it may not need disengaging.
And honestly, there are a LOT of people who disengage it when it will perform the maneuver correctly.
Just recently my wife came up on a biker in the right lane. She disengaged. I told her that the car can now do it and she reengaged. And the car indeed did it perfectly. It slowed for the biker, waited until there was clear visibility and no cars and then moved out in the road and passed. (and this was indeed going into a curve with oncoming cars)

So for me, the general disengagement numbers don't adequately represent what the current software can do. I know that there are a number of times that I come up to an intersection that I know the car will do, but I just don't want to wait on the car. I disengage, but I know the car can do it.

The issue is that what you believe is entirely wrong, so wrong that even Tesla tells you that you are wrong.


FSD Beta is an SAE Level 2 driver support feature that can provide steering and braking/acceleration support to the driver under certain operating limitations. With FSD Beta, as with all SAE Level 2 driver support features, the driver is responsible for operation of the vehicle whenever the feature is engaged and must constantly supervise the feature and intervene (e.g., steer, brake or accelerate) as needed to maintain safe operation of the vehicle.

Disengagement rate does not define autonomy levels; in fact, you can have an L4 system that disengages every 10 miles. But that's not a product anyone would be willing to bet their life on, or so I thought. What defines the autonomy level is the design intent for that particular system, wherein the manufacturer of that system has to declare who is responsible for driving when said system is engaged. The levels are not defined by how well or badly it drives, aka disengagement rate. Disengagement rate is just a metric used to measure progress on MTBF (mean time between failures). Would you want to ride in the back seat of an L4 car that fails every 10 miles? L4 meaning you are not in control of the vehicle or in a position to take over when the system fails.
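To put numbers on that MTBF gap: going from the ~12 miles per disengagement cited in this thread to the ~100,000 miles mentioned for eyes-off operation is a quick back-of-the-envelope calculation (both thresholds are figures from this discussion, not official ones):

```python
import math

current_mi = 12      # miles per disengagement, crowdsourced figure cited in thread
target_mi = 100_000  # rough eyes-off threshold discussed in thread

factor = target_mi / current_mi
print(f"improvement needed: ~{factor:,.0f}x")

# Even if reliability doubled with every major release, that's still:
print(f"doublings required: ~{math.log2(factor):.0f}")
```

A roughly 8,000-fold reliability gap is the scale of improvement being argued about, whichever side of the debate one takes.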
 
The issue is that what you believe is entirely wrong, so wrong that even Tesla tells you that you are wrong.




Disengagement rate does not define autonomy levels; in fact, you can have an L4 system that disengages every 10 miles. But that's not a product anyone would be willing to bet their life on, or so I thought. What defines the autonomy level is the design intent for that particular system, wherein the manufacturer of that system has to declare who is responsible for driving when said system is engaged. The levels are not defined by how well or badly it drives, aka disengagement rate. Disengagement rate is just a metric used to measure progress on MTBF (mean time between failures). Would you want to ride in the back seat of an L4 car that fails every 10 miles? L4 meaning you are not in control of the vehicle or in a position to take over when the system fails.

They don't define the levels. But one part of the bogusness of the levels is that they are nice square blocks with no methodology of validation.

Truly, if Tesla were to turn off the nags, the cars would operate in a Level 4 mode.

Again, the blocks don't show any requirements of how safe that the vehicle has to be. They really don't even give a clear difference between Level 4 and Level 5. It's got nebulous statements such as "all conditions." Does that mean that the car has to drive through tornados?

User Disengagement rates are an indication of how safe the drivers feel with the cars. Car disengagement rates are more related to "can the car do it." There are very few car disengagement events, indicating that the car can do it.

So by monitoring the user disengagement rates, the car disengagement rates, and the safety numbers, a quantifiable measurement can be made of the success of the software.

Again, all it takes today is for Tesla to turn the nags off and the car meets Level 4 today.
 
There are very few industry mavens such as Elon
His ability to see, plan, assemble and execute are leading sustainability, EV industries and now AI industries
Not asking you to be a Elon fanboy, just tell me who else is this effective as a maven?
Elon is a fantastic investor and marketer. No doubt. Is he Tony Stark or even an engineer? No. I wouldn't call him a good leader, based on his management style.

FSD has these headwinds/issues to conquer:

AI talent, AI hardware, energy powering the AI infrastructure
Oh, your sole source of information is Elon....

If talent, hardware and data (I assume you mean data) were the only things required, then why don't we already have Level 5 autonomy? All the smart people have been working on this problem for at least 15 years...

I'll add to this list:
* Getting computer vision to work unsupervised in a safety critical environment

This hasn't been done by anyone anywhere. Not even close. Not even for still images. It requires scientific breakthroughs. Can you give me a single example of an AI scientific breakthrough (or any other scientific breakthrough, for that matter) coming from an Elon-operated company? They've done very impressive applied engineering, but that's not the same thing as breaking research barriers.

* Being able to validate this in some operational domain and prove it to consumers and regulators.
First step is: define an ODD in which you say the car is capable of autonomy. Ask yourself why Tesla hasn't defined an ODD. Will they go from nothing to everywhere? By magic?

-xAI will charge behind the scenes for all of the consult expertise Tesla uses to fix FSD
I think that as xAI works on the meaning of the universe, fixing FSD is on the list as an initial deliverable proving the sub-company
Do you deny xAI has top AI industry talent on board and is attracting others? xAI will be a magnet, addressing the talent issue
-It was said Nvidia is prioritizing GPU deliveries to Tesla, helping with the hardware supply issue
-Tesla Energy, more than anyone, will supply the Tesla AI power needs, conquering the power issue stated
xAI doesn't have the "top talent". They hardly exist. Google DeepMind and some others (OpenAI, Anthropic, Meta, Microsoft etc.) have the top talent. Have you heard of protein folding? AlphaZero?
Nvidia? Aren't you a Dojo-chip believer?

Oh you meant energy. You must understand that "Tesla Energy" is not an actual supplier of energy or electricity. 🤣
I am confident in Tesla FSD and so is “Ford”
Ignorance is bliss. Source for "Ford"? No Elon doesn't count.
 
Again, all it takes today is for Tesla to turn the nags off and the car meets Level 4 today.
You're right that J3016 doesn't talk about safety metrics. But this statement is completely false.

The FSD system is not capable of performing the full OEDR and doesn't handle DDT fallback. Are you claiming FSD can handle its own failures? If so, which? It's not even capable of Level 3 fallback with a human driver. Finally, you need a defined ODD for Level 4, which Tesla currently hasn't bothered to define, since they will "jump" magically from nothing to everything.

That makes it a L2 system, regardless of safety metrics.




Watch this:
 

* Getting computer vision to work unsupervised in a safety critical environment
This happens every day, millions of times each day. And it does it significantly faster and more reliably than humans do. A great example of this is inspection lines. Computer vision is on a vast number of production lines watching items pass by. And on many of these lines, an incorrect interpretation can impact the local workers or the end-users of the product.
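For contrast with driving, here is a toy sketch of the kind of constrained inspection check described above: fixed camera, fixed lighting, one known-good reference. Everything here (the template comparison, the tolerance value) is a hypothetical illustration, not how any real inspection product works:

```python
def part_passes(frame, template, tolerance=30.0):
    """Toy inspection check: pass a part whose grayscale image stays close
    to a known-good template. Workable only because the scene is tightly
    controlled -- the opposite of an open-road driving environment."""
    diffs = [abs(f - t)
             for row_f, row_t in zip(frame, template)
             for f, t in zip(row_f, row_t)]
    return sum(diffs) / len(diffs) <= tolerance

good = [[128] * 8 for _ in range(8)]          # uniform known-good part
blemished = [row[:] for row in good]
for r in range(2, 6):
    for c in range(2, 6):
        blemished[r][c] = 255                 # bright 4x4 defect
print(part_passes(good, good))                # True
print(part_passes(blemished, good))           # False
```

The controlled setting is what makes a simple threshold workable; the disagreement in this thread is over whether anything similar transfers to an unbounded road environment.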
* Being able to validate this in some operational domain and prove it to consumers and regulators.
Elon and Tesla are not designing a product that works in one operational domain. They are designing a product that works in all operational domains. Cruise and Waymo have products that work in specific operational domains, but that's not where Tesla wants to be. They have stated that fact multiple times.


Why don't we have Level 5 autonomy?

That's easy: the stupidity of human drivers. Get the humans off the roads and we'd be there. The problem is learning how to drive better than humans while putting up with their seemingly complete randomness and stupidity. From an automation and conditions point of view, my car does it now. From the "who is driving" point of view, it's debatable. If I don't provide any input into the car, am I driving the car?

Trust me, xAI will get the talent in short order. Most every technical resource wants to work for Elon. It's with Elon that you can see the future. It's with Ford that you see the past.
 
* Getting computer vision to work unsupervised in a safety critical environment
This happens every day, millions of times each day. And it does it significantly faster and more reliably than humans do. A great example of this is inspection lines. Computer vision is on a vast number of production lines watching items pass by. And on many of these lines, an incorrect interpretation can impact the local workers or the end-users of the product.
Do people die if these machines make a single mistake? No. You have no concept of what "safety critical" means. Perhaps Google "safety critical use of computer vision". The closest you'll get is radiology. And that's supervised. Do you think radiology or autonomous cars are harder? In driving you need to make sub-second decisions. That's called "time critical." And then there is video, other agents and so on.
* Being able to validate this in some operational domain and prove it to consumers and regulators.
Elon and Tesla are not designing a product that works in one operational domain. They are designing a product that works in all operational domains. Cruise and Waymo have products that work in specific operational domains, but that's not where Tesla wants to be. They have stated that fact multiple times.
You need to understand that in the real world, engineering doesn't work like this. Only in fantasy land. Elon sells a fantasy: "camera-only, unbounded robotaxis". He might as well say a Nintendo Switch is capable of AGI. But, alas, people are so terribly stupid and easily charmed by science-fiction dreams. You just want it to be true, right?

When Tesla defines an ODD, we may have 3-5 years until they work reliably there based on Elon's history and current progress of the FSD team.
Why don't we have Level 5 autonomy?
That's easy: the stupidity of human drivers. Get the humans off the roads and we'd be there. The problem is learning how to drive better than humans while putting up with their seemingly complete randomness and stupidity. From an automation and conditions point of view, my car does it now. From the "who is driving" point of view, it's debatable. If I don't provide any input into the car, am I driving the car?
Explain to me why there are at least five companies that are capable of driverless operation among other human drivers and Tesla isn't one of them?
 
1) Do people die if these machines make a single mistake? No. You have no concept of what "safety critical" means. Perhaps Google "safety critical use of computer vision". The closest you'll get is radiology. And that's supervised. Do you think radiology or autonomous cars are harder? In driving you need to make sub-second decisions. That's called "time critical." And then there is video, other agents and so on.

2) You need to understand that in the real world, engineering doesn't work like this. Only in fantasy land.

3) Explain to me why there are at least five companies that are capable of driverless operation among other human drivers and Tesla isn't one of them?

1) Absolutely
2) I am an engineer, I understand the concept and reality.
3) I know that there are ZERO that will pick me up today.

I know my Tesla will take me to where I want to go.
 
Honestly, the labels of the levels are worthless. It is very possible that a Level 2 system could go to Level 5 in one day.
No they aren't, and no it isn't. Just because you don't understand the concepts of "full OEDR" and "DDT including fallback" doesn't make them "worthless". SAE J3016 isn't written for laymen, which gets proven here over and over again, every single day.

There are many people that have taken their Teslas on large numbers of trips with no disengagements. That number is only going to get higher.

And that's a capability that NO ONE ELSE can claim.
What? Literally ALL driver-assist systems and autonomous systems are improving all the time. Regarding FSDb, it's actually hard to see whether it is improving at a meaningful pace towards autonomy at this time. Think 50,000-100,000 miles per failure. It's now at 12 miles.

 
No it isn't. Just because you don't understand the concepts of "full OEDR" and "DDT including fallback" doesn't make them "worthless". SAE J3016 isn't written for laymen, which gets proven here over and over again, every single day.


What? Literally ALL driver-assist systems and autonomous systems are improving all the time. It's actually hard to see whether FSDb is improving at a meaningful pace towards autonomy at this time. Think 50,000-100,000 miles per failure. It's now at 12 miles.

Why do you think that Tesla doesn't have "full OEDR"? And by the way, humans don't either. How many deer are hit each year? Squirrels? Armadillos? Let alone walls. We had one dump truck hit an Interstate bridge at 70 mph when its bed was slightly raised. "Full OEDR" is pretty comical.
DDT including fallback? Why do you think that it is missing? Take a look at the CPU architecture: there's a backup online and running, with supervisory algorithms monitoring and waiting for a failure.

But the most important thing is that the SAE specs were written with the concept of basic programmatic flow as it existed when written. Today's AI truly changes a lot of that.

Tesla is improving, and it is improving at a rapid pace. It was very few years ago that it had problems with things like sharp curves; those are a piece of cake now. The latest releases are able to handle bicycles on roads and correctly pass them. The numbers that you are looking at are NOT failures; they are disengagements reported by a very small number of people. I dare say that a large number of these disengagements are driver-initiated because they didn't think that the car was going to perform the maneuver correctly, but odds are that the car would.
I'm pretty sure that I can be in the car with you and you'd have a certain number of disengagements and then if I drove the exact road in the exact conditions, that my number of disengagements would be much smaller, probably 0.
I mean really, using stats when the max number of miles driven is only 6,679?

I know, you don't believe that Tesla will ever do it. That's fine, but you aren't going to change the minds of the many who do believe it and who do see the advantages of it.

We know how well it works!