Level 5 would need to. Level 4 just needs to move the car to a safe place and come to a safe stop.
I don't think it actually needs to do that (give up on the trip) in this case. It just needs to plan a different route to avoid this ULT. We've discussed the U-turn alternative, and there are other ways around it that of course waste even more time, may be fodder for derision, but IMO are acceptable for L4 autonomous safety.
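For what it's worth, routing around a risky maneuver is just a cost-function tweak for the planner, not some special case. A minimal sketch (hypothetical graph and penalty values, using networkx for brevity):

```python
import networkx as nx

# Hypothetical road graph: edge weights are travel seconds, and edges that
# represent a risky maneuver (like this ULT) carry an extra penalty.
G = nx.DiGraph()
G.add_edge("home", "median_opening", seconds=60)
G.add_edge("median_opening", "highway_north", seconds=10, risky=True)  # the ULT
G.add_edge("home", "right_turn", seconds=70)
G.add_edge("right_turn", "u_turn", seconds=90)
G.add_edge("u_turn", "highway_north", seconds=30)

RISK_PENALTY = 600  # seconds; big enough that the planner only takes the
                    # ULT when no reasonable alternative exists

def cost(u, v, data):
    return data["seconds"] + (RISK_PENALTY if data.get("risky") else 0)

print(nx.shortest_path(G, "home", "highway_north", weight=cost))
# ['home', 'right_turn', 'u_turn', 'highway_north']
```

Wasting a couple of minutes on the detour only costs time; the penalty just has to encode how much risk we're willing to trade for it.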

There's an honest difference of opinion among observers here. Some people hold that this turn exists, that many humans negotiate it every day even during rush hour, and that FSD can't be taken seriously unless it does the same. Others think this represents an unsafe and poorly planned intersection, maybe a holdover from 40 years ago when traffic wasn't so bad, and one which deserves at least a No Left Turn sign, if not a closure of the median opening.

My opinion is somewhere in between. I wouldn't want to attempt this turn myself, I really wouldn't want my Tesla to try it under my supervision, and I think it's no shame on Tesla if they train FSD to avoid this and other risky or very tricky cases. OTOH, I think it's a tough decision for the traffic department to take away the option from drivers who've been making this turn from their neighborhoods for decades. Chuck has said that he thinks there are actually fewer accidents there than at the traffic light up the street. That's often the case; drivers recognize these nasty and dangerous situations, so they pay attention and don't take them lightly.

A cynical but possibly true calculus is that the city would like to close these medians, but they need to wait until one or two people get killed so that they have the justification to do so.

Whether L2 or L4, I would very much like my Tesla to have a better facility to let me guide the route. Ashok actually mentioned this, suggesting that v12 could eventually listen to guidance suggestions in real time - more natural and adaptable than some kind of complex route management on-screen.
 
  • Helpful
Reactions: GSP
If the approach is to rely on vision, then there has to be 360 degree horizontal coverage with cameras.
I definitely agree with your overall point, but it's been a source of confusion that the present camera suite actually does have "360° coverage". The problem is that this 360° is made up of viewing-angle components that are too easily obstructed in common situations:
  • View of cross-traffic to the left and right is too easily obstructed by roadside objects or other vehicles pulling up next to ego.
    • Because the side-facing cameras are too far back.
  • View of oncoming traffic is too easily blocked by stopped cars facing ego that are waiting to make their own left turns.
    • Because all of the forward-facing cameras are clustered at the center of the windshield.
  • View of close front obstructions is inadequate. Mostly an issue for parking, but also for best recognition of road debris.
    • Because all forward-facing cameras are high on the windshield.
  • View of rear cross-traffic is inadequate. Mostly an issue when backing out of a parking space.
All of the above could have been solved (or at least very substantially improved) with a different, more corner-weighted placement of approximately the same number of cameras; the sketch below puts rough numbers on the first bullet. Inexpensive camera cleaning measures are another related topic.
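To make the first bullet concrete, here's a back-of-the-envelope occlusion calculation (all distances hypothetical): how far down a cross street a camera can see past an obstruction, as a function of how far back the camera sits.

```python
# Back-of-envelope occlusion geometry (all numbers hypothetical).
# Ego creeps to a stop line; an obstruction sits to the left, its near
# corner `lateral_gap` meters left of the camera's line of travel. Cross
# traffic runs `lane_dist` meters beyond that corner. By similar triangles,
# the sightline grazing the corner reaches the cross lane at:
#     sight = lateral_gap * (lane_dist + setback) / setback
# where `setback` is how far the camera sits behind the corner.

def sight_distance(lateral_gap, lane_dist, setback):
    """Meters down the cross street visible past the obstruction."""
    return lateral_gap * (lane_dist + setback) / setback

for label, setback in [("near bumper", 1.0), ("B-pillar", 2.5)]:
    s = sight_distance(lateral_gap=2.0, lane_dist=4.0, setback=setback)
    warning = s / 20.0  # seconds of warning for 20 m/s (~45 mph) cross traffic
    print(f"{label:11s}: sees {s:4.1f} m down the lane, {warning:.2f} s warning")
# near bumper: sees 10.0 m down the lane, 0.50 s warning
# B-pillar   : sees  5.2 m down the lane, 0.26 s warning
```

The exact numbers don't matter; the point is that a camera mounted further back must creep proportionally further into the conflict zone to earn the same sightline.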

I don't see all this as a showstopper, but it's unfortunate. And the very nature of v12, training on massive fleet data, is a big inhibitor against making substantial changes.
 
  • Like
Reactions: aronth5
"Exists and is meaningfully better" was the point.

It literally was not.

The discussion in question spawned from this post:

Is there any actual evidence that a V12 exists, beyond someone saying, "When we figure out something that works, we're going to name it V12"?

They asked for evidence of it existing. At all.

Which is Elon's drive with it, discussing the architectural differences between V12 and V11, plus the head of AP software's comments both during and after the drive.

It exists. Question answered.

Anything about how good it is, or whether it's better or worse, is a different discussion (and a fairly pointless one right now, given we have zero information about what a customer-facing release will look like, or anything beyond a few high-level architecture notes).
 
  • Like
Reactions: Nakk and sleepydoc
3 lanes each way - speeds above 50 mph. There is not a single urban situation I know of in Seattle metro that would allow a ULT in such a situation.
In Wisconsin (and likely other states) they actually force you to take a right and then make a U-turn across the median rather than try a dangerous left turn across multiple lanes of fast traffic. They do this on rural 4-lane highways, so an urban intersection like Chuck's ULT would clearly be a candidate as well.
 
I don't think it actually needs to do that (give up on the trip) in this case. It just needs to plan a different route to avoid this ULT.
Agreed - I don't see the issue. As a driver, I do this on a regular basis. If there's an unsafe maneuver that I can avoid by altering my route, I consider that smart driving, not failure.
 
I don't think Chuck's ULT is particularly common. But what is common is making a left out of a driveway/parking lot into a suicide lane and then merging with traffic. This is effectively the same turn.
Chuck's ULT has several features that make it a challenge. First, the cross traffic nominally flows at a high posted limit, but in fact tends to exceed it. Second, there are no nearby upstream traffic signals that would create predictable gaps. Third, the entry is onto a multilane highway via a median. Lastly, vegetation partially obstructs the view to the left. Given the high speeds, ego's reaction time is critical, and its decision making is often downright scary. We can argue forever about whether this is a hardware or a software flaw, but why handicap the NN when it's possible to improve the sensor array?
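Some rough gap-acceptance arithmetic shows why those speeds dominate everything else (all numbers hypothetical, not measured at Chuck's intersection):

```python
import math

# Ego launches from a stop and must clear the near-side lanes before
# reaching the median refuge. All numbers are illustrative guesses.
clear_dist = 15.0   # m of roadway to clear (roughly three lanes)
accel = 2.5         # m/s^2, a comfortable launch from standstill
t_clear = math.sqrt(2 * clear_dist / accel)   # ~3.5 s

v_cross = 24.6      # m/s (~55 mph; traffic "doing 45" often runs this)
margin = 1.5        # s of buffer so cross traffic never has to brake

gap_needed = (t_clear + margin) * v_cross
print(f"clear time {t_clear:.1f} s -> need ~{gap_needed:.0f} m of open road")
# clear time 3.5 s -> need ~122 m of open road to the left
```

A hundred-plus meters of required sightline is exactly where partially obstructing vegetation and rearward camera placement start to bite.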
 
  • Like
Reactions: EVNow
Mercedes fits the definition of Level 3 (traffic jam assist). There's no committee that decides whether something is actually level "X"; we use the levels to shape discussion. A car that couldn't make those ULTs would not be considered Level 4 by most... but we have people in this forum who think Tesla is Level 3 right now, and others have even argued Tesla can already do Level 4, so take that as you will.

Making those turns routinely and safely is a necessity for actual full self-driving/robotaxi. There will be instances where they can avoid one by turning right and making a U-turn later, but not always.
Waymo and Cruise have avoided ULTs for years (sometimes taking significantly longer routes), and the public had no problem considering them L4 even back then. So I don't think it particularly matters for L4 or robotaxi operation.
 
Waymo and Cruise have avoided ULTs for years (sometimes taking significantly longer routes), and the public had no problem considering them L4.
People say this about Waymo, but in other Tesla groups I've seen multiple videos of Waymo taking very difficult ULTs very well.

Regardless, people here question Waymo all the time and say they are behind Tesla (crazy). That's my point: more people (outside the Tesla fan club) either want FSD to fail or are convinced it will than believe it will be successful. The optics of being unable to make a turn that is routine in some states (Florida) would not be received well.

For those saying "who cares what people think": look at the scrutiny Waymo and Cruise get from the public. FSD is largely called vaporware. Opinions matter if you want mass adoption and acceptance from local governments.
 
  • Like
Reactions: spacecoin
People say this about Waymo, but in other Tesla groups I've seen multiple videos of Waymo taking very difficult ULTs very well.
I'm not saying they can't make ULTs now, but it's basically a fact that they routed around them for years, and practically no one said the cars weren't Level 4 for doing so (even if they complained about it). In fact, I remember when the Chuck videos came out, people talked about how FSD Beta was dumb for not knowing how to reroute when it couldn't make a turn (instead it kept retrying the failed turn in an infinite loop).

From a quick google, I'm going to cite Brad Templeton who comments here and knows the history more than anyone:
"Waymo has been famously criticized for trying hard to avoid even doing these turns, to the point that it will pick a longer route with 3 right turns to avoid that unprotected left, even when it’s not a good strategy."
Why Don’t You Have A Self-Driving Car Yet? Part Two Outlines Some Social Problems

Regardless, people here question Waymo all the time and say they are behind Tesla (crazy). That's my point: more people (outside the Tesla fan club) either want FSD to fail or are convinced it will than believe it will be successful. The optics of being unable to make a turn that is routine in some states (Florida) would not be received well.

For those saying "who cares what people think": look at the scrutiny Waymo and Cruise get from the public. FSD is largely called vaporware. Opinions matter if you want mass adoption and acceptance from local governments.
Look at the comments on some FSD Beta YouTube videos: many people (not just Tesla fans) think Tesla is ahead because you can just buy a Tesla and experience "self driving" for yourself anywhere in the US, while you can only do that with Waymo if you happen to live in a limited number of cities. And there are half a million users and a half-billion miles so far.

They don't know or care about the technical differences between Level 2 and Level 4, which is why I say a very narrow issue like avoiding ULTs will hardly matter to the general public. To them, a "door to door" L2 system is the "same" as an L4 vehicle under test with a safety driver, even though technically it's not.
 
  • Like
Reactions: GSP
Hopefully, Tesla's data and telemetry infrastructure is flexible enough to allow experimentation with these kinds of concepts. Ashok mentioned that they were looking at the system being able to take verbal suggestions or directives from the human operator/passenger.
Tesla's existing map service already annotates navigation routes with lane information, and soon with "object on road" hazards detected by the fleet. So potentially this could grow to include relatively free-form directives, both fixed and dynamic, e.g., "slow when school is in session" or "watch out for object in left lane." Of course Tesla's primary approach is to have the neural networks handle situations without relying on these map annotations, so hopefully extensive training on similar situations lets Vision solve most edge cases, with only the difficult corner cases getting the special treatment.
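Something like the following could represent those hints; a hypothetical schema sketch, not Tesla's actual format:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical schema for free-form route annotations as described above.
# Fixed hints persist; dynamic ones are fleet-reported and expire.

@dataclass
class RouteAnnotation:
    lat: float
    lon: float
    hint: str                          # free-form directive for the planner
    dynamic: bool                      # fleet-reported vs. permanently mapped
    expires_utc: Optional[str] = None  # only dynamic hints expire

annotations = [
    RouteAnnotation(27.86, -82.69, "slow when school is in session",
                    dynamic=False),
    RouteAnnotation(27.87, -82.70, "object reported in left lane",
                    dynamic=True, expires_utc="2023-10-27T18:00:00Z"),
]
```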

It'll be interesting to see what these tougher situations end up being for end-to-end.
 
  • Like
Reactions: JHCCAZ
We can argue forever about whether this is a hardware or a software flaw, but why handicap the NN when it's possible to improve the sensor array?
Yes, it's technically possible, and I would say also inexpensive (in cost-per-unit terms) to improve the camera placements. I also think the cost-effective medium-def radar is sensible, and I believe the end-to-end approach could be a key factor in solving the much-discussed sensor fusion problem.

However, I think the probability of Tesla making such a change is very small, and it recedes every month. The end-to-end approach is so heavily based on large fleet data that a disruptive step change in new vehicles becomes increasingly unlikely: it would basically split the fleet into two separate groups, with the vast bulk of available data useful only for the obsoleted cars, and an inadequately sparse amount of data crippling the potential of the newer and presumably better cars.

From this point of view, it's practically a non-starter for Tesla. It's asking them to throw away the very advantage, a huge one, that they have over every other player who might attempt the promising data-heavy E2E approach.

Having said the above, it might be possible for Tesla to develop some kind of surround-video transform algorithm: one that would let them use training data from the existing cars to create a virtual data set for a new sensor suite in redesigned cars, and vice versa.
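The core of such a transform is standard multi-view geometry. A minimal sketch, assuming known camera intrinsics, the rigid transform between mounting points, and per-pixel depth (estimating that depth reliably is exactly the hard part):

```python
import numpy as np

def reproject(pixel_uv, depth, K_old, K_new, R, t):
    """Map a pixel seen by an existing camera into a virtual camera at a
    new mounting point. `depth` is the point's z-distance in the old
    camera frame; K_* are 3x3 intrinsics; (R, t) moves old frame to new."""
    u, v = pixel_uv
    ray = np.linalg.inv(K_old) @ np.array([u, v, 1.0])  # unproject to a ray (z=1)
    point_old = ray * depth               # 3D point in the old camera frame
    point_new = R @ point_old + t         # express it in the new camera frame
    uvw = K_new @ point_new               # project into the new image plane
    return uvw[:2] / uvw[2]
```

Occlusions are the other catch: a corner-mounted virtual camera can see around obstacles the real center-mounted one never imaged, and no transform can conjure those pixels.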

So I don't think this is an impossible idea, but there's no way they're going to tackle such a thing unless they become thoroughly convinced that the existing sensor suite is truly a dead end. I have my criticisms of the camera set as stated up-thread, but even I wouldn't say it's unusable for FSD. And there's essentially zero evidence that the Tesla autopilot engineers see the camera placements as a major problem. (Strangely, they've never called me to discuss it. :))

Elon has even stated that the relatively straightforward HW4 upgrades have already created a somewhat incompatible segment of the fleet, such that it will lag in FSD improvement by 6 months or so. I was a little disappointed by this, as I would have thought that the improved computer and higher-res cameras would be relatively easy to integrate into the development plan - you can always emulate a slower computer and down-res the video - but he's telling us it's not that easy. So we probably have to forget about some magic equivalency transform that could accommodate even bigger changes, like camera placements and/or new transducers.
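To be fair, the down-res half really is mechanically trivial; a sketch (HW3's ~1280x960 camera resolution is widely reported, and the rest of the pipeline is hand-waved):

```python
import cv2

def downres_to_hw3(frame_hw4):
    """Downsample a higher-res HW4 frame to HW3's ~1280x960 so it can be
    mixed into an HW3-compatible training set. INTER_AREA averages source
    pixels, the usual choice when shrinking."""
    return cv2.resize(frame_hw4, (1280, 960), interpolation=cv2.INTER_AREA)
```

So if even that fleet split costs six months, the remaining differences (optics, color filtering, mounting) presumably matter more than raw pixel count.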

Now more than ever, with v12 Tesla is invested in the hardware choices made years ago. Furthermore, the relatively minor evolutionary changes seen in the latest HW4 refresh models provide a very strong indication that no major overhaul is in the works.
 
  • Like
Reactions: rlsd and GSP
Lots of people are misquoting and putting words in Ashok's mouth. Look again at what he posted on X:

“This end to end neural network approach will result in the safest, the most competent, the most comfortable, the most efficient, and overall, the best self-driving system ever produced. It’s going to be very hard to beat it with anything else!”

He said the E2E approach *will* be the best. He's not saying anything about Tesla or its current iteration of FSD: just that he thinks this approach will result in all of those things.

While their demo did start to accelerate at a red light, it also did a few great things we haven’t seen in FSD until the demo:

1. Navigated several roundabouts perfectly.
2. Stopped on one side of a green light because traffic was backed up on the other side. That is, it knew not to block the intersection (see the sketch after this list).
3. No hesitancy as far as when to commit to a turn.

There were others, but the point is the demo showed promise. And Ashok is speaking about the future of the e2e technique, not Tesla’s current iteration.
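Item 2 is worth dwelling on: the rule is trivial to state, yet earlier builds didn't follow it. An illustrative gate (not Tesla's actual logic):

```python
# "Don't block the box": a green light alone is not sufficient to proceed;
# the far side of the intersection must have room for the whole car.
def may_enter_intersection(light_is_green: bool,
                           exit_gap_m: float,
                           ego_length_m: float = 5.0) -> bool:
    return light_is_green and exit_gap_m >= ego_length_m

assert may_enter_intersection(True, exit_gap_m=12.0)
assert not may_enter_intersection(True, exit_gap_m=3.0)  # green, but backed up
```

The hard part, of course, is not the gate but reliably estimating that gap from vision, and the demo suggests v12 is starting to get it.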
 
Yes, it's technically possible, and I would say also inexpensive (in cost-per-unit terms) to improve the camera placements. I also think the cost-effective medium-def radar is sensible, and I believe the end-to-end approach could be a key factor in solving the much-discussed sensor fusion problem.
Only Tesla labels sensor fusion a hard problem. It was solved years ago, but it requires inputs of comparable fidelity, and Tesla's radar wasn't and isn't that.
However, I think the probability of Tesla making such a change is very small, and it recedes every month. The end-to-end approach is so heavily based on large fleet data that a disruptive step change in new vehicles becomes increasingly unlikely: it would basically split the fleet into two separate groups, with the vast bulk of available data useful only for the obsoleted cars, and an inadequately sparse amount of data crippling the potential of the newer and presumably better cars.

From this point of view, it's practically a non-starter for Tesla. It's asking them to throw away the very advantage, a huge one, that they have over every other player who might attempt the promising data-heavy E2E approach.
E2E and a modular NN architecture have similar data requirements. E2E is just an architecture with one large network, which I doubt Tesla is doing, btw. I think they mean "all neural networks", but who knows.
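For anyone unfamiliar with the distinction being drawn, a schematic contrast in PyTorch-flavored pseudocode (purely illustrative; neither is Tesla's actual stack):

```python
import torch.nn as nn

# "Modular": separately trained networks joined by hand-defined interfaces
# (detected objects, lanes, a planned trajectory).
class ModularStack(nn.Module):
    def __init__(self, perception, planner, controller):
        super().__init__()
        self.perception = perception
        self.planner = planner
        self.controller = controller

    def forward(self, cameras):
        scene = self.perception(cameras)    # explicit intermediate: objects/lanes
        trajectory = self.planner(scene)    # explicit intermediate: a path
        return self.controller(trajectory)  # steering/accel commands

# "End to end": one network from pixels to controls, trained as a whole;
# any intermediate representation is learned rather than hand-defined.
class EndToEnd(nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, cameras):
        return self.net(cameras)
```

Either way, the training signal comes from the same fleet data, which is the point about similar data requirements.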

Now more than ever, with v12 Tesla is invested in the hardware choices made years ago. Furthermore, the relatively minor evolutionary changes seen in the latest HW4 refresh models provide a very strong indication that no major overhaul is in the works.
I agree. They put low unit cost over performance and reliability.
 
He said the E2E approach *will* be the best. He's not saying anything about Tesla or its current iteration of FSD: just that he thinks this approach will result in all of those things.
What basis does Ashok have for saying Tesla's implementation ("This end to end neural network approach") will be the best at some future point in time? It's a meaningless marketing statement that has no foundation in reality.

Shouldn't Tesla already be the best, given they claim a data advantage and that all you need is more data?

Show us the perf data? No, they won't.

This is the only thing we have, and it's been pretty flat for the last three years, but of course the NEXT VERSION WILL BLOW YOUR MIND, as usual.
 
“This is why I avoid this thread so much, because it's either fanboys or haters.”
To quote country folk singer Dan Hicks, "How can I miss you when you won't go away?"
Very little true (technical) discussion regarding the possibilities/strengths/weaknesses of current FSD architecture.
This is just one thread in an entire AI forum. And if someone hasn’t already created one that meets your expectations, you’re welcome to start another.
 
I definitely agree with your overall point, but it's been a source of confusion that the present camera suite actually does have "360° coverage". The problem is that this 360° is made up of viewing-angle components that are too easily obstructed in common situations:
  • View of cross-traffic to the left and right is too easily obstructed by roadside objects or other vehicles pulling up next to ego.
    • Because the side-facing cameras are too far back.
  • View of oncoming traffic is too easily blocked by stopped cars facing ego that are waiting to make their own left turns.
    • Because all of the forward-facing cameras are clustered at the center of the windshield.
  • View of close front obstructions is inadequate. Mostly an issue for parking, but also for best recognition of road debris.
    • Because all forward-facing cameras are high on the windshield.
  • View of rear cross-traffic is inadequate. Mostly an issue when backing out of a parking space.
All of the above could have been solved (or at least very substantially improved) with a different, more corner-weighted placement of approximately the same number of cameras. Inexpensive camera cleaning measures are another related topic.

I don't see all this as a showstopper, but it's unfortunate. And the very nature of v12, training on massive fleet data, is a big inhibitor against making substantial changes.
Of all the problems with FSD, camera location is the most serious. No amount of software improvement with V12 can adequately address obstructed camera views.
 
What basis does Ashok have for saying Tesla's implementation ("This end to end neural network approach") will be the best at some future point in time? It's a meaningless marketing statement that has no foundation in reality.

It's an opinion. People express opinions all the time. Why do you interpret it as if he is stating fact, and why does it get you so upset? You sound like you have some sort of agenda...
 
  • Like
Reactions: sleepydoc