Make your robotaxi predictions for the 8/8 reveal

So Elon says that Tesla will reveal a dedicated robotaxi vehicle on 8/8. What do you think we will see? Will it look like this concept art or something else?

[concept art image]


I will say that while this concept drawing looks super cool, I am a bit skeptical that it is practical as a robotaxi. It looks to have only 2 seats, which would be fine for 1-2 people who need a ride but would not work for groups of more than 2. I feel like that would limit the robotaxi's value for a lot of people. Also, it would likely need a steering wheel and pedals for regulatory reasons, even if Tesla did achieve eyes-off capability.

So I think this is concept art for a hypothetical cheap 2-seater Tesla, not a robotaxi.

Could the robotaxi look more like this concept art but smaller? It could look a bit more like, say, the Zoox vehicle or the Cruise Origin, with a more futuristic, box-like shape IMO, and seat 5-6 people.

[robotaxi concept image]


Or maybe the robotaxi will look more like the "model 2" concept:

[Tesla "Model 2" concept image]



Other questions:
- Will the robotaxis be available for individuals to own as personal cars, or will they strictly be owned by Tesla and only used in a ride-hailing network?
- What will the cost be?
- Will it have upgraded hardware? Radar? Lidar? Additional compute?
- Will Elon reveal any details on how the ride-hailing network will work?

Thoughts? Let the fun speculation begin!

 
What I predict is that whatever they announce, they will keep it in-house for themselves, further screwing over all the FSD owners who were promised they could use their cars in the robotaxi fleet and get paid for it.

That's for sure, because screwing over FSD customers is the new game they love playing.
 
Yeah, and it would be a huge unforced error IMO, because there is no reason to go all-in on Level 5 at this time. Tesla is making progress on "FSD", and "FSD" could be a compelling L2 system that beats many of the other L2 systems out there. They should focus on improving the build quality of the vehicles, upping the luxury of the interiors, and producing new, more affordable and exciting models. Basically, just make a great EV! Then, when "FSD" actually is capable of eyes-off, roll it out. Tesla could focus on an eyes-off highway system and likely deliver that before anyone else. But if Tesla neglects quality and just tries to rush to Level 5, they will end up with neither.
LOL. Elon went all in on L5 in 2019 on Autonomy Day.

Are you aware that most analysts value their auto business at about $60-80 per share? Some add $30 for the energy business, which I think is a bit on the high side. The rest is FSD. Elon did that to max out his now-voided comp program, and now there is no walking that back.

After the -35% YTD, the stock closed at what, 160-ish? Unless the FSD robotaxi + Optimus story delivers, that's about 50% more downside. Forward P/E of 58. For an auto company. The sector average is 7-8.
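
For what it's worth, here is the back-of-the-envelope math behind that claim. This is just my own arithmetic using the per-share figures quoted above (not official estimates), and it lands in the same ballpark as the ~50% figure:

```python
# Rough sum-of-the-parts check using the per-share figures quoted in the post.
auto_low, auto_high = 60, 80   # analyst value of the auto business, $/share
energy = 30                    # energy business, $/share (arguably generous)
price = 160                    # roughly where the stock closed

# Whatever the market pays above auto + energy is the FSD/robotaxi/Optimus premium.
premium_low = price - (auto_high + energy)    # 160 - 110 = 50
premium_high = price - (auto_low + energy)    # 160 - 90  = 70

# If that premium went to zero, the implied downside is roughly 30-45%.
downside_low = premium_low / price            # ~31%
downside_high = premium_high / price          # ~44%
print(f"FSD/Optimus premium baked in: ${premium_low}-{premium_high} per share")
print(f"Implied downside if it goes to zero: {downside_low:.0%}-{downside_high:.0%}")
```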
 
So if a Waymo fails or stalls 1 time in 10,000 around certain types of construction, your conclusion is that HD mapping doesn't contribute to reliability? I don't get it.
No, he's saying you can't get to the end goal on mapping alone. As good as any map gets, the car needs to be able to see what's actually there. It has to defer to its better judgement so that it does not drive into a sinkhole that wasn't there 5 seconds ago.
 
No, he's saying you can't get to the end goal on mapping alone. As good as any map gets, the car needs to be able to see what's actually there. It has to defer to its better judgement so that it does not drive into a sinkhole that wasn't there 5 seconds ago.
Are you saying Waymos are not much safer than the average human driver?
I think we just have to accept that AI is going to make totally different mistakes than humans make. That doesn't mean it can't exceed human performance.
 
No, he's saying you can't get to the end goal on mapping alone. As good as any map gets, the car needs to be able to see what's actually there. It has to defer to its better judgement so that it does not drive into a sinkhole that wasn't there 5 seconds ago.
Mapping is like an additional sensor that adds redundancy and safety. In order to get to the end goal, you need reliability guarantees, which ML can't give at present. Hence expensive measurement equipment (lidar), premapping, 360° multimodal sensing, etc.

How do you see an occluded sign with CV? You can't. Using a prior (maps), you know that a bus is blocking a stop sign or whatever.
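
To make the "prior" idea concrete, here is a toy sketch (purely illustrative, nothing from Waymo's or Tesla's actual stacks, names and threshold are made up) of how a map prior and live perception back each other up: the map fills in a sign the camera can't see, while live perception still handles anything the map doesn't know about.

```python
# Toy illustration only -- not any company's real code.
def treat_as_stop(camera_conf: float, map_has_sign: bool,
                  threshold: float = 0.6) -> bool:
    """Return True if the car should behave as if a stop sign is present."""
    if camera_conf >= threshold:
        return True    # perception sees the sign directly
    if map_has_sign:
        return True    # sign is mapped but occluded (e.g. by a bus): trust the prior
    return False       # neither source indicates a sign

# Occluded sign: camera barely sees it, but the HD map knows it is there.
print(treat_as_stop(camera_conf=0.1, map_has_sign=True))    # True
# Unmapped street with a clearly visible sign: vision alone handles it.
print(treat_as_stop(camera_conf=0.9, map_has_sign=False))   # True

# The inverse holds for hazards: a fresh obstacle seen by perception is avoided
# even if the map says the lane is clear -- the map is a prior, not a veto.
```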
 
In the meantime, Waymo and a few others are commercially deployed across quite a few cities, 15+ I believe at this point, and since 2017. ... I'm sure most agree that you need to make the product deployable first and save costs second.


Like I said...pros and cons of each approach. Tesla has millions of its cars / architecture running across North America. Waymo does not.

There is no reason to believe that Waymo's approach is any closer to "widely scalable deployment of inexpensive autonomous automobiles" than Tesla's is. They are taking different approaches, with differences in what can be deployed and in what manner.

So no, I reject your statement that "most agree you need to make [a fully autonomous] product deployable first..." I don't. Tesla doesn't. And to be clear, I agree that Tesla might not get there... nor might Waymo.

Again, we don't know what Waymo's actual plans are. Do they even want "robotaxis everywhere"? Or are they targeting "expensive, corporate-run fleet vehicles limited to mapped city driving"?
 
V12 misses stop signs that are not even occluded.

I've had FSD do a couple of low-speed rolls myself, but specifically for this one, the stop sign apparently doesn't apply to continuing right, which seems to be what the car was doing before it pulled over to the left. The car still almost failed to follow nav, but it seems like a bad setup tbh. Of course it still shouldn't have happened (and the driver shouldn't have let it happen, either), but this is the first time I've seen a stop sign like this.
 
Like I said...pros and cons of each approach. Tesla has millions of its cars / architecture running across North America. Waymo does not.

There is no reason to believe that Waymo's approach is any closer to "widely scalable deployment of inexpensive autonomous automobiles" than Tesla's is. They are taking different approaches, with differences in what can be deployed and in what manner.

So no, I reject your statement that "most agree you need to make [a fully autonomous] product deployable first..." I don't. Tesla doesn't. And to be clear, I agree that Tesla might not get there... nor might Waymo.

Again, we don't know what Waymo's actual plans are. Do they even want "robotaxis everywhere"? Or are they targeting "expensive, corporate-run fleet vehicles limited to mapped city driving"?
It’s simple. General autonomy is most likely 10+ years away. Someone will “get there” someday.

In the meantime, you try to get to market with a driverless product that is deployable, safe, appreciated, and profitable.

“From driverless nowhere to everywhere”, is that your path for Tesla? It sounds very unlikely to me. Walk me through it?

2025: xxx
2026: yyy

and so on
 
This is such a beautiful and simple example. Love it!

So how is Tesla FSD managing it? Will it miss that STOP sign and continue on its merry way?
Yes. As @diplomat33 points out, Tesla's perception stack seems a lot less reliable than the robotaxi companies' stacks.
I am guessing that means Tesla is MORE reliant on correct map data than others. The problem for Tesla is that they also require correct mapping of all STOP signs in the world at all times, absent a limited ODD.

Good luck with that.
 
Are you saying Waymos are not much safer than the average human driver?
I think we just have to accept that AI is going to make totally different mistakes than humans make. That doesn't mean it can't exceed human performance.
Never said anything about Waymo.

It's a good point about acceptance of what we as humans see as totally stupid errors. I can't see it happening myself until robot racecars (with G-force limiters) can beat humans on sight-unseen rally courses. That will mean they can read the road very, very well.

Then they will need to learn to assert their right of way in order to coexist with human drivers. Good luck with that. It's the end of human driving when they get to the point where they can coexist with one another.
Mapping is like an additional sensor that adds redundancy and safety. In order to get to the end goal, you need reliability guarantees, which ML can't give at present. Hence expensive measurement equipment (lidar), premapping, 360° multimodal sensing, etc.

How do you see an occluded sign with CV? You can't. Using a prior (maps), you know that a bus is blocking a stop sign or whatever.
Mapping is a red herring. Of course the car needs to know what is supposed to be there.

It's how it reacts to the unexpected that has yet to be solved.

It needs AI on the level of interpreting a Larsen cartoon to do the job.
 
Mapping is a red herring. Of course the car needs to know what is supposed to be there.

It's how it reacts to the unexpected that has yet to be solved.

It needs AI on the level of interpreting a Larsen cartoon to do the job.
If mapping adds 10% safety and 30-40% ride quality, it's probably a no-brainer to do it.

For robotaxis, HD maps probably help a lot with pickups and drop-offs too.

I agree with you 100% regarding today's AI reliability and capability. ML alone is not yet ready for safety-critical applications.

For example, it's a lot safer to measure a distance with a laser than to guess it from a 2D image using ML (though ML has gotten quite good at that, at least in optimal conditions).
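
As a rough illustration of that difference (my own toy numbers, not real sensor specs): a laser rangefinder measures the distance directly with roughly constant error, while a monocular ML estimate infers it from a 2D image with an error that tends to scale with distance and degrade with conditions.

```python
# Toy comparison -- illustrative numbers only, not real sensor specs.
import random

true_distance_m = 30.0

# Lidar-style measurement: direct time-of-flight, error on the order of centimetres.
lidar_estimate = true_distance_m + random.gauss(0, 0.03)

# Monocular-ML-style estimate: inferred from a single 2D image, so the error is
# roughly proportional to distance and gets worse in rain, glare or darkness.
relative_error = 0.05   # ~5% in good conditions (assumed for illustration)
mono_estimate = true_distance_m * (1 + random.gauss(0, relative_error))

print(f"lidar:        {lidar_estimate:6.2f} m")
print(f"mono ML est.: {mono_estimate:6.2f} m")
```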
 