
HW2.5 capabilities

I imagine it's Python.

People in deep learning are obsessed with Python. Now, that doesn't mean Tesla can't change it. As far as I can tell, Python doesn't have to be a strictly interpreted language.
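To illustrate that last point, here's a minimal sketch of my own (it assumes the third-party Numba library is installed, and nothing here implies this is what Tesla actually runs): Python functions can be JIT-compiled to native machine code, so shipping Python does not have to mean interpreting bytecode at runtime.

```python
import numpy as np
from numba import njit

@njit  # compiled to machine code on first call; later calls bypass the interpreter
def smooth(values, alpha):
    # exponential moving average over an array of sensor readings
    out = np.empty_like(values)
    state = values[0]
    out[0] = state
    for i in range(1, values.size):
        state = alpha * values[i] + (1.0 - alpha) * state
        out[i] = state
    return out

print(smooth(np.array([1.0, 2.0, 4.0, 8.0]), 0.5))  # [1.  1.5  2.75  5.375]
```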

What I find interesting is that Tesla left a lot of this stuff readable; there doesn't seem to be any attempt at obscuring it, even though there was already a history of individuals who had gotten root access.

So I have a feeling it was done on purpose, and the engineers at Tesla knew this kind of conversation would happen. I imagine one of them is having fun reading these comments, especially when jimmy_d called them interns. :)

Tesla did use Linux, and the license terms do state they should publish all the source code, but they never did. There was a go-round about that here a year or so back. It turns out the #2 person in the Linux world is a member of this forum and a Tesla owner. Basically, the Linux organization isn't interested in pressing the point, but technically the source code for at least some of the systems should be open source. Some systems are all proprietary code, and there would be no obligation for Tesla to publish those.

Maybe the original programmers thought most of the code was going to be published someday anyway, so why do anything to obscure it?

Lanes are like 8 ft wide anyway, right? So it would work wonders for the stated purpose of detecting when other cars encroach on you.

8 ft would be insanely narrow. A car like the Model S/X is already 7 ft wide and semi trailers are 8 1/2 feet.

In the US, the following standards are set forth by the Federal Highway Administration:

Local roads: 9-12 ft
Arterial roads: 11-12 ft
Freeways: 12 ft

In some places arterials are as narrow as 10 ft. I remember when my sister was going down to Houston for training courses, she mentioned an arterial where the city wanted to add a lane. Instead of actually adding 12 feet to each side, they widened it only a little bit and repainted the lines for 10 ft lanes. But that was an anomaly.
 
Alright, this is the longest of all long shots, but hey: does anyone know somebody with an actual Model 3?

Wouldn't it be interesting if that person could do an itsy-bitsy, super fast little panel disassembly? You know, that small plastic cover at the end of the dashboard (passenger side) should be removable like in the S and X with just a few clips.

[Attachment: Dash cover.JPG]


Remember when @bjornb gave us the very first glimpse of the HW2 ECU this way? Maybe we could get a first peek at the water-cooled sandwich computer for the Model 3.
 
From what I understand, lidar performs even worse than cameras in poor weather because when a laser pulse hits a raindrop, snowflake, or fog, it refracts. Low light is an interesting one. I've seen some examples that suggest deep neural networks might be able to see in the dark better than humans: even if a human looking at the camera image can't pick out a vehicle, a deep neural network might be able to. Don't know about glare.
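A toy illustration of why that's plausible (my own sketch, not a real perception stack): a "black-looking" image can still carry statistically recoverable signal. Below, a faint object sits only a couple of counts above the noise floor; averaging over its area pulls it out, even though no single pixel would look bright to a human on an 8-bit display.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(2.0, 1.0, size=(64, 64))   # sensor noise floor, mean ~2 counts
img[20:30, 20:30] += 1.5                    # faint object, +1.5 counts

object_mean = img[20:30, 20:30].mean()
background_mean = img[40:50, 40:50].mean()
print(f"object patch mean: {object_mean:.2f}, background mean: {background_mean:.2f}")
# The ~1.5-count difference is statistically obvious to a learned detector,
# yet both regions would render as nearly black to a human viewer.
```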

All these different technologies have different strengths. Lidar sees in complete darkness and needs very little processing to confidently identify and range obstacles. Radar can see in poor weather and through objects.

That's why some of us indeed believe that the best autonomous driving will come from fusing 360-degree lidar, 360-degree radar, and 360-degree vision.
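To make "fusion" concrete, here's a minimal sketch (my own illustration with made-up numbers, not any vendor's algorithm) of inverse-variance weighting of independent range estimates: less noisy sensors get more weight, and the fused estimate is tighter than any single sensor's.

```python
measurements = {          # (range_m, std_dev_m) -- made-up numbers
    "lidar":  (42.3, 0.05),
    "radar":  (42.6, 0.40),
    "camera": (41.8, 1.50),
}

num = sum(r / s**2 for r, s in measurements.values())
den = sum(1 / s**2 for r, s in measurements.values())
fused_range = num / den          # weighted toward the most precise sensor
fused_std = (1 / den) ** 0.5     # tighter than any individual sensor
print(f"fused range: {fused_range:.2f} m +/- {fused_std:.3f} m")
```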
 
All these different technologies have different strengths. Lidar sees in complete darkness and needs very little processing to confidently identify and range obstacles. Radar can see in poor weather and through objects.

That's why some of us indeed believe that the best autonomous driving will come from fusing 360-degree lidar, 360-degree radar, and 360-degree vision.

We don't live in a world of "best". We live in a world of good enough and cheap enough.

There's no question that I can build a better solution to most anything by spending 10x more money - but the cheapest solution that truly solves the problem is the one that will take over the market.
 
We don't live in a world of "best". We live in a world of good enough and cheap enough.

There's no question that I can build a better solution to most anything by spending 10x more money - but the cheapest solution that truly solves the problem is the one that will take over the market.

Well, that's a completely different question, of course. There will probably be room for everything, ranging from cheap to super-expensive, as with every product.

I agree that with AP2, Tesla is aiming to do FSD on the cheap. I have made this point before. Whether Tesla ends up with this suite once it actually has shipping FSD software is another thing. I would not be surprised to see Tesla adding more sensors down the road...

As for which world we live in: with a premium car, I do expect more than cheap enough. Shipping mostly/only cameras at this stage is OK. Whether it remains an OK suite for a premium car a couple of years down the road seems less likely.
 
It's not only about cheap but also about what's possible. Lidar isn't commercially feasible right now, or in the next couple of years, for mass-market consumer vehicles. Tesla wants to be the first and the best. We will see if they achieve either, but they aren't pussyfooting around now. Thank God.

360-degree radar would be feasible right now, though. That is IMO a very unfortunate omission at this stage. It would have made an already great 2016 suite really formidable, even without lidar.

As for this conversation, it is also IMO about what the future holds. We can surely say lidar is not feasible yet beyond the super high end, fine, but that's different from saying lidar is not needed even in the future (which is what Tesla is saying)...
 
That's why some of us indeed believe that the best autonomous driving will come from fusing 360-degree lidar, 360-degree radar, and 360-degree vision.

Then you have people like me who strongly believe you also have to have car-to-car communication, along with road-to-car communication.

You do that to really maximize the benefit of autonomous driving: you can assemble a train of cars with switching of the leader, and react more quickly to things that happen ahead.
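A hypothetical sketch of that idea (no real V2V standard is implied; the class and method names are mine): cars in a platoon share state, the next car takes over when the leader exits, and a hazard seen by the leader propagates to the whole train at once instead of rippling back one set of brake lights at a time.

```python
from dataclasses import dataclass, field

@dataclass
class PlatoonCar:
    car_id: str
    braking: bool = False

@dataclass
class Platoon:
    cars: list = field(default_factory=list)  # index 0 is the current leader

    def leader(self):
        return self.cars[0]

    def leader_exits(self):
        self.cars.pop(0)                       # next car becomes leader

    def broadcast_hazard(self):
        for car in self.cars:                  # every car reacts immediately
            car.braking = True

train = Platoon([PlatoonCar("A"), PlatoonCar("B"), PlatoonCar("C")])
train.broadcast_hazard()
train.leader_exits()
print(train.leader().car_id, [c.braking for c in train.cars])  # B [True, True]
```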

I don't think any of this needs to be an argument, though. Sure, it appears like Tesla is anti-lidar, but I don't think that's going to last much longer. The technology on all these fronts is advancing at a rapid pace. Plus, no one moves as fast as the guy with egg on his face, and Elon Musk is that guy. So I expect to see him reining in the expectations pretty soon (3-6 months) for AP2/AP2.5.

Will they immediately switch to lidar for HW3? I can't say. But I would be extremely surprised if it remained vision-only on three sides of the car.

HW2.5 -> different radar, and 1 extra SOM (2 SOMs + 1 dedicated GPU)
HW3 -> corner radars with an improved Nvidia CPU/GPU
HW4 -> car-to-car communication
HW5 -> lidar

Everything in due time.
 
As someone with a bit of experience in video: motion blur doesn't depend directly on fps but rather on the shutter speed. You can shoot something at 60 fps but use a faster shutter speed to get less motion blur. In the industry, this relation between frame rate and shutter speed is described as shutter angle. Here's an article on this, with examples (all shot at the same 24 fps frame rate):
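Putting rough numbers on that relation (my own worked example, not from the article above): shutter angle = 360 × shutter time × frame rate, so at a fixed frame rate a smaller angle means a faster shutter and less motion blur.

```python
def shutter_time(frame_rate_fps, shutter_angle_deg):
    # fraction of the frame interval during which the shutter is open
    return (shutter_angle_deg / 360.0) / frame_rate_fps

for angle in (360, 180, 45):
    t = shutter_time(24, angle)
    print(f"24 fps, {angle:3d} deg shutter -> exposure 1/{1/t:.0f} s")
# 360 deg -> 1/24 s, 180 deg -> 1/48 s, 45 deg -> 1/192 s
```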

Shutter speed related motion blur is entirely different from the rolling shutter effect.

Rolling shutter - Wikipedia

Essentially, with a rolling shutter, not all parts of the image are recorded at the same time.
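A toy simulation of that effect (illustrative only, with made-up timing units): each image row is read out slightly later than the one above it, so a vertical pole moving horizontally gets recorded as a slanted line.

```python
import numpy as np

rows, cols = 8, 24
readout_time_per_row = 1.0           # arbitrary time units between row readouts
pole_speed = 1.0                     # columns moved per time unit
frame = np.zeros((rows, cols), dtype=int)

for r in range(rows):
    t = r * readout_time_per_row     # row r is sampled at time t
    x = int(5 + pole_speed * t)      # where the pole is when that row is read
    frame[r, x] = 1

for row in frame:                    # the "vertical" pole comes out slanted
    print("".join("#" if v else "." for v in row))
```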
 
I tried to search for the exact model of cameras used in Hardware 2, but it is not easy info to find. Searching "what cameras does Tesla use", "Hardware 2 cameras", "camera model", etc. doesn't get you anywhere. Thankfully verygreen was kind enough to let me know it's the ON Semiconductor AR0132. (Hopefully now, if someone searches those same terms, this post will come up.)

Is that really the correct sensor part? It's showing up as discontinued.

https://onsemi.secure.force.com/PCN/?pn=AR0132AT

Update: It appears like that's only for a variant of the chip that Tesla doesn't use.
 
Hmm... I am still learning about the basics of camera technology. (To understand Tesla holistically you need to learn about a lot of things — accounting, economics, disruption theory, battery packs and electric motors, deep learning, cameras...) What is the maximum shutter speed of the AR0132AT sensors? Does that question make sense?
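The question does make sense, though for a sensor like this "shutter speed" usually means exposure (integration) time. As rough orientation (the numbers below are placeholders I made up for illustration, NOT AR0132AT datasheet specs), an electronic rolling shutter typically exposes in multiples of the row readout time, so the shortest exposure is roughly one row time and the longest is about one frame time:

```python
pixel_clock_hz = 74_250_000     # hypothetical pixel clock
line_length_pixels = 1650       # hypothetical line length incl. blanking
rows_per_frame = 990            # hypothetical frame length incl. blanking

row_time_s = line_length_pixels / pixel_clock_hz
frame_time_s = row_time_s * rows_per_frame
print(f"shortest exposure ~ 1/{1/row_time_s:.0f} s (one row)")
print(f"longest exposure  ~ 1/{1/frame_time_s:.0f} s (one frame)")
```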
 
Hmm... I am still learning about the basics of camera technology. (To understand Tesla holistically you need to learn about a lot of things — accounting, economics, disruption theory, battery packs and electric motors, deep learning, cameras...) What is the maximum shutter speed of the AR0132AT sensors? Does that question make sense?

Please create a camera thread.
 
All these different technologies have different strengths. Lidar sees in complete darkness and needs very little processing to confidently identify and range obstacles. Radar can see in poor weather and through objects.

That's why some of us indeed believe that the best autonomous driving will come from fusing 360-degree lidar, 360-degree radar, and 360-degree vision.

You can keep piling on sensors. Add thermal imaging. Add microphones. The question is: how many people will die because you waited three years for your sensor suite to be cheap enough to include in a $35,000 car? If Tesla’s Hardware 2 suite enables full self-driving at 10x average human safety, isn’t that good enough to deploy?
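Back-of-envelope arithmetic for that argument (all inputs are rough assumptions of mine, not measurements, and it assumes full deployment): roughly 37,000 people die on US roads per year, so a system 10x safer than the average human driver would avoid about 90% of those deaths for each year it is on the road.

```python
us_road_deaths_per_year = 37_000   # rough recent US figure (assumption)
safety_multiple = 10               # "10x average human safety"
years_delayed = 3

deaths_with_system = us_road_deaths_per_year / safety_multiple
avoided_per_year = us_road_deaths_per_year - deaths_with_system
print(f"lives not saved by waiting {years_delayed} years "
      f"(if fully deployed): ~{avoided_per_year * years_delayed:,.0f}")
```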

Tesla can always include more sensors in future versions of its cars. It is already upgrading its autonomy hardware roughly once a year. Once sensors like lidar become cheap enough, I expect Tesla will add them to the sensor suite if it determines that it’s worthwhile.

It makes sense to me why Tesla is focusing most of its sensors forward. Driving is not built around full 360-degree awareness: mostly, drivers look forward and occasionally check their mirrors.

While lidar can see in complete darkness, this is not necessary, since cars have headlights. There is a constant source of light for the cameras to use. If that weren't the case, humans would not be able to drive, since we can't see in the dark. The front-facing radar can also see in the dark, which adds a superhuman level of perception. Superhuman is good enough for the first generation of self-driving cars.

To summarize, the choice is not between, e.g., lidar and no lidar. It's between deploying self-driving hardware in 2016 or in 2019 (or later). You can always add lidar later. But you can't deploy in 2016 with lidar.