Companies that supply for Tesla Autopilot (and competing systems)


From browsing this thread (http://www.teslamotorsclub.com/showthread.php/18843-What-other-tech-stock-to-consider), I saw that Mobileye and Nvidia were two companies that came up repeatedly. I wanted to start a new thread because the other thread also had a lot of discussion of solar and life-sciences companies.

As many of us already know, Tesla's Autopilot uses several hardware components to gather information about the outside world: a video camera, a radar, and an array of ultrasonic sensors. The data provided by the sensors must be fed to some kind of central computing system, which makes decisions and operates the mechanical components of the car (primarily throttle, steering, and braking).
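
To picture how that pipeline fits together, here's a toy perceive-decide-actuate loop in Python. To be clear, everything here (the sensor fields, thresholds, and gains) is invented for illustration; Tesla's actual control logic is not public.

```python
# Minimal sketch of a perceive-decide-actuate loop, purely illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lead_car_distance_m: float   # from radar
    lane_offset_m: float         # from camera (lateral offset from lane center)
    obstacle_close: bool         # from the ultrasonic array

def decide(frame: SensorFrame):
    """Turn one fused sensor frame into actuator commands."""
    # Longitudinal control: slow down as the gap to the lead car shrinks.
    throttle = 0.0 if frame.obstacle_close else min(1.0, frame.lead_car_distance_m / 100.0)
    brake = 1.0 if frame.obstacle_close else 0.0
    # Lateral control: steer proportionally back toward the lane center.
    steering = -0.1 * frame.lane_offset_m
    return throttle, brake, steering

print(decide(SensorFrame(lead_car_distance_m=40.0, lane_offset_m=0.3, obstacle_close=False)))
```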

I am confused about a few things, though. I know the Model S uses Nvidia processors and Mobileye visual computing systems.

(1) Does Nvidia provide Tesla with the processors used in Autopilot?

(2) Nvidia has a "Drive PX" platform that can take inputs from multiple cameras. Mobileye has its EyeQ3 and newer systems, which appear to be cameras backed by the company's own ASICs and software algorithms. Are Drive PX and EyeQx complementary or competing systems?

(3) Elon recently tweeted about wanting to hire more software engineers for "full autonomous driving". What hardware from which companies do you think this would require? Which algorithms does Tesla buy, and which do they have to program themselves?

This is an area where I have zero (0) expertise. I think there's a lot of potential in self-driving technology, but as a person who drives an econobox car with a manual transmission and no Bluetooth or USB, I feel I need to know a lot more before considering an investment. A lot of the info I see on the web seems to be hype from investment "news content farms" and I don't trust it.
 
I think Bosch supplies the ultrasonics and the radar, but I could be wrong.

From watching the various videos from the Mobileye CEO, and from reading their last quarterly earnings report where they talked about Tesla, it seems Tesla is at the very least using Mobileye's cameras and processors along with their machine-learning algorithms for the various pattern-recognition tasks. What Tesla has to program themselves is the drive controls (how the car actually responds to the information from the camera), as well as the integration of the camera data with the other sensors, so that the drive controls get useful data.
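
For a sense of what that integration step might look like, here's a minimal sketch of pairing radar tracks with camera detections. The data shapes and the bearing threshold are assumptions for illustration, not anything Tesla or Mobileye has published.

```python
# Toy sensor fusion: confirm a radar return with a camera detection at a
# similar bearing before treating it as a lead vehicle. All values invented.

def fuse(radar_tracks, camera_detections, max_bearing_gap_deg=2.0):
    """Pair each radar track with a camera detection at a similar bearing."""
    fused = []
    for r in radar_tracks:                   # e.g. {"range_m": 35.0, "bearing_deg": 1.0}
        for c in camera_detections:          # e.g. {"label": "car", "bearing_deg": 1.4}
            if abs(r["bearing_deg"] - c["bearing_deg"]) < max_bearing_gap_deg:
                fused.append({"range_m": r["range_m"], "label": c["label"]})
                break
    return fused

print(fuse([{"range_m": 35.0, "bearing_deg": 1.0}],
           [{"label": "car", "bearing_deg": 1.4}]))
```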

I don't think they are using Nvidia for anything other than the infotainment system and instrument cluster. There were rumors that they might be testing Nvidia systems on some mules, but as far as I'm aware that was never confirmed. The Tegra chips are more for driving displays and general compute than for specialized Autopilot code. (Again, if you listen to the Mobileye CEO talk about this: graphics processors are not ideal for the task, since you won't get full utilization out of the GPUs, which is why Mobileye designed their own processors.)

Hope this helps!
 
Mobileye has their own processors optimized for the application.

Tesla uses Nvidia Tegra processors for the screens. What's confusing is that Nvidia also has a line of GPUs for desktop computers called Tesla.

Mobileye has a next-generation system that is rumored to be under test by Tesla. Some people think it's Autopilot 2.0.
 
Mobileye applies artificial intelligence to computer vision; they supply both the algorithms and the processor. They are the brains behind active safety and semi-autonomous driving, and they dominate that field.

Nvidia has a graphics chip for the Tesla screen, and now they have a board that includes a Mobileye EyeQ3 chip. Nvidia makes no artificial-intelligence software applied to computer vision.
 
I own short to intermediate term calls in both. NVDA has by far been my best performer for the year in the options space.

What you get with NVDA is a growth company diversified on the front edge of several next generation computing environments - gaming, virtual reality, autonomous driving - that is making money and paying a dividend. Downside support with upside potential.

MBLY seems to be the unquestioned leader in autonomous driving; at least, if you listen to their conference call, they'll tell you that. I think TSLA is in a space race with other manufacturers to integrate Mobileye's next-gen system, with 8 cameras, which MBLY claims enables fully autonomous driving.

NVDA is a safer bet, MBLY a more likely multi-bagger. YMMV.
 

Why all the diffidence towards MBLY? Check the MIT conference on computer vision and where Amnon Shashua stands in that field: second to none.

Your assertions are yours alone.
 

In fairness, Bosch and Google are also paving the way in this space. They mostly use LIDAR to make their solutions work. The issue, I think, is the difference between what is already being marketed and what is coming in the future. If you are looking at sales, MBLY is leading for sure; if you are looking at technical capability, I think the waters are less clear.

This isn't to discredit MBLY's accomplishments; I think Cattledog was just trying not to be biased toward them in his statement. And he clearly is a believer in MBLY, or he wouldn't have bought calls.
 
Thanks for the info everyone. That helps clarify whose products are responsible for what functions.

Mobileye seems very popular. What differentiates Mobileye's hardware and algorithms from competing camera solutions from other suppliers (like Bosch, Continental, etc.)? I understand the competitive advantages of Tesla fairly well, but I don't really understand what makes the Mobileye system so good. What sets Mobileye apart, and how difficult is it for competitors to achieve what Mobileye does?
 
The most comprehensive source of information is the camera, and cameras are cheap. Other sources of data, like radar, are neither as cheap nor as comprehensive. Comprehensive and cheap makes Mobileye the best solution. Check the video presentations by their lead scientist.
 
I would also add that it is their algorithms that set Mobileye apart. They aren't just reading static things like lines on the road; they use advanced pattern recognition to look at every pixel of every frame and map out what is happening in each image.

In one of the demo videos they show the car figuring out what is "road": they color the road green, and the system looks for things like cars, the curb, guardrails, grass, sidewalks, people, etc., applying free-space logic to work out what is decidedly *not* road.
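
For intuition, here's a toy version of that free-space labeling in Python: paint roughly road-gray pixels green. A real system uses learned models rather than a fixed color threshold; this is just to illustrate the idea of classifying every pixel.

```python
# Toy "free space" labeling: mark roughly road-gray pixels as road and paint
# them green, as in the demo videos described above. Thresholds are invented.
import numpy as np

def label_free_space(image_rgb: np.ndarray) -> np.ndarray:
    """Return a copy with roughly road-gray pixels painted green."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    grayish = (abs(r.astype(int) - g) < 20) & (abs(g.astype(int) - b) < 20)
    dark = (r > 40) & (r < 140)              # asphalt tends to be mid-dark gray
    out = image_rgb.copy()
    out[grayish & dark] = [0, 255, 0]        # paint candidate road pixels green
    return out

frame = np.full((4, 4, 3), 100, dtype=np.uint8)   # a tiny all-gray "frame"
print(label_free_space(frame)[0, 0])               # -> [  0 255   0]
```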

They also run code to distinguish the side of a car from the front and the rear.

They also run code to identify the "center of the lane" even in the absence of road markings (applying the free-space output above to help figure this out).
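
A hypothetical sketch of that lane-center step: once you have a free-space (road) mask, the center at each image row can be approximated as the midpoint of the drivable span.

```python
# Illustrative lane-center estimate from a binary road mask (1 = road pixel).
import numpy as np

def lane_center_per_row(road_mask: np.ndarray) -> list:
    """For each row, midpoint column of the road pixels (None if no road)."""
    centers = []
    for row in road_mask:
        cols = np.flatnonzero(row)
        centers.append(float(cols.mean()) if cols.size else None)
    return centers

mask = np.array([[0, 1, 1, 1, 0],
                 [0, 0, 1, 1, 1]])
print(lane_center_per_row(mask))   # -> [2.0, 3.0]
```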

They also scan for objects, identifying things such as different types of vehicles (including emergency vehicles), people, road signs, and stop lights. They then apply algorithms to those in order to identify their current action and predict their future action (such as a person walking into a street, the light turning yellow, or a car turning on its signal).
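
And a minimal sketch of that prediction step: extrapolate a tracked pedestrian's position with a constant-velocity model and flag a future lane conflict. Real systems use far richer motion models; every number here is invented.

```python
# Constant-velocity prediction of a tracked pedestrian, purely illustrative.

def will_enter_lane(positions, lane_x=0.0, horizon_s=2.0, dt=0.1):
    """positions: recent (x, y) track points, newest last; x=0 is the lane edge."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt          # estimated velocity
    steps = int(horizon_s / dt)
    for i in range(1, steps + 1):
        if x1 + vx * i * dt >= lane_x:                # crosses into the lane?
            return True
    return False

# Pedestrian 2 m from the lane edge, stepping 0.15 m toward it per 0.1 s:
print(will_enter_lane([(-2.0, 5.0), (-1.85, 5.0)]))   # -> True
```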

Essentially they are taking human logic for spatial recognition and programming it. Anyone could technically build this themselves, just as we have multiple search engines on the internet... Mobileye just seems to be ahead of the rest in real deployments. I think the other benefit of their product is that they don't need to map an entire area before driving into it.

There are two different ways of programming AI. You can either design it so it knows everything about what you want it to do, and it will work entirely within those constraints, or you can program it so it can learn about its environment on its own. Google has taken the former route and Mobileye the latter. This is why you can drive your Tesla onto any road with lane markings and turn on Autosteer. The Google car, on the other hand, will come to a stop at the first sign of confusion, because it isn't trained to handle an unknown situation; it can only do what it knows about.
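
A toy contrast of the two approaches (my own illustration, not either company's actual code): the rule-based agent only handles situations enumerated in advance, while the learned agent falls back on a generalizing model.

```python
# Rule-based vs. learned behavior, deliberately trivial.

RULES = {"clear_lane": "drive", "red_light": "stop"}

def rule_based(situation: str) -> str:
    # Unknown situation -> safe stop (the "first sign of confusion" behavior).
    return RULES.get(situation, "stop")

def learned(situation_features: float) -> str:
    # Stand-in for a trained classifier: generalizes from features it has
    # never seen exactly, instead of requiring an exact rule match.
    return "drive" if situation_features > 0.5 else "stop"

print(rule_based("faded_lane_markings"))   # -> stop (no matching rule)
print(learned(0.7))                        # -> drive (generalizes)
```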
 
Right on, chickenevil. The machine-learning algorithms are extremely complex; don't try this at home, at least not before getting a PhD in computer vision from MIT, for instance, and putting in a good 20 years of R&D, which is the case with Mobileye's CTO.
 

I wonder if this technology is directly applicable to things like flying drones and small land robots (think droids from Star Wars). The focus right now is on cars, but I think the potential goes far beyond cars.
 

You can use cameras and pattern recognition for just about anything. Think about humans (or other animals): we have sight (cameras), hearing (radar/ultrasonics), touch (road-condition detection), and taste/smell (which robots don't need for this task). These systems are being programmed to learn in ways similar to how the creatures we observe learn. Think of it like a dog: you train it to fetch, roll over, and even do more advanced things like getting a beer from the fridge or opening and closing doors. Once the AI knows how to learn, we train it in similar ways, and once we feel we have held its hand long enough, we can let it continue learning with less supervision.
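
To make that "hand-holding" phase concrete, here's a tiny supervised learner (a perceptron) in Python. The features and labels are made up; the point is just that after training on labeled examples, it classifies inputs it was never shown.

```python
# Minimal perceptron: learn a labels-from-features rule from examples.

def train(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = y - pred                      # 0 when the guess was right
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

# "Obstacle" (1) vs "clear" (0) from two made-up sensor features:
w, b = train([[0.9, 0.8], [0.1, 0.2], [0.8, 0.9], [0.2, 0.1]], [1, 0, 1, 0])
print(1 if w[0]*0.85 + w[1]*0.75 + b > 0 else 0)   # -> 1 (an unseen input)
```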

So you can use this logic for anything. In fact, in the same presentations where the CEO talks about cars, he shows some of the other things they are working on, like wearable tech that helps blind people "see". It is really quite fascinating. And when we can get enough processing power into a small enough package to run neural nets, it will take machine learning to the next level.
 
Some more Mobileye questions:

Mobileye was founded in Israel and its main R&D center is in Israel. Why does the company list its headquarters as Amsterdam, Netherlands?

Israel is in a geopolitically volatile region. What are the business risks to the company from internal unrest (the unresolved Palestinian issue)? What are the business risks from regional violence by state actors (Iran, Syria, other hostile nations) and quasi-state actors (Hezbollah, Islamic State, other terrorist organizations)?
 

This is not an area where I'm an expert, but from what I understand the different technologies have advantages and disadvantages. LIDAR is currently more expensive, but prices are dropping fast. LIDAR can "see" things a visual system might miss, and the algorithms needed to interpret it are somewhat simpler than those for a visual system. A vision-based system like Mobileye's has to process the environment much like the human brain does, and that is a fantastically difficult algorithm. The advantage is that the Mobileye system works in the same environment humans designed for themselves and understand: when the algorithm is right, it will be able to see anything a human can.

Additionally, a visual system is passive: it takes in what's already in the environment and doesn't have to fill the environment with its own signals to "see". LIDAR systems, by contrast, have to contend with signal interference from other LIDAR systems.

Systems that can see beyond the human visual spectrum also have advantages. Something that could see in the infrared would be a big help on rural roads at night; being able to detect deer and other animals in the dark would likely bring car-animal collisions down to near zero.

Ultimately we may see sensor suites on cars that have everything: LIDAR, visual spectrum camera, and infrared cameras.

One potential problem I was thinking about the other day involves something I often do on foot. If I'm waiting to cross the street where there is no signal, I usually wave cars to go ahead, and cross when I won't hold up traffic. It's more efficient for the cars to keep moving; if the car in front is waiting for a pedestrian and traffic is busy, it could cause an accident or at least back up traffic, and it only takes a few seconds out of my walk. Autonomous cars probably won't be able to handle a pedestrian signalling the car to go ahead. There would presumably have to be something that makes the car stop for an authority figure, like a cop directing traffic, but if it responded to anybody doing that, what's to stop a prankster from jumping into an intersection to mess with the autonomous cars?

Is TRW (which bought Lucas) still in this field?

Lucas Prince of Darkness Electronics
 

Actually, both Google and Mobileye have worked on algorithms for exactly that pedestrian-signalling issue. There is even a story on Google's blog about a car/pedestrian standoff where the car stopped for the person, the person stopped for the car, and it resolved itself the same way it would between two humans. They have likewise programmed in the ability to understand cyclists and their signals and actions.

Taking cost out of the equation: LIDAR can't see things like road markings or read signs, so you need highly accurate maps. The benefit is that LIDAR can see "through" objects.

Cameras, as you stated, are able to see what a human sees (and, depending on the filter or lens, can see better than a human), which is helpful since roads are built for humans. But cameras cannot see through stuff.
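
A hypothetical sketch of why that pushes LIDAR-centric stacks toward prior maps: the sensor gives you geometry but not paint or sign text, so the semantic details get looked up by position in a pre-built map. The map entries here are invented.

```python
# Illustrative HD-map lookup: LIDAR localization gives a position, and the
# map supplies the semantics that paint and signs would have provided.

HD_MAP = {
    # (rounded x, rounded y) -> annotations surveyed in advance
    (120, 45): {"speed_limit": 35, "lane_markings": "double_yellow"},
}

def semantics_at(x: float, y: float):
    """Return the surveyed annotations near (x, y), or None if unmapped."""
    return HD_MAP.get((round(x), round(y)), None)

print(semantics_at(120.2, 44.9))  # -> {'speed_limit': 35, 'lane_markings': 'double_yellow'}
print(semantics_at(300.0, 10.0))  # -> None (unmapped or stale area: the system is blind here)
```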
 
How does LIDAR deal with traffic lights?

The "highly accurate maps" requirement troubles me, because things can change unexpectedly. If the map hasn't been updated, the system could get confused. However, I do see the advantage of a system that can see through stuff.

I'm guessing that Tesla chose to use both Mobileye's camera system and radar in order to get the best possible combination of sensory input for its cars.
 

It appears Google also has a camera for detecting the state of traffic lights:

http://research.google.com/pubs/archive/37259.pdf
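
As I read that paper, they predict from a prior map where the light should appear in the image, crop that region, and classify its color. Here's a very rough sketch of just the color-classification step; the crop and thresholds are made up, and real pipelines are far more careful.

```python
# Classify a predicted traffic-light region by its dominant color channel.
import numpy as np

def classify_light(crop_rgb: np.ndarray) -> str:
    """Classify a cropped traffic-light region as red/green/yellow/unknown."""
    r, g, b = crop_rgb.reshape(-1, 3).mean(axis=0)    # average R, G, B
    if r > 1.5 * g and r > 1.5 * b:
        return "red"
    if g > 1.5 * r and g > 1.5 * b:
        return "green"
    if r > 1.2 * b and g > 1.2 * b:                   # red+green bright -> yellow
        return "yellow"
    return "unknown"

crop = np.zeros((8, 8, 3), dtype=float)
crop[..., 0] = 200.0                                  # strong red channel
print(classify_light(crop))                           # -> red
```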