
Juicy tidbits on Autopilot, deep learning, Tesla's advantage from Nov 3 Mobileye Call

This morning - as someone pointed out in another thread - Mobileye's executives held their Q3 earnings call with analysts. The full transcript is here on Seeking Alpha, and I've pasted the parts most relevant to Tesla below the link:

Mobileye's (MBLY) CEO Ziv Aviram on Q3 2015 Results - Earnings Call Transcript | Seeking Alpha

Insights from the call:

1 - Mobileye strongly implies that the next suite of hardware - 8 cameras on EyeQ3 - coming within months will be all that is needed for autonomous driving - and after that it's all software development.

"...the next phase is going to be eight cameras, so the infrastructure that they will introduce the vehicle. We will take the same infrastructure up to fully autonomous vehicle. Well, the hardware will not going to be changed. There is going to be only updates of software, once they introduce the full set of eight cameras around the vehicle.


2 - There is neural network "deep learning" going on in the current implementation of Autopilot - regardless of the need for additional cameras in the future:

"Recently, we launched our first deep learning functions on Tesla auto pilot feature. These capabilities include semantic free-space which uses every pixel in the scene to help us understand where are the curves, barriers, [indiscernible] drills, moving objects and anything that is not part of the driving path. Once we know the free-space, the big challenge is where to locate the vehicle in this free-space. We saw this with the holistic path prediction, which uses the context of the road to determine exactly where the car should go at all the time."

3 - There are two SEPARATE "deep learning" fleet-learning efforts going on - Tesla's own proprietary data set and Mobileye's data set. They are not one and the same, and Tesla is not sharing its own fleet learning with Mobileye.

This, to me, is an indication that Tesla's race to get Autopilot on the road is in fact a way to build a moat - a distinct competitive advantage - by building a superior self-driving experience earlier than other automakers can.
The discussion on the call implies that many automakers are relying on outside companies and Mobileye's own fleet data to implement their self-driving systems, but Tesla is doing its own fleet learning and not sharing the data.

"we’re not part of the self-learning that Tesla is doing. (bold emphasis mine) This is their project. We continue developing our system based on growing huge database that is part of a database that’s created with the help of our customers, so it’s much, much bigger information that we can collect and we build on a wider scope of scenes and scenarios and different geographical roads in order to improve the system. And we are supporting Tesla with whatever necessary information they need for their own study, but we are not part of it."

An analyst then asks about the deep learning Mobileye itself is doing with all its customers - its "database". The question and the answer were not entirely clear, but they seem to imply that Mobileye has a much larger data set, collected from ALL its OEM customers, to improve the system long term:

"Andrea James - Dougherty & CompanyThe second part of the question was about maintaining the ownership as you grow on more and more vehicles. You maintained the ownership of the database.
Ziv Aviram - Co-Founder, President and CEO
The database is collected through the validation process with our customers, so it’s analyzing together with our customers the different scenarios and improving the performances. It’s a constant process, it’s never end and actually the data is used only for our system because it was taken and recoded through our system. And only our system can use this kind of data. Even if you give it to somebody else, it’s useless for them. And this is a benefit or interest of all the industry to have as much as bigger database in order to have the most robust and the most cost-effective system that everybody can share."

4 - Deep learning requires a HUGE data set, which can only be acquired by having a fleet on the roads doing the learning. By the time other manufacturers actually launch real hands-off autopilot systems, Tesla will have raced ahead and will have an informational advantage in building a reliable self-driving car for real-world conditions.

Self-driving isn't yet a simple bolt-on that can be purchased from a company like Mobileye. There is real value in having an in-house software development team and a linked network of real-world customers whose driving improves the system.

"For example, we saw with a deep learning the free space. We saw with the deep learning the holistic path prediction but we still use a huge amount of code that we have done in the last 16 years and this is kind of additional layer of technology on top of what we have today. It’s not one algorithm that suddenly can do slightly better than free space and to the system. In order to have a system, it’s a lot of elements that you have to orchestrate in order to make it work in the level of robustness that we presented today to the market.So on one hand, it is a very encouraging that it’s difficult, because, one, it’s difficult, it’s difficult also to our competitors. I believe that we have one of the strongest team and maybe the biggest team in the world that works on this specific application today, but it’s difficult to us. I believe it’s also difficult to others. And I would not expect so quickly somebody would show results on the level that we’re presenting today."

5 - Mobileye's CEO very subtly criticized Elon Musk for referring to a single-camera EyeQ3 setup as "autopilot" rather than simply "Lane Keeping Assist".

"So, currently, we are very happy with the introduction of Tesla with what we call Lane Keeping Assist, which is the best in class Lane Keeping Assist today. And we hear a lot of great feedbacks on this system. And but we also hear some feedback of people that abuse the system and drove much higher speed than what was recommended by Tesla.And once you abuse the system is definitely there is a risk of failure and we might face some of the drivers that getting in trouble if that kind of system that not used according to what it was presented.
What we presented is Lane Keeping Assist system rather than auto pilot system. Auto pilot system is going to be presented next year, where is going to be 360 degrees coverage around the vehicle and is going to be multiple cameras with additional sensors. What we have today is just a mono camera looking forward. So it’s a very limited input that we have on the road.
But importance of this launch is, Tesla is willing to push the envelope faster and more aggressively than any other OEM, and definitely this is a very important in step forward to introduce the beginning of semi-autonomous application that will start being launched next year."
 
Thanks a bunch. Good stuff.

A few points.

1. They say that they launched their "first deep network functions" in Tesla's AP implementation. However, Tesla isn't sharing the data with them. I'm trying to sort out exactly what this means. Does it mean that Tesla's not building their own deep networks - just building models with the functions provided by Mobileye? That's odd, if so.
2. Mobileye's holistic path detection sounds like a combination of hand coded algorithms with deep nets laid on top. This is used in AP, from Elon's previous statement. I'm assuming that the same thing applies here - Mobileye will have to improve their deep nets without the benefit of Tesla's data.
3. The 8-camera "final hardware" comment certainly makes a case for holding off on new Tesla purchases, if AP is something you plan to utilize. Or even if not.
 
1. They say that they launched their "first deep network functions" in Tesla's AP implementation. However, Tesla isn't sharing the data with them. I'm trying to sort out exactly what this means. Does it mean that Tesla's not building their own deep networks - just building models with the functions provided by Mobileye? That's odd, if so.
2. Mobileye's holistic path detection sounds like a combination of hand coded algorithms with deep nets laid on top. This is used in AP, from Elon's previous statement. I'm assuming that the same thing applies here - Mobileye will have to improve their deep nets without the benefit of Tesla's data.

In response to your #1 - I'm not sure - your guess is as good as mine. One thing that might be relevant: Musk is an investor in at least two artificial intelligence companies, DeepMind and Vicarious. Vicarious' business is to develop neural networks. So - could Vicarious' own technology somehow be involved with Autopilot? I think it's possible, but who knows. I don't think the transcript (which I did read in its entirety) disambiguates this either. I'm not a professional or even amateur expert in machine learning, so I have no idea whether you can combine Mobileye's deep networks with "your own" deep networks and end up with even better learning, or whether, as you suggested, Tesla is simply building a behavioral model using "off the shelf" networks from Mobileye. I don't think anyone from Mobileye or Tesla has publicly clarified this, but it's a good point/question you raised.

#2 - Sounds like you're right again, at least from what has been said so far. I would imagine, though, that Mobileye will soon have access to real-world customer data (not just testing/validation data) from other OEM partners, who might be perfectly willing to share with Mobileye. They're not all as ambitious as Elon Musk.
 
Concerning this "Recently, we launched our first deep learning functions on Tesla auto pilot feature. These capabilities include semantic free-space which uses every pixel in the scene to help us understand where are the curves, barriers, [indiscernible] drills, moving objects and anything that is not part of the driving path."

I don't see that this is currently implemented in Tesla.

My car doesn't seem to understand traffic islands or barriers. It only understands lane markings or follows another car.
 
3. The 8-camera "final hardware" comment certainly makes a case for holding off on new Tesla purchases, if AP is something you plan to utilize. Or even if not.

Totally agree. If I were rolling in it I'd leap in and enjoy the existing capabilities, but since autonomous driving is a priority for me, I won't get the MX until I know it has the required hardware.

I'm willing to wait for the full implementation with the new hardware just like those who waited for the MS autopilot roll out.
 
I doubt that the 8-camera setup is optimal. I think trifocal will eventually be installed in the rear as well. And why stop there - a trifocal setup for the sides makes sense too. Mobileye says 8 cameras is optimal because their v4 processor will support around 8 cameras :)

Mobileye's deep learning data is static: once installed in the production vehicle it does not change. Neural networks may tweak it a little, but it remains set for the life of the vehicle, whereas Tesla can push/pull updates as needed. This is a huge Tesla advantage.
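
To make the contrast concrete, here's a minimal sketch of what push/pull updates could look like (everything here - paths, URL, flow - is hypothetical, not Tesla's actual mechanism):

import hashlib
import urllib.request

MODEL_PATH = "/firmware/path_model.bin"            # hypothetical on-car weights file
UPDATE_URL = "https://example.com/models/latest"   # placeholder update server

def installed_version(path=MODEL_PATH):
    # Identify the installed model by a hash of its weights.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def ota_update(path=MODEL_PATH, url=UPDATE_URL):
    # A frozen, Mobileye-style deployment simply never runs this step.
    # An OTA-capable car can swap in better fleet-trained weights at will.
    new_weights = urllib.request.urlopen(url).read()
    with open(path, "wb") as f:
        f.write(new_weights)

The static case is just the same car with ota_update() never called: whatever shipped from the factory is what you drive with forever.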

As to the Lane Keeping feature, Tesla's AP uses ultrasonic sensors and radar in addition to the Mobileye camera. The 'Autopilot' that Mobileye is talking about is not the same, since Mobileye's technology relies on camera input. Based on the criticism from the CEO, I'm beginning to think Mobileye's other customers are unhappy that Tesla is ahead and Mobileye has nothing to offer them yet.
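
As a toy illustration of why the extra sensors matter (thresholds and logic invented for the example, nothing like the real control code):

# Combine camera, radar, and ultrasonics into one lane-keeping decision.
def lane_keep_decision(lane_offset_m, radar_gap_m, ultrasonic_min_m):
    steer = -0.5 * lane_offset_m      # camera: steer back toward lane center
    if ultrasonic_min_m < 0.5:        # ultrasonics: something right alongside
        steer = 0.0                   # don't steer into it
    brake = radar_gap_m < 15.0        # radar: lead car too close, slow down
    return steer, brake

print(lane_keep_decision(lane_offset_m=0.3, radar_gap_m=40.0, ultrasonic_min_m=2.0))

A camera-only system has to make the same call with only the first input.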
 
1 - Mobileye strongly implies that the next suite of hardware - 8 cameras on EyeQ3 - coming within months will be all that is needed for autonomous driving - and after that it's all software development.

Ha. Autonomous driving is a computer science problem. All I need to play in the NBA is a 48" vertical jump.

What he is really arguing here is that autonomous driving doesn't require Google's lidar approach. His risk is that lidar-on-a-chip becomes available at a low price. What Google is up to in private is unknown.


I doubt that the 8-camera setup is optimal.

Probably means eight camera channels coming from fewer than eight physical devices. They may even want to handle something like forward-looking infrared as a separate channel.
 
Of course autonomous driving is a software problem, not a hardware one. A human with one eye can drive a car safely (of course it is better to have two eyes, but one is enough to legally drive a car).

So basically all the sensing you need is one camera on a rotating base. The rest is up to software.
 
In response to your #1 - I'm not sure - your guess is as good as mine. One thing that might be relevant: Musk is an investor in at least two artificial intelligence companies, DeepMind and Vicarious. Vicarious' business is to develop neural networks. So - could Vicarious' own technology somehow be involved with Autopilot? I think it's possible, but who knows. I don't think the transcript (which I did read in its entirety) disambiguates this either. I'm not a professional or even amateur expert in machine learning, so I have no idea whether you can combine Mobileye's deep networks with "your own" deep networks and end up with even better learning, or whether, as you suggested, Tesla is simply building a behavioral model using "off the shelf" networks from Mobileye. I don't think anyone from Mobileye or Tesla has publicly clarified this, but it's a good point/question you raised.

This is called "Ensemble Learning" and yes, it's a thing. You basically average out the predictions from one model with one or multiple other models, using custom weighting to vary the output. I would like some clarity on this, because if they're not building a model from their data, there's a lot less value added in the data they're consuming. It's very difficult to tweak models without having access to the data used in the model, since you use some of it for cross-validation testing. That said, if they're doing an ensemble method, they could be weighting their own model more and more as they phase out the Mobileye deep learning model.
 
Of course autonomous driving is a software problem, not a hardware one. A human with one eye can drive a car safely (of course it is better to have two eyes, but one is enough to legally drive a car).

So basically all the sensing you need is one camera on a rotating base. The rest is up to software.

The human eye does not compare to a camera lens either...
 
How long until we see a thread where someone bought a car that was advertised with "Full Auto Pilot", now finds out that the system is only "Lane Keeping Assist", and is demanding either compensation for their horrendous loss or a free retrofit with a full 8-camera autopilot system? :rolleyes:

Surely I jest, but let's hope!

Calisnow, thanks so much for compiling such a useful post with lots of great info.
 
How long until we see a thread where someone bought a car that was advertised with "Full Auto Pilot", now finds out that the system is only "Lane Keeping Assist", and is demanding either compensation for their horrendous loss or a free retrofit with a full 8-camera autopilot system? :rolleyes:

Surely I jest, but let's hope!

It's amazing how the leading-edge car I bought eight months ago - which has had a transformational software upgrade and will soon have two substantial hardware upgrades (LTE and Ludicrous) - will be completely obsolete six months from now after the new sensor suite comes out.