Welcome to Tesla Motors Club

Tesla, TSLA & the Investment World: the Perpetual Investors' Roundtable

The problem is not reading the sign. The problem is the rules, or even the road design. If the area around schools is designed to be safe for pedestrians, you don't need rules and signs like this.
If they can't change the road, go for the safe option and limit the speed to 25 at all times.
Currently our rules include reading signs. Changing the infrastructure would be easier, but that's not an option for Tesla, and it has a very high initial cost that would be difficult for democratically elected politicians to justify. So Tesla will somehow have to solve these scenarios. Maybe the neural network will be smart enough to just figure it out from the sun's position, what the buildings look like, how other cars behave, etc., but eventually it will learn to read, because reading is useful and neural networks will extract all the signal from the data given enough compute/training.
 
This livestream discussion about v12 has me wondering something. If they scrap all the C++ rule coding and just let the NN train on its own, won't you lose many of the traditional knobs that you've historically needed? For example, v12 has learned to stop completely at stop signs because of the curated data fed to it, but that behaviour used to come from manual coding. So, let's say the evil empire NHTSA wasn't picking on Tesla, and Tesla wanted to allow stop-sign stops to roll slightly (i.e. 1/2 MPH or 1 MPH). In v11, this was easily accomplished in code: change zero to 0.5 or 1.0 (slightly oversimplified). But in v12, there's no knob for that slight tweak. It's a black box. They'd have to curate a whole new set of stop-sign data to retrain? This question about "rule tweaks" obviously applies to many other examples.
 
They probably just set the curateDatasetForStupidNhtsaRequirementCompleteStop variable to false and press rebuild.

They have a large dataset, and from this they select a subset to use for the current training run. The variable would just change a small part of which data is included; everything else stays the same. It's just another piece of extra complexity they had to add manually, and it can easily be turned off.
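A minimal sketch of what such an offline curation switch might look like. To be clear, this is not Tesla's actual pipeline; the function, field names, and labels are all invented for illustration:

```python
# Hypothetical sketch of offline dataset curation. All names and fields
# are invented; nothing here is Tesla's real tooling.

def select_training_clips(clips, require_complete_stop=True):
    """Filter fleet clips for a training run.

    Each clip is a dict with hypothetical auto-generated labels,
    e.g. {"event": "stop_sign", "min_speed_mph": 0.0}.
    """
    selected = []
    for clip in clips:
        if clip["event"] == "stop_sign" and require_complete_stop:
            # Keep only drivers who actually came to a full stop.
            if clip["min_speed_mph"] > 0.0:
                continue
        selected.append(clip)
    return selected

clips = [
    {"event": "stop_sign", "min_speed_mph": 0.0},     # full stop
    {"event": "stop_sign", "min_speed_mph": 1.0},     # rolling stop
    {"event": "traffic_light", "min_speed_mph": 12.0},
]

strict = select_training_clips(clips, require_complete_stop=True)
relaxed = select_training_clips(clips, require_complete_stop=False)
print(len(strict), len(relaxed))  # strict drops the rolling-stop clip
```

The "knob" still exists, but it lives in the data-selection code rather than in the car: flipping the flag changes which examples the next training run sees, and the retrained network inherits the behaviour.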
 
The drive is no better than a Whole Mars Catalog video, but what is coming out of Elon's mouth is mind-blowing. V12 is being trained on 100% video. In fact, the only way to get V12 to stop fully at stop signs was to find the <1% of video data from the fleet where people actually stopped at stop signs. They did not hard-code anything into the software, such as traffic lights or roundabouts. Just show videos to the NN and it'll figure it out.

Seems like V12 is a full rewrite by AI rather than by humans.
Basically, no road rules have been programmed. It literally just drives based on what it has learned from watching good drivers drive. They have lots of traditional code in the data center to curate the training videos, finding good driving and excluding bad driving. I can't wait to try it on the funky roads I have around my house.

They did have one intervention in about 30 minutes of driving around Palo Alto. At a major intersection, stopped at a red light and intending to go straight, the left-turn arrows came on and the car tried to go straight even though the straight-ahead lights were still red.

This is going to make for some interesting localisation issues.

Imagine training with New York city drivers (LHD) then coming out to Dorset country lanes (RHD). Massively different driving styles, driving regulations, traffic behaviour, etc. Or crossing the channel from London to Paris. Or style changes between rush hour and non-rush-hour traffic.

Interesting.

(EDIT: I see some others have made the same point)
 
Then there are many other parts of the world with different typical local driving styles and different, distinctly un-human government agency rules. This will complicate things a bit, as the AI needs to learn how to drive in different countries. Maybe they can just feed in a variable for which jurisdiction the car is in, and it learns which of its modes to use in each particular situation. Or they can gather enough data for each country in each situation with different rules. This complexity will be moved offline, but it will still be complexity they need to deal with.

Wouldn't Tesla already use all the data they collect from all over the world, and simultaneously train the autopilot on how to drive in all these countries?

One example: us Scandiwegians often drive abroad on holiday, so it won't do if the car only knows my home country. I might even drive to England or Wales, where they drive on the other side of the road and post speed limits in miles per hour. My Tesla really needs to know both countries at the same time.
 

Actually, no it doesn't: The FSD stack remains the same (the binary executable that Elon spoke about), while the NN 'weights' are updated based on your new location (via GPS, of course). New weights downloaded in advance when you enter your destination into the Nav, switched when you cross the border. Easy-peasy, lemon-squeezy. ;)

Cheers!
 
I suspect Tesla can define geofenced regions, where each region has its own weights. Drive into England? Car downloads and starts running with England weights. In New York City proper? The weights curated to that area are installed.

I suspect there are a LOT of weights, but they are just floating point values, so downloading them via cellular connection wouldn’t take too long. The car could even possibly cache onboard any regions within 100 miles. When you enter a region, the new weights are pushed to the secondary processing unit. When ready, car control switches to that processor while everything is loaded into the primary unit. So the switch can be done while driving, with only a loss of let’s say a second of compute redundancy.
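A back-of-envelope sketch of the geofenced-weights idea, under loudly stated assumptions: the region names, file names, network size, and link speed are all invented, and Tesla has published none of these figures:

```python
# Hypothetical per-region weight lookup plus a rough download-time
# estimate. All sizes, names, and speeds are assumptions.

REGION_WEIGHT_SETS = {
    "UK": "weights_uk.bin",
    "FR": "weights_fr.bin",
    "US-NYC": "weights_us_nyc.bin",
}

def weights_for(region):
    # Fall back to a generic set for regions without curated weights.
    return REGION_WEIGHT_SETS.get(region, "weights_generic.bin")

# Download-time estimate: a 1-billion-parameter net stored in fp16 is
# about 2 GB; over an assumed 50 Mbit/s LTE link:
params = 1_000_000_000
size_bytes = params * 2          # fp16 = 2 bytes per weight
link_bps = 50_000_000            # 50 Mbit/s
download_s = size_bytes * 8 / link_bps
print(weights_for("UK"), round(download_s))  # → weights_uk.bin 320
```

At these assumed numbers the download takes a bit over five minutes, which is exactly why pre-fetching weights along the nav route, as suggested above, would matter.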

Edit: Ninja’d by a Canadian ;)
 
FWIW: someone earlier commented that this “simpler” approach to FSD reduced Tesla’s technical lead to something like 2 years. I believe Tesla’s lead will still be well over 2 years when FSD finally comes out. Although much less hand-coded logic is required for the actual car, a competitor adopting Tesla’s approach right now has to do the following:

1. Develop a set of integrated sensors/cameras to go in a car.
2. Develop a chip that has sufficient compute and sufficient efficiency to deploy to a fleet.
3. Develop a feedback system whereby cars in that fleet can send data back to the mothership.
4. Install a massive compute datacenter, or rent one for lots of money.
5. Develop utilities for data gathering, processing, and classifying to curate the dataset.
6. Deploy the hardware in all the cars.
7. Deploy the software to the cars to pull the desired data from the fleet.

I’m sure I’m missing a few steps, but who else is in the position to do this? Waymo could immediately switch to Tesla’s approach, but they’re not a car company. Cruise is part of GM, but GM moves slow and doing (1) and (2) in a way that’s cheap for consumers is no small task.

Rivian could go this route now, but the costs are currently prohibitive for them.

I still see this as a 4-5 year lead minimum. And that’s if someone else immediately abandons their approach and starts going with Tesla’s approach TODAY—and assuming that competitor has the money to do it.
 
The problem with this is that there are edge cases in each country that would be useful to learn from in other countries. Take turn-on-red, for example: will they have to discard all their intersection data from non-turn-on-red places?

Eventually how many neural nets will they end up training if they have one neural net for each specific set of rules?

It might be better to just let the GPS and C++ figure out which region the car is in, feed that into the neural network, and let it figure out which rules to follow; in training, include a few scenarios of test drivers driving both correctly and incorrectly, with the corresponding flags activated for the same situation.
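A minimal sketch of that conditioning idea, with everything invented for illustration: encode the jurisdiction as a one-hot vector and append it to the perception features, so one shared set of weights can learn region-specific behaviour (e.g. whether turn-on-red is allowed):

```python
# Sketch of conditioning a network on jurisdiction. The region list,
# feature layout, and function names are illustrative assumptions.

REGIONS = ["US", "UK", "DE"]

def region_one_hot(region):
    vec = [0.0] * len(REGIONS)
    vec[REGIONS.index(region)] = 1.0
    return vec

def model_input(camera_features, region):
    # GPS / map lookup decides the region outside the network; the
    # network only sees the flag and learns what it implies.
    return camera_features + region_one_hot(region)

x = model_input([0.3, 0.7], "UK")
print(x)  # [0.3, 0.7, 0.0, 1.0, 0.0]
```

The appeal of this design is that edge cases transfer: one network sees intersections from every country, with the region flag telling it which rulebook applies, instead of training a separate net per jurisdiction.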
 
I'm very glad Elon dropped in here. IMO, this was a lot more informative than the live stream drive.

I wasn't happy with Elon's answer about Hardware 4. It makes me think that FSD beta on HW4 will require an all new training set and thus it will be a long time before Tesla has enough data to get it running. I'm guessing they will prioritize HW3 for the next year or so because they are getting so close to something truly great.



I want my F - S - D!
(on hardware 4)
 
Easier, but caveats…
They need the data.
They need the training compute.
There is still much code that’s used to select and curate the training data.
There’s still many iterations to go.

Tesla has a two-year lead? (Guess.) And importantly, the most cars that are ready, the most affordable cars, and the most capacity to make more.
In addition they will need to get over the "must have LIDAR" syndrome (e.g. sunk cost). I suspect this is just as big a hurdle as any of the others, so I suspect the lead time is closer to five years than two.
 
Or get the NN to read the (legal) road traffic regulations of each jurisdiction (country / state / etc) and have a geodefined knowledge of the extent of each country. At some point the NN would start to figure out that the law has quite a heavy weight.
 
I'm not sure about pre-v12, but v12 will have to be less than generic in its driving rules. What I mean is, we all know driving 'properly' in Rome is different from driving 'properly' in Cincinnati, Delhi, Adelaide, etc. Not sure how they handle that, but maybe there's a 'master/mistress' set of rules and a secondary set of regional rules. I'm not an AI programmer (clearly!), but the NN will have to take this into account. In other words, some input data will be inapplicable in certain regions. They must have prescriptive (programmed) rules for this in pre-v12, like "In the UK? Drive on the LEFT side of the road", duh.
Given that cars are not normally transferred between countries, one way would be to have an NN "brain" for each country (assuming the EU counts as one country), probably located in that country (one in each gigafactory, plus outliers). Then the cars for that area could learn just the correct behaviour for that particular area.
 
It's very interesting to see how V12 is approaching FSD.

It sounds, at least from what Elon was saying, like the jump from V11 to V12 is similar to what Alphabet's DeepMind did with AlphaGo.

AlphaGo is the famous NN program that beat Lee Sedol at the game of Go. It went on to beat every other top-ranked Go player in the world, including Ke Jie, who was ranked world No. 1 at the time of Lee Sedol's match.

DM didn't stop there. It went on to develop another version called AlphaGo Zero. The main difference between Zero and the original AlphaGo was that Zero never got any training from the game strategies that humans developed over the centuries. Instead, it was given only the rules of Go and trained itself by playing against itself.

DM then pitted Zero against the original AlphaGo for 100 games. The result was 100-0, with Zero winning every single game.

Ke Jie, who had previously played AlphaGo, famously remarked after hearing about that result: "In the game of Go, human knowledge was a burden, not an asset."

Tesla basically went from teaching FSD how to do things to just letting FSD work things out on its own from the data input. This is truly something else. The idea that the car, or a robot, could one day roam the world without any previous knowledge, not even maps, is next level.
I wonder if the NN will be trained on, or observe, some real driving: Shanghai, Istanbul, or Karachi come to mind (among many other densely populated cities where cars are so valuable that no one dares have an accident).
 
I'm wondering about the driver input. It doesn't seem like the professional drivers Tesla is using would provide enough data, and a car driven by an NN trained by the bozos on the road around me seems unsafe. I suppose the bad-driver factor is smaller with Tesla owners, but still.
The professional drivers will give a baseline to start with; then add the rest (and some negative examples from bad drivers).
 
There will be some signs with text that the car will have to learn to read. With enough examples and enough training, the car should pick up basic driving-related literacy and basic math. For example:
View attachment 968349
This example is going to be very difficult for HW3. HW4 might be able to read the sign if going slow enough. HW5 will do it for sure.
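The attached sign isn't reproduced here, but as a hypothetical example of the kind of rule a text-reading net would have to extract from a conditional school-zone sign (the wording, limits, and time windows below are all invented):

```python
# Hypothetical parsed form of a sign like
# "SPEED LIMIT 20 SCHOOL DAYS 7-9 AM 2-4 PM". All values invented.
from datetime import time

school_zone = {
    "limit_mph": 20,
    "default_mph": 35,
    "windows": [(time(7, 0), time(9, 0)), (time(14, 0), time(16, 0))],
}

def applicable_limit(rule, now, is_school_day):
    # The reduced limit applies only on school days, inside a window.
    if is_school_day and any(a <= now <= b for a, b in rule["windows"]):
        return rule["limit_mph"]
    return rule["default_mph"]

print(applicable_limit(school_zone, time(8, 15), True))   # 20
print(applicable_limit(school_zone, time(8, 15), False))  # 35
print(applicable_limit(school_zone, time(12, 0), True))   # 35
```

Reading the text is only half the job: the car also has to combine it with the date, the clock, and the calendar, which is the "basic math" part of the problem.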
 
Except 25 mph would get you a ticket in Texas. 20 mph is the standard school zone speed.
 
It's a fool's errand if Tesla solves this first, because Tesla very much wants to license this, which is what legacy OEMs really want anyway. None of them actually cares to do it themselves, and only small players out of China want to steal Tesla's code, just to make their products relevant in a world that doesn't care about Chinese-made cars.

Tesla will beat all their competitors on cost and performance by an order of magnitude; they already do today. Tesla's hardware suite is far cheaper than what Mobileye/Nvidia offer.
 
More than 150 car models are now too big to fit in average car-parking spaces, according to analysis … While the size of the standard car-parking bay has remained static for decades, cars have been growing longer and wider in a phenomenon known as "autobesity". … There is growing debate about car size and road safety after two eight-year-old girls, Selena Lau and Nuria Sajjad, died when a Land Rover crashed through a school fence in south-west London in July.