
So… Highland is out…

I imagine this journalist isn’t going to be impressed with the direction of travel by Tesla with their indicators 🤔.

What's the solution to poor roundabout etiquette?

Driving quality is dropping pretty quickly in this country. Tailgaters, people who just block the fast lane, completely oblivious to the massive queue of cars behind them, people who don't indicate, people playing with their mobiles while they weave across the road, etc.

It's probably best that cars start driving themselves, as humans seem less and less capable of it as the years go by.
 

Not my picture, but demo UK Highlands have been spotted at the Hilton Park Superchargers (credit: FB / Martin Davies)
 
Feels like they're making their self-driving goal harder when they ship cars with all sorts of combinations of cameras, etc. They'd have to train against all those different combinations.

I've taken that to mean the software is amazing and will work with anything - Semi tractor, or any saloon / Cybertruck ... just plug it in.

You saying I'm wrong? 🤡
 
Well, to be fair, I think their software probably is amazing, but the limit is probably the amount of compute needed to train the neural networks.

If you think about it, if the camera quality (megapixels, contrast, etc.) or even the position of the cameras changes slightly, they likely need to re-train the models. Otherwise it'll be more likely to make the wrong decisions.

Pretty sure they've already said that this is a limit on how fast they can improve it, and why they are just focusing on HW3 at the moment. It's a weird situation that the newer cars are now less capable than the older cars. You'd assume at some point that would change, but Model S / X have had HW4 for a while and nothing's really been done about it yet. Probably doesn't really affect us outside of the US, but it's a big deal in 'merica I imagine.
 
if the camera quality (megapixels, contrast, etc.) or even the position of the cameras changes slightly, they likely need to re-train the models

I don't know about AI, but I am a software engineer. I'd be wanting to factor those things out ... have something decide on the surroundings (distance / position), make that independent of where the cameras are sited (so it will work with "anything" - that would include old model / new model with subtle changes, or adding a bumper cam maybe? 🤩 ), and then the "Shall I swerve?" bit is independent.

Whether the camera MP / position changes the way in which Distance / Position is calculated (such that it needs remodelling) I dunno - but my instinct would be to program around that.

I'm glad I'm not in AI ... well, other than asking ChatGPT to do my coding for me :)
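
Something like this is what I have in mind - a rough Python sketch, with every name and number made up by me (definitely not how Tesla actually does it): a calibration layer turns pixels from any camera into car-centred 3D points, and the "Shall I swerve?" decision only ever sees those points.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CameraCalibration:
    """Per-camera parameters, measured once per build (illustrative only)."""
    intrinsics: np.ndarray   # 3x3 matrix: focal lengths and optical centre
    extrinsics: np.ndarray   # 4x4 matrix: camera pose relative to the car body


def pixel_to_car_frame(u: float, v: float, depth_m: float,
                       calib: CameraCalibration) -> np.ndarray:
    """Map a pixel plus an estimated depth into the shared car-centred frame."""
    # Back-project the pixel through the intrinsics into a 3D ray,
    # then scale by the estimated depth.
    point_cam = np.linalg.inv(calib.intrinsics) @ np.array([u, v, 1.0]) * depth_m
    # Transform from this camera's frame into the car frame, so downstream
    # logic never needs to know which camera the observation came from.
    return (calib.extrinsics @ np.append(point_cam, 1.0))[:3]


def should_swerve(obstacles_car_frame: np.ndarray) -> bool:
    """The 'Shall I swerve?' bit: only ever sees car-frame points."""
    x, y, _ = obstacles_car_frame.T
    # Toy rule: anything within 3 m directly ahead of the bumper.
    return bool(np.any((x > 0) & (x < 3.0) & (np.abs(y) < 1.0)))
```

Swap the camera (or add a bumper cam) and, in theory, only the calibration constants change.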

I doubt the bumper cam would be used for FSD anyway?

Visibility pulling out from a junction maybe? (IDK)
 
I doubt the bumper cam would be used for FSD anyway? It's more to fill in the blind spot that's present during low-speed manoeuvres such as park assist and Summon. Don't think it'd provide much value above 20 mph.
I think, though, it's all supposed to be part of a single-stack approach, so it would need training for that to work. At that point, why not use it for FSD? The more vision the car has around it, the better really.
 
I'd be wanting to factor those things out ... have something decide on the surroundings (distance / position) ... and then the "Shall I swerve?" bit is independent.

I'm glad I'm not in AI ... well, other than asking ChatGPT to do my coding for me :)
Not a developer but in IT also. ChatGPT is awesome and scary at the same time. Hope my job will last until I get to retirement before our AI overlords fully take over.

I think Tesla said they had a massive amount of that kind of logic but are removing it as it’s not needed anymore. The Neural Networks can work this out on their own these days.

Think about this: if you train a car with one camera, it learns what a wheelie bin is. Then if another camera has different contrast, so the colours are different, it might all of a sudden not click that it's still a wheelie bin, as it's never seen one that looks like that through its first camera.

Or even the position of the cameras changing from car to car. Where the front of the car is, what the turning circle is, etc. - all of this is going to vary.
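
Not saying this is what Tesla do, but the standard ML trick for exactly this wheelie-bin problem is to jitter the training images so the network never gets too attached to one camera's colours or framing - something like this with torchvision (numbers picked arbitrarily by me):

```python
from torchvision import transforms

# Randomly vary colour/contrast and framing during training so the model
# learns features that survive differences between camera generations.
train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),  # mimics position/FOV shifts
    transforms.ToTensor(),
])
```

Each epoch the network sees a slightly different-looking bin, so a new camera's rendering of the same bin is less likely to fall outside what it has learned.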

The more data they can train on, the better it'll get, but that also means constantly increasing compute and power requirements. Interesting that Microsoft are now hiring to build nuclear power stations for their data centres, to try to keep up with the power requirements AI needs. Guessing it'll be the US where it's likely legal for a private company to have their own nuclear power stations 😉

This will come in handy enriching the uranium needed for Skynet to wipe us all out 😂
 
There will be an image processing step before the camera data reaches the neural networks.
This should mean the core neural networks are somewhat isolated from camera specifics.
Yeah, likely, but it's still going to vary a bit between different cameras. Maybe I'm wrong and it's not a big deal, but if this were true, why can HW4 cars with the new cameras not use the same FSD Beta that the HW3 cars can? I think Elon's already said they'd have to retrain for HW4 and it isn't a focus; they want to continue improving on HW3, where they have a much larger installed base at the moment.
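
For illustration, the kind of image-processing step described above might look roughly like this - an OpenCV sketch with a made-up target resolution, nothing official:

```python
import cv2
import numpy as np

CANONICAL_SIZE = (1280, 960)  # assumed target resolution, not Tesla's real one

def normalise_frame(raw: np.ndarray, camera_matrix: np.ndarray,
                    distortion: np.ndarray) -> np.ndarray:
    """Convert any camera's raw frame into one standard format."""
    undistorted = cv2.undistort(raw, camera_matrix, distortion)  # remove lens distortion
    resized = cv2.resize(undistorted, CANONICAL_SIZE)            # one fixed resolution
    # Stretch the intensity range so exposure/contrast statistics roughly
    # match between sensor generations.
    return cv2.normalize(resized, None, 0, 255, cv2.NORM_MINMAX)
```

Even so, a step like this can only paper over so much, which would fit with HW4 still needing its own retraining.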
 
Think about this: if you train a car with one camera, it learns what a wheelie bin is. Then if another camera has different contrast, so the colours are different, it might all of a sudden not click that it's still a wheelie bin, as it's never seen one that looks like that through its first camera.

Yup, I can see that, but I would consider it horrendous if I was involved with something where a change to hardware had a massive impact on the software (as in "To deliver we need X amount of processing time", even if no code changes)

I saw a demo of "You Only Look Once" (YOLO), which was image recognition where the aim was to recognise what the object was "in one look". The presenter had a laptop with him up on the dais, with a big screen behind him projecting the laptop camera showing his face - with a box around it, labelled "Boy". He swivelled the laptop round so the camera faced the audience, and instantly every person in the audience had a box and was labelled boy / girl. (Might have been some "maybe"s, I can't remember!) (I found a very old YouTube video about YOLO, which might be the same presenter; his talk is about the history of object recognition and the improvements in methods that have taken it from tens of seconds per image to real time. His object-recognition demo is towards the end and uses staged models rather than the live audience I saw previously ... but I expect it has come a long way since 2016, and I found the overall thing interesting.)

Not the same as what we are discussing, and it may be that he had trained it specifically on his laptop / camera, but the angle between him and the tiered seating of the audience, and him jiggling about holding the laptop loosely in his hand, would have needed a fair number of "extras" if it needed specific training.
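
For anyone who wants to try it, that demo is easy to reproduce these days with the open-source ultralytics package - a minimal sketch (the model file and webcam source are my assumptions, and stock COCO models label people as "person" rather than boy / girl):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained model, detects 80 common classes
results = model(0, stream=True)   # source 0 = the default webcam, as in the talk

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(label, float(box.conf))  # e.g. "person 0.93" for each face in view
```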

I have no idea how software is trained to recognise a wheelie bin, but it has to be able to do it from all angles - and presumably if the lid is open / stuffed a bit full - fallen over even? Seems to me that the definition of "wheelie bin" is pretty generic - maybe enough that "any camera" and "any camera position" will do? But I'm speaking from the comfort of my armchair of course ... hopefully someone here can put me straight.

Guessing it’ll be the US where it’s likely legal for a private company to have their own nuclear power stations

Yeah, I reckon the money is in the recycling of the "old batteries" :)
 
It could also be that, now HW3 in existing Teslas has gathered the necessary data for machine learning, they can use that data to refine the HW3 experience. They will next need to get a large number of HW4 vehicles on the road, also gathering data and images, before they can give that to the learning computers to advance that technology.
 
I think for the self-driving they've moved beyond trying to recognise objects, as the issue with that is the sheer quantity. It also struggles to know exactly where the edges of the object are on weirdly shaped items.

Now they are basically looking at the world and trying to work out, in a 3D space of boxes, whether there is part of an object in each box or it's open air. I think they called it an occupancy network.

Probably then, on top of that, they just use object detection for a few items like lane markings, traffic lights, humans and other living creatures, etc. Here it's kind of a backup: if the occupancy network misses something or isn't sure about a space, the car will still know a human is there. This approach means you probably don't have to teach it what a tree is, or other things in the world.
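
As a toy illustration of the occupancy idea (grid size, resolution and the function names are entirely made up by me):

```python
import numpy as np

VOXEL_M = 0.5                                 # each cell is 0.5 m cubed (assumed)
GRID = np.zeros((160, 160, 16), dtype=bool)   # ~80 m x 80 m x 8 m around the car
ORIGIN = np.array([80, 80, 0])                # car sits at the grid centre

def mark_occupied(point_m: np.ndarray) -> None:
    """Mark the voxel containing a detected surface point as occupied."""
    i, j, k = np.floor(point_m / VOXEL_M + ORIGIN).astype(int)
    GRID[i, j, k] = True

def path_is_clear(points_m: np.ndarray) -> bool:
    """Planner query: are all voxels along a proposed path free?"""
    idx = np.floor(points_m / VOXEL_M + ORIGIN).astype(int)
    return not GRID[idx[:, 0], idx[:, 1], idx[:, 2]].any()
```

The planner then only needs "is this box free?", not "what is this object?".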

I’m just part going off some videos I’ve seen and a bit of guessing also. I’m not an expert in this, it’s beyond me. Though they systems are getting so advanced, even people like mid will be able to train them easily soon as you won’t need specific knowledge.
 