Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

2017 Investor Roundtable: General Discussion

Status
Not open for further replies.
I don't think California is utopia. I'm served by two utilities, both investor-owned, though on one of my islands we're exploring moving to a municipal form, thus I follow power costs a bit. So why DOES San Diego have such high costs?
Because for some reason, the customers have to pay for the cost of closing down a nuclear power plant that had just been (badly) refurbished at our cost, and pay for the imported electricity it was supposed to have supplied but isn't any more.
 
6 billion was for worldwide regulatory approval. Nothing says that states or countries can't come sooner.

That tweet makes it look as if he's positive the Model 3 will be autonomous at launch, no? That can't just be my optimistic reading of it.
I think that's been the plan for a few years now. How autonomous, I guess, is the question. My back-of-the-napkin estimate is that they're well short of 6B by then, though. How many cars do they have for learning now anyway? Is it just the new ones since the last update, or previous ones too?
 
The same way you do it: by making a best-case judgement of where the lane is, looking for cues of where the roadway ends and dividing it up. Then just go slow and try not to hit other cars. In principle a computer can do this far better too; it has the benefit of being able to calculate a high-probability estimate of where the lanes are, and it has GPS. Safer than people.
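A toy sketch of that "find the road edges and divide it up" step (made-up numbers; a real system would fuse camera, radar and map data rather than trust two edge estimates):

```python
# Toy sketch: given estimated left/right road-edge positions (metres of
# lateral offset from the car), divide the roadway into equal lanes and
# pick the centre of the lane the car is currently in.

def lane_centers(left_edge, right_edge, n_lanes):
    """Split the roadway [left_edge, right_edge] into n_lanes equal lanes
    and return the lateral position of each lane's centre."""
    width = (right_edge - left_edge) / n_lanes
    return [left_edge + width * (i + 0.5) for i in range(n_lanes)]

def nearest_lane_center(left_edge, right_edge, n_lanes, car_offset=0.0):
    """Return the centre of the lane closest to the car's current offset."""
    centers = lane_centers(left_edge, right_edge, n_lanes)
    return min(centers, key=lambda c: abs(c - car_offset))

# Example: road edges at -5.4 m and +5.4 m, three lanes.
print(lane_centers(-5.4, 5.4, 3))   # three centres near -3.6, 0.0 and 3.6
print(nearest_lane_center(-5.4, 5.4, 3, car_offset=-3.0))   # left lane's centre
```

Of course the whole argument below is about how hard it is to get those edge estimates in the first place.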
Actually, in principle the computer can do this far *worse* than a human can. The human can take into account really obscure context and the computer has to be programmed to understand the context, to put it bluntly. And the computer *won't* be programmed to understand the context in obscure situations; it just won't happen.

I remember driving down a road where the only signal to the road location was the location of mailboxes. Any human could make that leap of logic, but you'd have to specifically program (or "train", to use the current silly jargon) a computer to do so.

That's in *principle*. In principle, a quality human driver will always be better than any of these computers.

In practice, however, most humans suck at driving. Suck horribly, really really horribly. And driver's licenses are handed out like candy. So the computers will be better than most humans.
 
In general, this replacement of wide-area grids with micro-grids is a utopian idea. I am puzzled why it is so popular.
Because high-tension power lines are widely hated. For whatever reason.

People don't seem to mind smaller lines so much. If each local area is *largely* independent, then the interconnections between the small grids can be much smaller and less obtrusive. There will still be interconnections.
 
Actually, in principle the computer can do this far *worse* than a human can. The human can take into account really obscure context and the computer has to be programmed to understand the context, to put it bluntly. And the computer *won't* be programmed to understand the context in obscure situations; it just won't happen.

I remember driving down a road where the only signal to the road location was the location of mailboxes. Any human could make that leap of logic, but you'd have to specifically program (or "train", to use the current silly jargon) a computer to do so.

That's in *principle*. In principle, a quality human driver will always be better than any of these computers.

In practice, however, most humans suck at driving. Suck horribly, really really horribly. And driver's licenses are handed out like candy. So the computers will be better than most humans.
Humans have some advantages, computers other advantages.

In your example, the computer would (in the not-too-distant future) have access to an accurate GPS position, compass bearing and up-to-date maps. With sufficient accuracy, it could navigate the road based on this alone. It would also have camera and radar (and/or lidar) data, and while it might not know what a mailbox is, it could use the mailboxes (and all other static objects close to the road) as terrain references for correlating the maps to the terrain. The computer could also have access to infrared imaging, meaning it could see things you do not.
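A toy sketch of that terrain-referencing idea (1-D and with invented numbers; real localization uses Kalman-style filters over many landmark types, and assumes each sensed landmark has already been matched to its map entry):

```python
# Toy sketch of "terrain referencing": refine a noisy GPS fix by matching
# roadside landmarks (mailboxes, poles, ...) the sensors see against a map.
# Positions are metres along the road, 1-D for simplicity.

def corrected_position(gps_fix, observations):
    """observations: list of (map_position, sensed_offset_from_car) pairs.
    Each pair implies the car sits at map_position - sensed_offset; treat
    the GPS fix as one more noisy estimate and average them all."""
    implied = [map_pos - offset for map_pos, offset in observations]
    estimates = implied + [gps_fix]
    return sum(estimates) / len(estimates)

# GPS says 103.0 m, but three mapped mailboxes sensed at known offsets
# all imply the car is really near 100 m.
obs = [(112.0, 11.8), (120.0, 20.1), (131.0, 30.9)]
print(corrected_position(103.0, obs))   # pulled back toward 100 m
```

The hard part, as the rest of the thread argues, is the data association this sketch assumes away: knowing which blob in the camera image is which mailbox on the map.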
 
Actually, in principle the computer can do this far *worse* than a human can. The human can take into account really obscure context and the computer has to be programmed to understand the context, to put it bluntly. And the computer *won't* be programmed to understand the context in obscure situations; it just won't happen.

I remember driving down a road where the only signal to the road location was the location of mailboxes. Any human could make that leap of logic, but you'd have to specifically program (or "train", to use the current silly jargon) a computer to do so.

That's in *principle*. In principle, a quality human driver will always be better than any of these computers.

In practice, however, most humans suck at driving. Suck horribly, really really horribly. And driver's licenses are handed out like candy. So the computers will be better than most humans.
I actually think you're wrong here, neroden.

The way most autonomous driving systems work is via machine learning: the system learns what the desired output (vehicle control actions) is for a given set of inputs (radar, cameras, etc.) over time, with guidance from humans teaching it. A big part of what the cars are doing when they're in shadow mode, or otherwise being piloted by their human, is recording how the human driver reacted to that set of inputs.

Since the autopilot system has superhuman sensory perception of the world around it - radar can see through snow better than you can, you don't have a GPS in your head accurate to a metre or so, and you don't have eyes in the back and sides of your head - it necessarily follows that it is taking into account a more accurate picture of the world around it to base decisions on.

To use your example - nobody has to program, or teach it to drive in those conditions by using mailboxes as a visual cue for where the road is, it will simply learn to do that as it watches what humans do in those situations and sees mailboxes along the side of the path it's following in a field of white.
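A toy sketch of that learn-by-watching idea (a nearest-neighbour lookup standing in for the neural networks real systems use; the sensor features and numbers are invented):

```python
# Toy sketch of learning from demonstration ("behavioural cloning"):
# no rule about mailboxes is ever written down. The system records
# (sensor snapshot, human steering) pairs, and when driving it imitates
# the recorded human action whose snapshot looks most like the current one.

def distance(a, b):
    """Squared distance between two sensor feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def imitate(demonstrations, sensors):
    """demonstrations: list of (sensor_vector, steering_angle) pairs
    recorded while a human drove. Return the action from the closest match."""
    _, action = min(demonstrations, key=lambda d: distance(d[0], sensors))
    return action

# Invented sensor vector: (lateral offset of nearest mailbox, road-edge contrast).
demos = [((2.0, 0.9), -0.1),   # mailbox to the right, clear edge: nudge left
         ((0.5, 0.1), -0.4),   # mailbox dead ahead, edge invisible: steer left hard
         ((3.5, 0.8),  0.0)]   # mailbox far right: hold course

print(imitate(demos, (0.6, 0.15)))   # → -0.4 (closest to the second demo)
```

The point of the sketch: the "mailboxes mark the road" behaviour emerges from the recorded examples, not from any hand-written rule.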
 
Humans have some advantages, computers other advantages.
Indeed. Fuzzy pattern matching and identifying "that's just not right" behavior are two of humans' greatest advantages.
Computers are much much better at precision operations with good data.

In your example, the computer would (in the not too distant future) have access to an accurate GPS position, compass bearing and up-to-date maps.
OK, look, we're living in the real world here. Not a fantasy world. The computer would be perfectly likely to have an *inaccurate* GPS position, an *inaccurate* compass bearing, and *out-of-date* maps. Really, think about how things actually work for a moment. If you've ever actually used GPS, compasses, or online maps, I'm sure you've experienced all three of these, *recently*.
 
OK, look, we're living in the real world here. Not a fantasy world. The computer would be perfectly likely to have an *inaccurate* GPS position, an *inaccurate* compass bearing, and *out-of-date* maps. Really, think about how things actually work for a moment. If you've ever actually used GPS, compasses, or online maps, I'm sure you've experienced all three of these, *recently*.
In the not-too-distant future, basically every car will be a Google Street View mapping car. Translating that data into a map means the map could be up to date as of five minutes ago. This requires *massive* data, but this is the direction we are moving.

As for GPS, receivers are getting more and more accurate. In a few years, standard off-the-shelf GPS receivers will also be using satellites from the Galileo system and probably others. With more satellites, accuracy goes up (plus the government can't mess with accuracy for civilians).
 
I actually think you're wrong here, neroden.
Well, you can *think* that, but in fact I'm right and you're wrong.

The way most autonomous driving systems work is via machine learning: the system learns what the desired output (vehicle control actions) is for a given set of inputs (radar, cameras, etc.) over time, with guidance from humans teaching it. A big part of what the cars are doing when they're in shadow mode, or otherwise being piloted by their human, is recording how the human driver reacted to that set of inputs.
It's a statistical correlation system, yes. Based on common data patterns. It does badly if you haven't fed it the right data on the weird situations.

It will NEVER have the level of context-sensitivity that a human is capable of acquiring from a human's years of experience. It will always be an idiot savant.

I can spot signs that something is wrong up ahead which are based on my *general knowledge*, not my driving knowledge. The autopilot will never *have* that general knowledge, because it will never acquire that data. We are a very long way from true AI.

Since the autopilot system has superhuman sensory perception of the world around it - radar can see through snow better than you can, you don't have a GPS in your head accurate to a metre or so, and you don't have eyes in the back and sides of your head - it necessarily follows that it is taking into account a more accurate picture of the world around it to base decisions on.
No, actually, it doesn't. It's missing ludicrous amounts of context which humans get from "general knowledge".

To use your example - nobody has to program, or teach it to drive in those conditions by using mailboxes as a visual cue for where the road is, it will simply learn to do that as it watches what humans do in those situations and sees mailboxes along the side of the path it's following in a field of white.
This could work if it were being trained on the right data. It *could*.

Unfortunately -- and here's the killer point -- the majority of humans are bad drivers and will simply go off the road in these conditions. (And in other conditions, humans won't follow the mailboxes.) The autopilot is being trained by looking at the behavior of typical drivers, which means BAD drivers. Because it has better sensors it will probably do somewhat better than bad drivers.

You can already see this in the rather stupid lane-finding schemes: they've got one based on road lines which fails if they aren't there. And they've got one based on where people actually drive -- but if the majority of people are weaving out of their lane (*which I expect that they are*), then it's just going to copy the bad drivers!

I already said it would do better than bad drivers, and that bad drivers are typical. Will it be a truly good driver? Not if you train it this way.
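The averaging failure mode is easy to illustrate with made-up numbers (real path-learning is far more sophisticated, but the underlying problem is the same: averaging cancels noise, not shared bias):

```python
# Toy illustration of the "learn where people actually drive" scheme:
# if the training fleet systematically cuts a corner, the learned path
# cuts it too. Offsets are metres from the true lane centre at one
# point on a curve.

def learned_offset(fleet_offsets):
    """The naive scheme: the 'lane' is wherever drivers drive on average."""
    return sum(fleet_offsets) / len(fleet_offsets)

good_drivers   = [0.1, -0.1, 0.0, 0.05, -0.05]   # centred, small random noise
corner_cutters = [0.9, 1.1, 0.8, 1.0, 1.2]       # all drifting to the inside

print(learned_offset(good_drivers))    # ~0.0: random noise averages out
print(learned_offset(corner_cutters))  # ~1.0: shared bias does NOT average out
```

In other words, a fleet of drivers who all make the same mistake teaches the system to make that mistake.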
 
Indeed. Fuzzy pattern matching and identifying "that's just not right" behavior are two of humans' greatest advantages.
Computers are much much better at precision operations with good data.


OK, look, we're living in the real world here. Not a fantasy world. The computer would be perfectly likely to have an *inaccurate* GPS position, an *inaccurate* compass bearing, and *out-of-date* maps. Really, think about how things actually work for a moment. If you've ever actually used GPS, compasses, or online maps, I'm sure you've experienced all three of these, *recently*.
Fuzzy pattern matching is the human behavior that machine learning is really trying to mimic.

Early autonomous cars - think DARPA challenge - weren't doing it that way. It was much more like you think: teaching a car to drive by writing the rules of the road into computer code. What Tesla is doing is much closer to Dad teaching a 16-year-old to drive by showing him.

It's a big part of why Teslas can autonomously drive in many conditions that more primitive autonomous vehicles like Google's car cannot.

Machine learning works. It's a problem of collecting a large enough data set for it to be able to infer how to deal with new situations, and Tesla has certainly got the data part down.
 
In the not-too-distant future, basically every car will be a Google Street View mapping car. Translating that data into a map means the map could be up to date as of five minutes ago. This requires *massive* data, but this is the direction we are moving.

As for GPS, receivers are getting more and more accurate. In a few years, standard off-the-shelf GPS receivers will also be using satellites from the Galileo system and probably others. With more satellites, accuracy goes up (plus the government can't mess with accuracy for civilians).
GPS *still* doesn't work reliably in major cities due to skyscraper reflections, or in tunnels, and that hasn't been solved yet.

I know I'm talking about corner cases. My point is *entirely* that there are a lot of corner cases and they are not going to be solved in the near future. Semi-autonomous cars? Yes. Fully autonomous cars? Fantasy.
 
Fuzzy pattern matching is the human behavior that machine learning is really trying to mimic.
Yes, I know this, and it's much much worse at it than humans. Still.

They've finally got to the point where it's slightly better than humans at visual pattern recognition, but that's an extremely narrow area of pattern-matching. It's finally only *somewhat* worse than humans at language translation. It'll be a lot of work before they get it to work better than humans in other problem domains.

And driving is a *much harder problem domain than it at first appears*. The problem looks easy if you're not looking at it carefully, but it's not. *Expressway* driving is a nice controlled environment and can be solved easily. Driving in *general* is a crazy uncontrolled environment, and even getting the behavior right at four-way stops is requiring years of work by the teams working on them.
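Even the "easy" textbook slice of the four-way-stop problem takes some care. A sketch of just the right-of-way rules (first to stop goes first; on a tie, yield to the car on your right) - perceiving arrival order and predicting whether other drivers will actually follow the rules is the genuinely hard part this toy version leaves out:

```python
# Toy right-of-way logic for a four-way stop. Directions are the approach
# sides: a car approaching from the North faces south, so the car on its
# right approaches from the West, and so on around the intersection.

RIGHT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}  # d -> approach on d's right

def next_to_go(cars):
    """cars: list of (approach_direction, arrival_time).
    Return the approach direction that should proceed next."""
    earliest = min(t for _, t in cars)
    tied = [d for d, t in cars if t == earliest]
    if len(tied) == 1:
        return tied[0]
    # Tie: a car may go if the approach on its right is not among the tied cars.
    for d in tied:
        if RIGHT_OF[d] not in tied:
            return d
    return tied[0]  # all four tied: the rule is circular, convention needed

print(next_to_go([("N", 2.0), ("E", 1.0)]))   # → E (arrived first)
print(next_to_go([("N", 1.0), ("E", 1.0)]))   # → N (tie: E yields to its right)
```

And note the last line: with four simultaneous arrivals the textbook rule deadlocks, which is exactly the kind of corner case humans resolve with eye contact and a wave.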
 
Well, you can *think* that, but in fact I'm right and you're wrong.


It's a statistical correlation system, yes. Based on common data patterns. It does badly if you haven't fed it the right data on the weird situations.

It will NEVER have the level of context-sensitivity that a human is capable of acquiring from a human's years of experience. It will always be an idiot savant.

I can spot signs that something is wrong up ahead which are based on my *general knowledge*, not my driving knowledge. The autopilot will never *have* that general knowledge, because it will never acquire that data. We are a very long way from true AI.


No, actually, it doesn't. It's missing ludicrous amounts of context which humans get from "general knowledge".


This could work if it were being trained on the right data. It *could*.

Unfortunately -- and here's the killer point -- the majority of humans are bad drivers and will simply go off the road in these conditions. (And in other conditions, humans won't follow the mailboxes.) The autopilot is being trained by looking at the behavior of typical drivers, which means BAD drivers. Because it has better sensors it will probably do somewhat better than bad drivers.

You can already see this in the rather stupid lane-finding schemes: they've got one based on road lines which fails if they aren't there. And they've got one based on where people actually drive -- but if the majority of people are weaving out of their lane (*which I expect that they are*), then it's just going to copy the bad drivers!

I already said it would do better than bad drivers, and that bad drivers are typical. Will it be a truly good driver? Not if you train it this way.
I'm not prepared to assume that it's simply using an average of what the typical - and we agree, therefore crap - drivers piloting their Model Ss are doing.

I suspect the autopilot software team is spending a great deal of effort categorizing the mountain of data being produced by the fleet to find stellar examples of driving to teach the system with. The system does not have to learn from every driver - you can exclude the bad examples, and over time it will learn what the best drivers do.

As a technology in its infancy, of course it's not going to optimally handle the corner cases right away, but to suggest that it never could, because it doesn't possess some abstract concept of *general knowledge*, is pure folly.

It could learn to spot whatever you saw ahead that was amiss and caused you to proceed with caution - and it will do it faster and better than you can once it knows how - just look at the video where AEB kicked in on detecting the collision in front of the vehicle ahead.

I assert that it will do better than all but the best drivers in most situations very quickly upon reaching maturity as a technology, and eventually could do better than them too. Truly though, it wouldn't matter if it didn't - just getting the majority of average drivers to stop driving themselves would decrease road deaths by a factor somewhere between 10 and 100.
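That "learn only from the good drivers" idea is simple to sketch (the trip fields and scoring below are invented; Tesla's actual data pipeline is not public):

```python
# Toy sketch of curating fleet data before training: score each recorded
# trip and keep only the best examples, so the system learns from stellar
# drivers rather than the average (crap) ones.

def select_training_trips(trips, score, keep_fraction=0.1):
    """trips: raw fleet recordings. score: callable rating a trip
    (higher = better driving). Return the top keep_fraction of trips."""
    ranked = sorted(trips, key=score, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]

# Invented trip records: fewer harsh brakes and lane departures = better.
trips = [{"id": 1, "harsh_brakes": 0, "lane_departures": 0},
         {"id": 2, "harsh_brakes": 5, "lane_departures": 2},
         {"id": 3, "harsh_brakes": 1, "lane_departures": 0},
         {"id": 4, "harsh_brakes": 9, "lane_departures": 7}]

best = select_training_trips(
    trips,
    score=lambda t: -(t["harsh_brakes"] + 3 * t["lane_departures"]),
    keep_fraction=0.5)
print([t["id"] for t in best])   # → [1, 3]
```

With a filter like this in front of the training set, the "it just copies bad drivers" objection loses most of its force.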
 
Yes, I know this, and it's much much worse at it than humans. Still.

They've finally got to the point where it's slightly better than humans at visual pattern recognition, but that's an extremely narrow area of pattern-matching. It's finally only *somewhat* worse than humans at language translation. It'll be a lot of work before they get it to work better than humans in other problem domains.

And driving is a *much harder problem domain than it at first appears*. The problem looks easy if you're not looking at it carefully, but it's not. *Expressway* driving is a nice controlled environment and can be solved easily. Driving in *general* is a crazy uncontrolled environment, and even getting the behavior right at four-way stops is requiring years of work by the teams working on them.
Visual pattern matching is basically the only one that matters for driving. If it's better than humans by your own admission, then we've got the building blocks we need.

I do not challenge your view that driving as a problem is *hard*. I work in R&D for a technology firm as a programmer and product designer, and spent much of my youth building robots for the FIRST Robotics Competition. Trust me when I say that I understand how difficult it is to get a complex electromechanical system to autonomously respond to the world around it, even in very narrowly scoped ways.

All I'm saying is that there is no technical reason a computer with the capabilities Autopilot has couldn't do a better job than a human, provided it knows how.

I believe that eventually it will. You seem to believe the problem domain is so large that it can never be fully solved. On this we disagree.
 
Visual pattern matching is basically the only one that matters for driving. If it's better than humans by your own admission, then we've got the building blocks we need.

I do not challenge your view that driving as a problem is *hard*. I work in R&D for a technology firm as a programmer and product designer, and spent much of my youth building robots for the FIRST Robotics Competition. Trust me when I say that I understand how difficult it is to get a complex electromechanical system to autonomously respond to the world around it, even in very narrowly scoped ways.

All I'm saying is that there is no technical reason a computer with the capabilities Autopilot has couldn't do a better job than a human, provided it knows how.

I believe that eventually it will. You seem to believe the problem domain is so large that it can never be fully solved. On this we disagree.

There is *some* unknown distribution of driving situations where a machine is better, and a distribution where the human is better. Neroden is correct in describing these systems as having poor higher level reasoning. It will be a while before the machine can detect that a box in a pickup truck is likely to come loose and fall out the back, but it can react extremely fast once that box does fall. Ultimately this is empirical rather than philosophical whether the behavior of the machine is sufficient to trust entirely without a human driver, but there will be corner cases and failures and reasons for people to worry for many years. I'm kind of optimistic that 'fatal' accidents can be reduced substantially and that this might dominate the conversation, but you are gonna have situations like the car choosing to hit a dog rather than the cat it was chasing (which is clearly a failure of high level reasoning).

I also worry about those weird situations in parking lots where everybody just jams up because things are in the way, and you have to coordinate a solution to unlock the puzzle, etc. Or maybe you are at a sports game and a human is telling you where to park, or you are parking on someone's lawn but need to not drive over the flowers, etc. If there's not a steering wheel there's gonna be some situations handled poorly.

To me though, this is all stuff that will somehow and someway be overcome over many years. It only matters that Tesla remains the leader.
 
There is *some* unknown distribution of driving situations where a machine is better, and a distribution where the human is better. Neroden is correct in describing these systems as having poor higher level reasoning. It will be a while before the machine can detect that a box in a pickup truck is likely to come loose and fall out the back, but it can react extremely fast once that box does fall. Ultimately this is empirical rather than philosophical whether the behavior of the machine is sufficient to trust entirely without a human driver, but there will be corner cases and failures and reasons for people to worry for many years. I'm kind of optimistic that 'fatal' accidents can be reduced substantially and that this might dominate the conversation, but you are gonna have situations like the car choosing to hit a dog rather than the cat it was chasing (which is clearly a failure of high level reasoning).

I also worry about those weird situations in parking lots where everybody just jams up because things are in the way, and you have to coordinate a solution to unlock the puzzle, etc. Or maybe you are at a sports game and a human is telling you where to park, or you are parking on someone's lawn but need to not drive over the flowers, etc. If there's not a steering wheel there's gonna be some situations handled poorly.

To me though, this is all stuff that will somehow and someway be overcome over many years. It only matters that Tesla remains the leader.

And it could be argued that difficulty on this topic is good for Tesla since it keeps this from being a commoditized feature like heated seats, and more of a product differentiation feature which leads to yummy margins. I say create a DEATH RACE and throw all the available autopilot cars into it and see who wins.
 
There is *some* unknown distribution of driving situations where a machine is better, and a distribution where the human is better. Neroden is correct in describing these systems as having poor higher level reasoning. It will be a while before the machine can detect that a box in a pickup truck is likely to come loose and fall out the back, but it can react extremely fast once that box does fall. Ultimately this is empirical rather than philosophical whether the behavior of the machine is sufficient to trust entirely without a human driver, but there will be corner cases and failures and reasons for people to worry for many years. I'm kind of optimistic that 'fatal' accidents can be reduced substantially and that this might dominate the conversation, but you are gonna have situations like the car choosing to hit a dog rather than the cat it was chasing (which is clearly a failure of high level reasoning).

I also worry about those weird situations in parking lots where everybody just jams up because things are in the way, and you have to coordinate a solution to unlock the puzzle, etc. Or maybe you are at a sports game and a human is telling you where to park, or you are parking on someone's lawn but need to not drive over the flowers, etc. If there's not a steering wheel there's gonna be some situations handled poorly.

To me though, this is all stuff that will somehow and someway be overcome over many years. It only matters that Tesla remains the leader.
Driving is hard. But there are several parts to AI driving. The first is sensors, still a ways off. Next is data, and we know that real-world data input is happening. Last is machine learning, and with AI it is possible to have multiple systems and supercomputers review the tasks and hence accelerate the learning.

Humans are still better at parts of driving for now, but feed that data into multiple supercomputers....

Google's DeepMind beat a Go master....
 