Open Letter to Elon Musk

This open letter is a bit, to a lot, over the top. Autopilot is an option. You do not have to pay for it, you do not have to use it. However, it's completely unfair and unreasonable to suggest other owners who are willing to accept the risk shouldn't be allowed to because "you" don't think they should be.

I encourage Tesla to keep pushing the envelope, keep driving progress, keep bringing technology to the marketplace. No one is forcing anyone to use any of this, and so long as that is the case, I don't see what logical reason anyone has for these "open letters".

Jeff
 
...I am making the attempt to help you save yourself as well as the many people who will lose their lives

It's the everyday heroes among us that give me hope.

What is at stake here, in addition to the lives of these innocent people, are your employees, shareholders, most of the AV industry because they follow you, your company and you personally.

Never thought about it this way - a lot is at stake!

...famous to ... infamous... saving lives to ...costing lives... ethical to unethical....one of the greatest visionaries...of all time to... a monumental hypocrite. A person who becomes or does the things they say they rail against.

Please stop with the amazing writing. You are hurting my heart - Foster Wallace and Joyce, look out.

...take a deep breath... and find the courage to do the right thing.

The right thing is hard - but it's the right thing.

...I have done somethings that have earned me some ethical and engineering credibility.

Get out. You mean you write, you're an engineer AND you have ethical credibility? Did I say Joyce needs to watch his back? Scratch that - Leonardo da Vinci is the one who should be nervous.


Should I not convince you to do the right thing here I will do my best to find another way to force a change.

Dude please - please. You've done enough. We need you - we need your mind. Don't put yourself in danger - for us. The best thing you can do for us is to stay safe and keep writing.

The issue is your use of public shadow driving as a primary or significant means of creating your autonomous technology. Versus the use of aerospace level simulation.

Well - DUH! WTF Elon!

I will start with the punch line – you will never reach autonomy using this approach.

There's a punchline here - but it's ELON.

You will never save lives using it.

Well damn - that's all the case I need to hear. Get me outta this damn Tesla and somebody find me a Cadillac.

...you and the majority of the industry, using this approach, are not competent or ethical enough to continue.

Wait. Wait. What if you - @imispgh - can police Elon? Like - you could be a special rep of TMC who can sit on Tesla's board, perform surprise inspections of Tesla's simulation labs - and then report back to the forum for a feedback roundtable with Elon and the rest of us? This could be a win-win. Don't say I'm crazy until you think it over. PM me - I have some high level contacts inside Tesla who might consider this idea (seriously).

In an effort to keep this letter concise please find links two of my articles.

Concise writing is for the weak and ADHD challenged. Give us everything you've got, please.

"Letter to Congress"..."Who will get to Autonomous Level 5 First and Why"

Did you ever see that Frank Capra movie, "Mr. Imispgh Goes to Washington"? Seriously - you inspire me that way. PM me for my Tesla contacts.


Michael DeKort

And THIS is why Michael DeKort is a BOSS - he uses his real name. Bravo. Bravo. Bravo.
 
What testing services?

Sometimes people do the right things for the right reasons.


From your LinkedIn:

Founder
Professionals for Responsible Mobility
May 2017 – Present (5 months), Greater Pittsburgh Area

I am in the process of creating an international Autonomous Vehicle and Mobility Simulation Association and Trade Study/Exhibit. The purpose is to help navigate the industry to a new paradigm. One that replaces most of the public shadow driving AI companies are using to achieve level 4 and 5 autonomy with simulation. The reasons for this being that AI, while clearly having value, also has its significant weaknesses. Weaknesses so significant that autonomous levels 4 and 5 cannot be reached using it as the primary method for AI, engineering and testing.
 
Maybe his first client is Cadillac? :)
 
No one forces owners to pay $8,000 pre-delivery or $10,000 post-delivery.

As long as they volunteer to pay to be in the program, I am for it.

What counts is an owner's consent.

If you don't like it, you don't pay for it!
The accident you referred to didn't involve a fully autonomous system. The driver was responsible for what the car did in the end.
 
The false implication in the news stories was that Josh continued to ignore these repeated reminders (and keep his hands off the wheel) until he finally struck the truck. In reality, Teslas are designed so that every few minutes if the car does not sense hands on the steering wheel, it provides the driver with a visual reminder. If that is ignored, it then gives an audible reminder. If that is ignored, the car will slow down and stop. Understanding how this technology works, we now know Joshua responded by putting his hands on the steering wheel. Aware of both the vehicle’s abilities and limitations, Joshua followed the prompts of the Tesla with each series of indications received. Otherwise, the Tesla would have automatically slowed and stopped.

This, I believe, was unfortunately not true - or at least misleading - at the time of the Joshua Brown incident. Originally, AP1 did not really enforce this the way it does nowadays - otherwise people couldn't have been filming from the back seat, etc.
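
For what it's worth, here is the escalation being described, sketched in Python - purely my own illustration with made-up timings, not anything Tesla has published:

    # Illustrative sketch only: the escalation described above, with invented
    # threshold values. Not Tesla's actual logic or timings.
    VISUAL_AFTER_S = 120    # assumed: visual reminder after this long hands-off
    AUDIBLE_AFTER_S = 135   # assumed: chime if the visual reminder is ignored
    STOP_AFTER_S = 150      # assumed: slow and stop if the chime is ignored

    def next_action(seconds_hands_off: float) -> str:
        """Map time since the last detected wheel torque to an escalation step."""
        if seconds_hands_off >= STOP_AFTER_S:
            return "slow_and_stop"
        if seconds_hands_off >= AUDIBLE_AFTER_S:
            return "audible_reminder"
        if seconds_hands_off >= VISUAL_AFTER_S:
            return "visual_reminder"
        return "none"

Whether the original thresholds were strict enough is exactly the question here.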
 
I can kind of see the OP's point.

I guess everyone driving AP2 wonders at times if Tesla's super-bold approach to autonomy isn't detrimental to the company and its mission in the end. We make fun of Audi's Level 3 approach on TMC all the time, but there is something to be said for being conservative about limits and for the redundancy they are investing in...

Tesla, ever since the early AP1 days, has decided to be more agile and do more with less development time, fewer sensors, etc. I mean, it isn't completely unfathomable that this approach is one disaster away from collapse, even though I hope it never comes to that. The state of AP2, while the driver is responsible, feels in part downright dangerous.

So Tesla certainly has an aggressive approach - and seems to look forward to using real world fleet data to develop the safety of the system. If that is what OP is critiquing, I can see some merit to it. Time will tell if and how Tesla's approach works or not.
 
The accident you referred to didn't involve a fully autonomous system. The driver was responsible for what the car did in the end.

That accident is relevant because:

Google believes it is irresponsible to release a semi-autonomous system to the public as a bridge to the fully autonomous goal. It noticed that drivers would become complacent and overconfident, thinking the semi-autonomous system worked so well that they would no longer take responsibility as the main monitor of road conditions.

Tesla believes it is unethical to withhold from the public any incremental progress, such as a semi-autonomous system, while working toward the fully autonomous goal.
 
If the OP actually did have a good idea (referring to 'his method'), he's chosen the absolute worst way to get the info to Elon. Grandstanding on public forums, rather than reaching out to have an actual conversation, leaves motives in doubt (for me, at least).

@imispgh, Elon has always been clear that he's open to hearing new ideas that are accompanied by defensible data. That's not what's happening here.
 
I say almost zero power because I have done somethings that have earned me some ethical and engineering credibility
And the bombastic attack on Musk that is the thrust of your full "open letter" squanders every bit of that ethical and engineering credibility you claimed.

And yes, "attack" is exactly what it is. Your tome is loaded with words and phrases that carry highly negative connotations, along with outright insults to the man. Rather than a reasoned presentation of your point of view, you try (rather ineffectively) to force your viewpoint to be heard through unreasonable accusations and fear-mongering.

Your approach is an embarrassment to engineers everywhere.
 
Google believes it is irresponsible to release a semi-autonomous system to the public as a bridge to the fully autonomous goal. It noticed that drivers would become complacent and overconfident, thinking the semi-autonomous system worked so well that they would no longer take responsibility as the main monitor of road conditions.

Fair point.

There are sort of three... ahem... "levels" of approach to autonomy:

1) Tesla's renegade approach - take responsibility for nothing, but release increasingly aggressive self-driving features on an agile basis.

2) Audi's approach - take responsibility for self-driving, but create gradual, piecemeal solutions to get there faster, yet still in a responsible manner (first a Level 3 traffic-jam pilot, then a Level 4 highway pilot and a Level 4 parking-lot system)...

3) Google's approach - skip Level 3 entirely (many manufacturers seem to plan that, and in Google's case even Level 2 driver's aids are skipped) until Level 4-5 is possible, to avoid confusion.
 
This, I believe, was unfortunately not true - or at least misleading - at the time of the Joshua Brown incident. Originally, AP1 did not really enforce this the way it does nowadays - otherwise people couldn't have been filming from the back seat, etc.
Actually, I think it IS true. As I recall, I have had to at least wiggle the steering wheel periodically ever since I received my AP1 car - and that was in late March 2016 (Mr. Brown's accident was in May).
Yes, they did strengthen it (i.e., made the need more frequent at highway speeds) after Mr. Brown's demise, but it was there earlier, though not at the very beginning (your reference to the back seat videographers being the evidence).
 
The issue is your use of public shadow driving as a primary or significant means of creating your autonomous technology. Versus the use of aerospace level simulation.

I... think you're misunderstanding what "shadow driving" is. "Shadow driving", as used and defined by Tesla, involves running the AP software in simulation on the car when it's being manually driven and recording the car's reaction to an event and projected results, the human's reaction to the same event, and comparing the two to see whether the software is acceptable or whether the event needs to be added to the training data.

So yes, Tesla is allowing people to run their cars with "shadow driving", in that they allow people to drive the cars themselves. For that matter, so does every single car manufacturer in existence, except maybe Waymo.
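
To make the comparison concrete, here is roughly what shadow-mode logging could look like - my own conceptual sketch, with every name and threshold invented, not Tesla's published design:

    # Conceptual sketch of shadow driving: the autonomy software runs passively
    # while a human drives; frames where its proposed controls diverge from the
    # human's actual inputs are flagged as candidate training data.
    from dataclasses import dataclass

    @dataclass
    class Controls:
        steering: float  # normalized -1.0 .. 1.0
        throttle: float  # normalized  0.0 .. 1.0
        brake: float     # normalized  0.0 .. 1.0

    DIVERGENCE_THRESHOLD = 0.2  # assumed tolerance before an event is logged

    def divergence(human: Controls, shadow: Controls) -> float:
        """Crude distance between what the driver did and what the software proposed."""
        return max(abs(human.steering - shadow.steering),
                   abs(human.throttle - shadow.throttle),
                   abs(human.brake - shadow.brake))

    def process_frame(sensor_frame, human: Controls, planner, event_log: list):
        """Run the planner in shadow (its output is never actuated) and log disagreements."""
        shadow = planner(sensor_frame)  # the software's proposed controls
        if divergence(human, shadow) > DIVERGENCE_THRESHOLD:
            event_log.append((sensor_frame, human, shadow))  # upload candidate

The point is that the comparison runs during ordinary manual driving - no Autopilot engagement required.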
 
Actually, I think it IS true. As I recall, I have had to at least wiggle the steering wheel periodically ever since I received my AP1 car - and that was in late March 2016.
Yes, they did strengthen it (i.e., made the need more frequent at highway speeds) after Mr. Brown's demise, but it was there earlier, though not at the very beginning (your reference to the back seat videographers being the evidence).

Yeah, but that's the "misleading" part in my books.

The report says that AP1 was pinging Joshua Brown very regularly, much more so than just every few minutes, and that the driver did not respond to all of these reminders.

Did Brown respond to some reminders? Possibly. But that in no way would seem to negate the finding that not responding to many of the reminders (doing something else? computer and emails were speculated by some who knew him?) was a part of this equation - and AP1 did not, at the time, force his hand on this anywhere near as aggressively as it does these days...
 
I... think you're misunderstanding what "shadow driving" is. "Shadow driving", as used and defined by Tesla, involves running the AP software in simulation on the car when it's being manually driven and recording the car's reaction to an event and projected results, the human's reaction to the same event, and comparing the two to see whether the software is acceptable or whether the event needs to be added to the training data.

So yes, Tesla is allowing people to run their cars with "shadow driving", in that they allow people to drive the cars themselves. For that matter, so does every single car manufacturer in existence, except maybe Waymo.

I think the OP does understand and is advocating a different approach than basing things on human drivers (OP's reference to "aerospace level simulation"). Seems like a controversial position from the OP, but it doesn't seem to me that he doesn't understand that...

All that said, I did read some sentiment from OP that he is also fearful of Tesla's agile progress on driver's aids and self-driving, which is part of the fleet learning process of course...
 
I... think you're misunderstanding what "shadow driving" is. "Shadow driving", as used and defined by Tesla, involves running the AP software in simulation on the car when it's being manually driven and recording the car's reaction to an event and projected results, the human's reaction to the same event, and comparing the two to see whether the software is acceptable or whether the event needs to be added to the training data.

So yes, Tesla is allowing people to run their cars with "shadow driving", in that they allow people to drive the cars themselves. For that matter, so does every single car manufacturer in existence, except maybe Waymo.
THANK YOU. Finally somebody points it out. Shadow driving is, I'm sure, also done in Autopilot mode, but Tesla has said for a long time that shadow driving is also done during manual driving. One would assume that the video uploads underway now include events where the manually driven car's actions diverge in some way from the shadow car's.
 
I think the OP does understand and is advocating a different approach than basing things on human drivers (OP's reference to "aerospace level simulation"). Seems like a controversial position from the OP, but it doesn't seem to me that he doesn't understand that...

All that said, I did read some sentiment from OP that he is also fearful of Tesla's agile progress on driver's aids and self-driving, which is part of the fleet learning process of course...
I don't think the OP understands it - if he did believe shadow driving takes place during manual driving as well as Autopilot driving, he wouldn't have an argument. His argument gets off the ground based on the assumption that "public shadow driving" means simply waiting for Autopilot to screw up and then recording the manual intervention. Even if that were the only way Autopilot learned, it would only be a crap way of improving if we ALSO had data to show that Autopilot is statistically dangerous. We do not have that data. We have a bunch of pronouncements from the OP, other skeptics and competing companies.
 
So Tesla certainly has an aggressive approach - and seems to look forward to using real world fleet data to develop the safety of the system. If that is what OP is critiquing, I can see some merit to it. Time will tell if and how Tesla's approach works or not.

Although I like Tesla taking a forefront approach, pushing a semi-autonomous system so that it can use accurate real-world data from its fleet, this needs to come with an assurance that it will stay active in constantly improving the system toward FSD. The way things are right now, AP2 progress seems stagnant and in some areas has even regressed. If you're going to be aggressive in furthering autonomy, this stagnation needs to be avoided at all costs, since it risks putting someone in harm's way by leaving them with unreliable software.
 