Well, you can't accuse Elon of not being transparent. He wants to move volume at any margin so that FSD can make up the profit afterwards. That's not a business plan Wall Street agrees with. If FSD works someday, what Elon is doing will have been the right business move.
Even if it eventually works, how many people are actually going to put their personal vehicle on the taxi network? I'd like a car that drives itself, but I'll never rent it out, so it will never have the value Elon likes to claim. I'll also never bother trying to run another vehicle, or a fleet of vehicles, as a robotaxi, because there are easier ways to make money.
 
I honestly think Musk's vision is pumping out metal at break-even margins and collecting the money via autonomy. There will be people who will pay astronomical prices for "the appreciating asset" because, after all, aren't you buying a money printer? In an efficient capitalist market, the money printer will be fully utilized, generating a return.

So yeah, as an investor you either bet on Musk's vision of autonomy or you don't. Musk has said many times that if autonomy doesn't work, Tesla is worth very little. He will keep talking about it and annoying Wall Street and shareholders because he doesn't want anyone to mistake what his vision for the company is when it comes to future operating income. What can I say, the guy is addicted to risk, even with everyone screaming at him that it's not necessary.
 
Sure he could tank the value of the company if he wants. Hopefully it doesn't come to that but I continue to take profits and sell covered calls as the share price rises just in case.
Well, at least there's one good thing about Elon talking about FSD every chance he gets: it means Tesla has yet to face any real regulatory hurdles despite all the nonstop probing and complaints.
 
11.4.4 FSDb did a full reversion for me, with no update. It has been a few weeks, and after a few odd behaviors here and there, it's starting to act like it did with 11.4.1. We are back to driving the merge lane all the way to the end, stopping early for signs, and inconsistent left turns. All of the good things that came with 11.4.4 vanished; the good news is that all of the crazy issues it introduced also vanished. At this point it feels like the micro-improvements have more to do with the map updates and less to do with the actual firmware. Someone mentioned trying a drive without using the GPS navigation; maybe I'll give that a spin before 11.4.6.
 
'17 TMX, 11.4.4 -- 1) Just finished a 5,300-mile road trip this week, and FSD Beta on the highway screws up a LOT. Multiple times it tried to jump into a turning lane running parallel to the two through lanes, AT highway speed. The map and display showed it was supposed to go straight. It would NOT have had time to brake at that speed, and I had to jerk it back into the lane.

2) When taking a normal exit, it abruptly enters the turning lane, goes too far, then overcorrects back, resulting in a very jerky exit.

3) It also very oddly slows down in the exit lane on the off-ramp, going way too slow and stopping way too far back from the stop sign. Very far from human-like.

4) There are other examples of odd behavior, like ping-ponging just enough to be annoying; it depended a little on the road surface or whether there were any crosswinds, BUT normal AP before FSDb was rock solid and smooth in all of these scenarios.
 
To follow up on my post above about the 11.4.4 reversion: I did a drive with no destination and it performed significantly better. It stopped where it should, lefts were fine, highway merges were fine, and, believe it or not, there were fewer nags. It really seems like the map data loaded when you enter a destination has a significant influence on how FSDb performs.
 


Curious when you say "lefts were fine" - how does FSDb turn left at an intersection without a destination? Is it initiated with a manual turn signal or something?
 
For what it's worth, our former member Discoducky has told me that James Douma knows what he's talking about when it comes to Tesla's AI approach. Discoducky actually worked on the Autopilot software development team in 2014-2015, had weekly meetings with Elon and worked with Ashok Elluswamy to build Tesla's AI team. He has also told me that Green often thinks he knows more than he actually does. James probably gets stuff wrong sometimes and has biases and blind spots like everyone else, but in general I am personally inclined to give him credibility.

I believe @kbM3's rebuttals were valid as well. On what basis can we confidently conclude that necessity is the reason both nodes of the HW3 computer are being used to run the net? With FSD Beta being a Level 2 ADAS that still requires active human oversight, is computer redundancy even a priority right now? In the event of a core failure, the driver should be the second layer of protection.

Tesla has been redesigning the neural net architecture frequently, and it would not be surprising if they were deliberately allowing bloat in order to save engineering time and training compute and so speed up iteration cycles. Premature optimization is the root of all evil. It is a fact that neural nets can be shrunk with optimization, but how much FSD can be compressed is uncertain. Considering that none of us here are working for Tesla AI, we are left with no option but handwaving about the possibility of squeezing a future level 4 or 5 version to fit in a single HW3 node.
 
Doubt it. They use as much compute as they can, not because of bloat but because neural networks tend to perform better the bigger they are. They trade off the size of the networks, the number of networks, and the frame rate, and try to fit it all within the compute budget they have.
[Attached screenshot: model comparison from the study, showing compute (FLOPs) and task performance for different network sizes]

I.e., if their compute budget were 3.5e+13 FLOPs, they would have chosen MoS Large for the task in the study.
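To make that concrete, here is a minimal Python sketch of that kind of selection: pick the best-performing network whose per-second compute (FLOPs per frame times frame rate) still fits the budget. Everything except the 3.5e+13 budget and the "MoS Large" name is made up for illustration and not taken from the study.

# Hypothetical candidates: (FLOPs per frame, frames per second, accuracy-style score)
BUDGET_FLOPS_PER_SEC = 3.5e13

candidates = {
    "MoS Small":  (2.0e11, 36, 0.78),
    "MoS Medium": (5.0e11, 36, 0.83),
    "MoS Large":  (9.0e11, 36, 0.88),
    "MoS XL":     (2.0e12, 36, 0.90),
}

def fits(flops_per_frame, fps):
    # A network "fits" if running it at its frame rate stays under the budget.
    return flops_per_frame * fps <= BUDGET_FLOPS_PER_SEC

best = max(
    (name for name, (flops, fps, _) in candidates.items() if fits(flops, fps)),
    key=lambda name: candidates[name][2],
)
print(best)  # -> "MoS Large": the biggest candidate that still fits under 3.5e+13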

They bought a company, DeepScale.ai, whose people were experts at optimizing neural networks, and they have refined that skill over the years since. Likely they have an entire pipeline from dataset to deployed neural network, with many optimization steps to get it to fit within the compute envelope they have. I would guess they have some score function with lots of performance metrics baked in, they use whichever network scores best on it, and they have an entire team trying out new network parameters to improve the score, with a lot of the tuning done automatically.
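A rough sketch of what that kind of automated selection could look like, with the score function, metric weights, and cost model all invented for illustration (none of this is Tesla's actual tooling): generate candidate network configurations, discard any that exceed the compute envelope, and keep whichever scores best on a blend of metrics.

import random

COMPUTE_ENVELOPE = 3.5e13  # FLOPs per second, reusing the figure above

def score(metrics):
    # Hypothetical weighted blend of performance metrics.
    return (0.5 * metrics["detection_recall"]
            + 0.3 * metrics["lane_accuracy"]
            - 0.05 * metrics["latency_ms"] / 100.0)

def random_config():
    # Stand-in for automatically trying out new network parameters.
    width = random.choice([256, 384, 512, 768])
    fps = random.choice([24, 36])
    return {
        "width": width,
        "fps": fps,
        "flops_per_sec": width * 7e8 * fps,           # made-up cost model
        "metrics": {
            "detection_recall": 0.70 + width / 4000,  # made-up metric curves
            "lane_accuracy": 0.75 + width / 5000,
            "latency_ms": width / 10,
        },
    }

candidates = [random_config() for _ in range(200)]
feasible = [c for c in candidates if c["flops_per_sec"] <= COMPUTE_ENVELOPE]
best = max(feasible, key=lambda c: score(c["metrics"]))
print(best["width"], best["fps"], round(score(best["metrics"]), 3))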
 
Weirdly, Buckminster reposted Gigapress's post from the main thread when the topic was banned, but not my reply to it, which was the last post before the ban. There's no easy way to neatly reply to the way he reposted it without a ton of manual effort, so best effort below:


Regarding kbM3's ideas being reasonable in claiming there's no evidence they are out of compute:


I literally quoted his own expert, James Douma, saying the NNs being used locally on-car now were too large to run in a single node anymore.

For a third time now I'm mentioning that, because it seems like people keep pretending it wasn't presented?


That's in addition to all the further evidence from Green, and to a simple understanding of what crossing nodes costs you: it's flat-out costly from a compute perspective to split anything across nodes, so the ONLY reason you'd do it is that you don't have a choice.

Hell, one of the big innovations in Dojo was massively faster interconnects, precisely to avoid the large "move across chips" hits that traditional designs like the FSD computer take.
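As a back-of-the-envelope illustration of why crossing nodes is costly (every number below is hypothetical, not an actual HW3 or Dojo spec): splitting a network across two nodes halves the raw compute time per frame, but adds an interconnect transfer that a single-node run never pays, and over a slow link that transfer can easily dominate.

# All numbers are made up for illustration only.
per_frame_flops = 5e10        # hypothetical work for one network on one camera frame
node_flops_per_sec = 3.6e13   # hypothetical effective throughput of a single node
transfer_bytes = 8e6          # hypothetical intermediate activations shipped between nodes
link_bytes_per_sec = 1e9      # hypothetical node-to-node link bandwidth
link_latency_sec = 2e-4       # hypothetical per-transfer latency

single_node = per_frame_flops / node_flops_per_sec
split_nodes = (per_frame_flops / (2 * node_flops_per_sec)
               + transfer_bytes / link_bytes_per_sec
               + link_latency_sec)

print(f"single node:        {single_node * 1e3:.2f} ms/frame")   # ~1.4 ms
print(f"split across nodes: {split_nodes * 1e3:.2f} ms/frame")   # ~8.9 ms: the transfer dominates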


Regarding the claims that there's no evidence they intended full redundancy on HW3 (redundancy which is impossible now that both nodes are being fully used to run a single instance of the software stack instead of a full stack running in each):


At AI day we had Jim Keller (presumably we can agree HE is an expert on his subject?) telling us they designed not only for redundancy, but so that the FULL stack could do planning on each node independently, then check the two against each other before executing, to ensure everything was working correctly.

Which seems like a good plan regardless of SAE level, but again it is not possible if you can't run the full stack inside a single node.
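In the abstract, that cross-check could look something like the minimal Python sketch below (hypothetical code, not anything Tesla has published): each node independently produces a plan from the same inputs, and the car acts only if the two plans agree within a tolerance; a mismatch is treated as a fault.

def plans_agree(plan_a, plan_b, tol=0.05):
    # Compare two planned trajectories (e.g. lists of steering setpoints).
    return len(plan_a) == len(plan_b) and all(
        abs(a - b) <= tol for a, b in zip(plan_a, plan_b)
    )

def select_action(plan_a, plan_b):
    # Execute only when both redundant nodes produce matching plans.
    if plans_agree(plan_a, plan_b):
        return plan_a                                      # lockstep agreement: act on it
    return "fallback: alert driver / come to a safe stop"  # mismatch: assume a fault

# Hypothetical example: node B diverges on the last setpoint, so nothing is executed.
print(select_action([0.10, 0.12, 0.15], [0.10, 0.12, 0.40]))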


On Gigapress's remark defending those who believe they can still make L4/L5 work despite all these facts, specifically in reply to his "we are left with no option but handwaving about the possibility of squeezing a future level 4 or 5 version to fit in a single HW3 node":


Indeed. It's certainly, theoretically, possible that Tesla will somehow shrink all the existing code by at least 50%, and then somehow add a slew of additional, higher-complexity functionality that doesn't exist today (a complete OEDR, among other things) without taking up ANY added space. Technically, if you squint enough, that's something you can't 100% rule out. It's phenomenally unlikely and requires a lot of hopium, but it's possible.

But he was trying to claim there wasn't evidence they'd run out of single-node compute NOW, when there's an abundance of evidence for exactly that, evidence I've already cited, including from his own expert.




Mind you, none of this is new (the running-out-of-single-node-compute point dates to mid-2020, and the fact they intended redundancy goes back to AI day in 2019), and it's been discussed in detail many times in this subforum. But it seems there are still folks in denial.
 