jebinc
Well-Known Member
He just said "hopefully not," not "would" or even "should." So, for the hopeful, it did not happen:
Elon Musk on Twitter
While "hope" may be a strategy for some, consider that it may not be the best one to rely on.
They are still using branches. We use a modern devops approach at work and we also use branches. Each patch gets a branch until it is good enough to be merged into the master branch. The number of concurrent branches depends on the size of the team.

Yeah, OK, I'm just old school. But...
The evidence suggests Tesla is too. Why would early access users get such an old release, with many features missing that are already on main? If it were just feature-flag methodology, there would be no reason to disable current main features and send such a release to EA users. That's negative progress for those users, and it is what has happened: the current EA release did not have features that were released from main at the same time. I really do think Tesla is trying to isolate the EA features so they can be tested in a relative vacuum.
Still, today EA users are on 2019.20.4.6, sent out the second week of July. If we believe Tesla's release labeling, this release was branched from main in May and took another two months to get to end users.
If feature enabling were the rule, releases would come much quicker and with less incremental implementation. Now we wait for V10 to go to EA users. I'm gonna go out on a limb and say that it will be labeled 2019.28 or so since it was likely branched off main in mid-July and will take until sometime in September to get out to users.
Agree with all that. If a branch exists for a long time, the difficulty of merging it back rises pretty quickly, so keeping branches alive for only a few days is ideal. Merge main out (it's probably not changed much) and then merge the branch in. Pretty standard stuff.
The big difference is that our branches rarely live for more than 2-3 days, 7 days at most. If they do, then the mistake was made in scheduling: the patch was too complex and could probably have been split into smaller dependent pieces.
The fact that Tesla has diverged from the main branch for about 3 months suggests that they are still doing some very significant rewriting and restructuring, or maybe they just don't structure their software department very well. Even significant restructuring or replacing central infrastructure can usually be fitted into weekly patching with correct planning (usually: 1. partial use, 2. replace 1:1, 3. introduce the feature, 4. activate the feature).
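That staged rollout (partial use → 1:1 replacement → introduce → activate) is essentially feature flagging: new code ships dark and is switched on later. A minimal sketch of the idea in Python — the flag names, planner functions, and stage semantics are all invented for illustration, not Tesla's code:

```python
# Hypothetical sketch of a staged rollout behind a feature flag.
# The stages loosely mirror the four steps above; nothing here is Tesla's code.

FLAGS = {
    "new_planner": "shadow",   # "off" | "shadow" | "on"
}

def legacy_plan(obstacles):
    # Old implementation: stop for anything ahead.
    return "stop" if obstacles else "go"

def new_plan(obstacles):
    # New implementation being rolled out incrementally.
    return "slow" if obstacles else "go"

def plan(obstacles):
    mode = FLAGS["new_planner"]
    if mode == "on":                 # step 4: feature activated
        return new_plan(obstacles)
    if mode == "shadow":             # steps 1-3: run new code, keep old result
        _ = new_plan(obstacles)      # output would be logged/compared, not used
    return legacy_plan(obstacles)

print(plan(["car"]))   # flag still in shadow mode -> legacy result: "stop"
```

The point of the shadow stage is that the new code rides along in every release and is exercised on real inputs, so activating it later is just a flag flip rather than a long-lived branch merge.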
Earlier talks by Karpathy suggest they also iterate their neural networks pretty much the same way as traditional code: by committing changes to the datasets in a repo that is built (trained) by the servers, keeping their NN layout descriptors as code, and unit-testing through their simulators and the "shadow mode" we hear so much about.
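Iterating a network like code implies the test cases live in the repo too: each new capability comes with versioned scenarios the trained model must solve before a merge. A hedged sketch of that pattern — the scenario format and the model stub are invented for illustration:

```python
# Sketch of "dataset + expected behavior" committed as versioned test cases.
# The scenarios and the policy stub are illustrative, not Tesla's format.

SCENARIOS = [
    # (observation from the simulator, behavior the model must produce)
    ({"light": "red",   "lead_car": False}, "stop"),
    ({"light": "green", "lead_car": False}, "go"),
    ({"light": "green", "lead_car": True},  "follow"),
]

def model(obs):
    # Stand-in for a trained network's policy output.
    if obs["light"] == "red":
        return "stop"
    return "follow" if obs["lead_car"] else "go"

def run_regression(scenarios, policy):
    """Return the scenarios the policy fails; an empty list means pass."""
    return [obs for obs, expected in scenarios if policy(obs) != expected]

failures = run_regression(SCENARIOS, model)
print("failures:", failures)   # -> failures: []
```

With this setup, "adding a feature" to the network means committing new scenario cases alongside the dataset change, and the build (training run) is green only when the resulting model clears them all.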
At work we do things that have never been done before too; although not quite self-driving, it is considered highly innovative and disruptive. I lead the software development crew for this company.

Everyone uses branches. The question is how difficult and extensive the new branch is. I disagree that incremental changes can be integrated every few days or a week for the most difficult problems.
Projects with known solutions are easy, but problems for which a solution is unknown require a lot of time for experimentation. The research can take far longer than a few days or weeks, especially for open-ended behaviors that react to the real world.
However, we all seem to be saying the same thing - that there is either brand new code or a significant rewrite afoot.
What do you mean by the core of FSD? NN or procedural code or both?

But while we're on the subject: small things like traffic sign recognition can be incrementally added without side effects. However, my sense is that the core of Tesla's FSD needs a complete rewrite, not just a series of incremental changes. The wide gulf between the tech day demos and the current EAP implies that FSD is a completely different animal.
Didn't Karpathy talk about software branches in his most recent talk about multitasking? He mentioned how tricky it can be because one team may have developed say recognizing traffic lights and added it to software version A while another team has developed say recognizing road debris but they were working off of software version B. So now you have to combine A and B.
Yepp, exactly.
You can source control the training material as well as the neural network description and the traditional "software 1.0" code around it. Add cases which the system should be able to solve for each new feature.

With neural networks, I'm not sure how you would combine them after the fact.
Off the top of my head, I'd think that the network's variables can't be mixed or operated on arithmetically to produce a network that does both.
You can of course run the two NNs in sequence, but that doubles the processor workload.
Actually, deciding which pieces you can combine effectively into one network and which should be separated for a given camera image analysis might be important to FSD, I'd think.
But most of what I know about modern complex neural networks I learned from TMC...
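One common resolution of that trade-off is a multi-task network: a shared backbone processes the camera image once, and small per-task heads branch off it, so adding a task costs far less than running a second full network. A toy cost model comparing the two layouts — all the numbers are invented for illustration:

```python
# Toy cost comparison: two separate networks vs. one shared-backbone
# multi-task network. Costs are made-up unit counts, not real FLOPs.

BACKBONE_COST = 100   # e.g. feature extraction over the camera image
HEAD_COST = 10        # small task-specific output layers

def two_separate_networks(num_tasks):
    # Each task re-runs its own full backbone plus its own head.
    return num_tasks * (BACKBONE_COST + HEAD_COST)

def shared_backbone(num_tasks):
    # The backbone runs once; only the heads scale with the task count.
    return BACKBONE_COST + num_tasks * HEAD_COST

print(two_separate_networks(2))  # 220
print(shared_backbone(2))        # 120
```

This is why "which pieces to combine into one network" matters: tasks that share a backbone are cheap to add but must be trained (and branched) together, while fully separate networks are independent but each pays the full backbone cost.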
Looks like Tesla is adding pickup trucks to the driver display in an upcoming software update. Check out the little pickup truck in this pic:
Any idea what version? 32.1?
No. It might be 32.1 but I don't know.
What do you mean by the core of FSD? NN or procedural code or both?

There are basically two schools of thought around Tesla FSD:
- What we see now in production cars + EAP is quite current, and Tesla doesn't have any separate HW3.0 FSD NN/software.
- Tesla has separate HW3.0 FSD NN/software that we saw in the demo.
I'm referring to the navigational / motion planning component, and its ability to handle exceptions.
Whether it's procedural, NN, or a combination doesn't really matter. NNs are good at recognizing objects and feeding their interpretations into the next component, but deciding what to do in the face of imperfect detection and context is much harder.
Darn. I want that and the visualization that you can spin around with your fingers
I just want stationary vehicles.