Agree. Very interesting and thought provoking.
I think he is being tongue-in-cheek because he is trying to walk a high wire: doing relevant reverse-engineering without revealing too much and raising the ire of Tesla legal or Musk.
One way to read it is plausible deniability: 'I was just dreaming.' There is also a tone of almost disbelief: they are further along than he thought, as if it were too good to be true.
His observations regarding all the settings and the metadata/annotation/'augmented reality' are interesting. Maybe it is not either-or: either classical programming, or ML/AI/neural nets. Maybe the ML builds up a really detailed world-view, perhaps more detailed than was previously thought to make sense.
What are the potential benefits? Well, first of all, the real-world understanding seems to be highly detailed. If it is also correct, then the ML layer generating it is very good. That high-level world understanding is then served up to a different software layer, which can drive the car, or illustrate it driving. (Some of the parameters seem game-like in that they offer the ability to fine-tune the rendering of the visual representation.)
Is the driving layer traditionally coded or ML? We don't know. Would it make sense to have two different ML layers, one for world-view generation and one for driving? Maybe it is legacy architecture, because the old driving software was (and maybe still is) traditionally coded.
Perhaps it still makes sense to keep this architecture so there are lots of hyper-parameters for tuning various things. The separation of concerns also allows for debugging and verification scenarios; for developers, this would mean observer modes, debugging, or meta-observer modes.
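To make the speculation concrete, here is a minimal sketch of that two-layer split: an ML "world-view" layer emitting a high-level scene description, and a separate driving layer that consumes only that description and exposes tunable hyper-parameters. Every name and number here is my own invention for illustration, not anything from Tesla's actual code.

```python
from dataclasses import dataclass

@dataclass
class WorldView:
    # High-level scene description emitted by the perception (ML) layer.
    speed_limit_kph: float
    pedestrians_nearby: int

@dataclass
class DrivingLayer:
    # Hyper-parameters a developer (or an observer mode) could tune.
    max_speed_over_limit_kph: float = 0.0  # the "aggressiveness" knob
    pedestrian_cap_kph: float = 30.0       # extra caution near pedestrians

    def plan_speed(self, world: WorldView) -> float:
        """Pick a target speed from the world-view alone."""
        target = world.speed_limit_kph + self.max_speed_over_limit_kph
        if world.pedestrians_nearby > 0:
            target = min(target, self.pedestrian_cap_kph)
        return target

view = WorldView(speed_limit_kph=50.0, pedestrians_nearby=0)
driver = DrivingLayer(max_speed_over_limit_kph=5.0)
print(driver.plan_speed(view))  # → 55.0
```

The point of the separation is visible even in this toy: the driving layer never touches raw sensor data, so it can be swapped out, replayed against logged world-views, or debugged independently of the perception stack.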
For the more visually inclined, think Westworld, late season 2, where Maeve has the vertigo-inducing sensation of observing herself thinking, and speaking, and thinking about thinking (and thinking about thinking about ..), and early season 3, where Bernard either hacks, self-diagnoses, or does a repository rollback to another version of himself, depending on what base layer of reality you initially assume.
(Disclaimer: I don't have any deep background in AI/ML)
I very briefly looked at it...just like I looked at the slider bars on the equalizers of high-end stereo systems back in the last century, before I either left them alone or pushed them all to max.
But thinking just a little bit...
First it made me think a person could go in and truly customize how the "car" behaves; they could give the "car" their personality. Then it made me think that perhaps the "car" itself will learn from the driver and reprogram itself to match the driver's inclinations. As an example, take people who press the accelerator when the car is being a bit timid for their tastes: the "car" learns to go faster sooner for that driver.
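That learning loop could be as simple as nudging a setting each time the driver overrides the car. The sketch below is purely my assumption of how such adaptation might work, with made-up names and rates, bounded by a ceiling so the "car" never drifts past a legal/safety limit:

```python
def update_aggressiveness(current: float, driver_override: bool,
                          rate: float = 0.05, ceiling: float = 1.0) -> float:
    """Drift the aggressiveness knob toward the driver's behavior.

    Each accelerator press while the car hesitates (driver_override=True)
    bumps the setting up by `rate`, never exceeding `ceiling`.
    """
    if driver_override:
        current = min(current + rate, ceiling)
    return current

setting = 0.5
for _ in range(3):  # driver overrides three times in a row
    setting = update_aggressiveness(setting, driver_override=True)
print(round(setting, 2))  # → 0.65
```

In a real system the update would presumably be far more nuanced (context-dependent, decaying, per-maneuver), but the shape is the same: observed driver behavior feeds back into the tunable parameters.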
Now a side point. The timid/aggressive scale has a range where the law is being followed AND the driver/passenger is happy with how the "car" behaves. It is within that range that the software needs to let the "driver" feel as though their "car" is doing exactly as it should, and Tesla is making it so.
Additionally, the "car" probably already has the ability to "drive like a maniac" while still being completely safe. However, that would make the "driver" and other drivers sharing the road uncomfortable (scare the SH!T out of them), because it is not how safe "drivers" drive. So the "car" doesn't drive with parameters that are optimal for either safety or speed; it must behave within the misperceptions of "safe" held by today's firmly entrenched luddites.
Second, I could see customized settings becoming available and/or enforced due to various outside variables. Let me use the example that one member incorrectly posted as an issue for FSD: school zones. To legally use FSD within certain jurisdictions, the "car" might be required to be incapable of going over a "safe" speed limit, AND to behave even more cautiously in such zones in relation to pedestrians.
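A jurisdiction-enforced setting like that would sit above any personal tuning: whatever speed the driver's preferences produce, the zone rule clamps it. A hedged illustration, with the zone limit and function names invented for the example:

```python
SCHOOL_ZONE_LIMIT_KPH = 25.0  # hypothetical jurisdiction-mandated cap

def enforce_zone_limits(target_speed_kph: float, in_school_zone: bool) -> float:
    """Clamp the planned speed when inside a regulated zone.

    Applied after the driver's personal settings, so no amount of
    "aggressiveness" tuning can override the legal requirement.
    """
    if in_school_zone:
        return min(target_speed_kph, SCHOOL_ZONE_LIMIT_KPH)
    return target_speed_kph

print(enforce_zone_limits(55.0, in_school_zone=True))   # → 25.0
print(enforce_zone_limits(55.0, in_school_zone=False))  # → 55.0
```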
Where this would be fun for "drivers" would be at race tracks like the Nurburgring. Not only could the software be tweaked manually, but the "car" could detect the location and offer a special setting for the track...and even go so far as to adjust the settings for every feature of the track in terms of geography and climate.
But in everyday FSD mode the "car" would just behave as the driver expects/wants, while also considering the expectations of other drivers, so the vehicle would not operate outside the perceived acceptable range.
Once humans come to accept the abilities of the "cars," the "cars" will eventually be allowed to drive in a safer, more efficient manner than the norm that the limitations of human drivers have established.