Thank you diplomat33, this is very helpful for my understanding of FSD. Re your second paragraph, this issue would seem to not bode well for Musk's vision for FSD, no pun intended.
Remember that people "hallucinate" as well. Optical illusions, tricks of light and shadow, etc. It's popular clickbait on social media. No vision system is perfect, so the key to getting FSD to be reliable is training, training and more training. Fortunately, Tesla has the data to do extensive training.
What follows is a lengthy description of how the problem of hallucinations is playing out, but the short version is that they seem to have vanished from the driving system, while they may remain in some other driver assistance features that use an older perception system known to sometimes hallucinate, such as in the graveyard video.
Earlier generations of Tesla's driver assists were divided into two major parts.
The first part was a neural network perception system that looked at the video coming from the cameras and came up with a description of the world around it. It was sufficiently detailed that Tesla could create a visualization of it. The perception system identified cars, pedestrians, lane lines and so on.
The second part was a set of hand-built rules written with traditional coding techniques that looked at the information coming from the perception system and decided what the car should do to drive around.
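That two-stage split can be sketched roughly like this. Everything here is illustrative (all names and numbers are made up, and this is not Tesla's actual code), but it shows why a hallucination in stage one flows straight into stage two's rules:

```python
# Hypothetical sketch of a two-stage driver-assist pipeline:
# a learned perception step that produces a description of the
# world, followed by hand-written driving rules. All names and
# thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "car", "pedestrian", "lane_line"
    distance_m: float  # distance from the car, in meters

def perceive(camera_frame) -> list[DetectedObject]:
    """Stand-in for the neural-network perception system.
    In a real system this would run inference on camera video;
    here the 'frame' is already a list of detections."""
    return camera_frame

def plan(objects: list[DetectedObject]) -> str:
    """Hand-built rules operating on the perception output."""
    for obj in objects:
        if obj.kind == "pedestrian" and obj.distance_m < 10.0:
            return "brake"
    return "continue"

# A hallucinated pedestrian in the perception output triggers
# hard braking even though nothing is actually there -- the
# "phantom braking" described above.
frame = [DetectedObject("pedestrian", 5.0)]  # hallucinated detection
print(plan(perceive(frame)))  # -> brake
```

The point of the sketch: the rules layer has no choice but to trust whatever the perception layer reports, so a perception hallucination becomes a driving action.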
The perception system is what you see in that video. It was known to hallucinate, and would even cause the car to sometimes "phantom brake", that is, brake hard for no apparent reason. It could scare the pants off drivers.
Now come forward to the latest generation of Tesla's driver assists, version 12. It is entirely neural networks: the system looks at the video coming in from the cameras and, through the magic of extensively trained neural networks, tells the car what to do. That system hasn't demonstrated any behaviors that suggest hallucinations. That may be because the system isn't obliged to come up with cars, pedestrians and such; it doesn't have to do anything but drive the car correctly. It may be that asking a neural network for too many details encourages it to fill in the blanks by making up stuff. ChatGPT is infamous for that.
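For contrast with the two-stage sketch, an end-to-end system collapses perception and planning into one learned mapping from pixels to a control output, with no intermediate object list that could contain a hallucinated pedestrian. Again, this is a toy illustration under my own assumptions, not Tesla's design:

```python
# Hypothetical sketch of an end-to-end driver assist: one trained
# function maps camera input directly to a driving command. There
# is no step where the system must name what it sees, so there is
# no object list to hallucinate into. Purely illustrative.

def drive_end_to_end(camera_frame: list[float]) -> str:
    """Stand-in for a single trained network: pixels in, control out.
    The 'learned behavior' here is faked with a trivial rule: treat
    high mean pixel intensity as 'something large and close'."""
    mean = sum(camera_frame) / len(camera_frame)
    return "brake" if mean > 0.5 else "continue"

print(drive_end_to_end([0.9, 0.8, 0.7]))  # -> brake
print(drive_end_to_end([0.1, 0.2, 0.1]))  # -> continue
```

The design difference is the interface: the two-stage system is judged on whether its world description is complete, while the end-to-end system is judged only on whether the driving output is correct, which may be why it has less incentive to "make things up".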
With the latest generation, the car still shows the same visualization that shows ghosts in graveyards. That's because even today, Tesla has kept the older perception system for that visualization (and perhaps other things); it's a neat toy and a great marketing tool. But the driving system is not using that visualization information. When the car shows ghosts in the graveyard, the driving software doesn't care, because it decides on its own what's in the graveyard.
One other possible source of confusion is that there are still driver assistance features that use the older perception software. For example, when using the latest parking assist, people have occasionally reported that the car will show a phantom person standing in the parking spot, and the parking assist will refuse to drive into them. Since the visualization is based on the old perception system, this suggests that the parking assist is, too.