AP1 / Mobileye's EyeQ3 undoubtedly had a lot of smart tricks up its sleeve (which, apparently, was enough to reach something like semi-level 2 autonomy), but it relied heavily on old-school computer vision. And on quite a bit of smoke and mirrors.
Around '14/'15, I think, both Tesla and Mobileye figured out that this wouldn't make a great foundation for level 3+, hence the shift to deep neural nets in later versions. The drawback was having to start virtually from scratch, which was painful but ultimately necessary.
Now, if you compare neural nets (AP2) against traditional algos (AP1), there's a pattern you can see throughout the industry right now:
The neural nets usually don't hold a candle to traditional algos until they reach a certain tipping point, but boy, once they do, they run circles around the traditional algos.
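To make the tipping-point idea concrete, here's a minimal sketch. All numbers are made up purely for illustration: the hand-tuned pipeline is modeled as roughly flat in accuracy regardless of data, while the learned model follows a saturating curve in training-set size. The function names and constants are my own assumptions, not anything from Tesla or Mobileye.

```python
import math

def handcrafted_accuracy(n_samples: int) -> float:
    """A classical, hand-tuned pipeline: barely improves with more data."""
    return 0.80

def learned_accuracy(n_samples: int) -> float:
    """A learned model: poor with little data, saturating near 0.95."""
    return 0.95 * (1 - math.exp(-n_samples / 50_000))

# Find the first dataset size (stepping by 10k samples) where the learned
# model overtakes the hand-tuned one -- the "tipping point".
tipping_point = next(
    n for n in range(0, 1_000_000, 10_000)
    if learned_accuracy(n) > handcrafted_accuracy(n)
)
print(tipping_point)  # with these toy curves: 100000
```

The exact crossover obviously depends entirely on the curves you assume; the point is just the shape: flat-ish vs. saturating, with the learned approach behind early and ahead late.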
I can pretty much guarantee you that Tesla's AP will hit this point within the next 6-24 months, which, by the way, aligns nicely with Mobileye's claim of reaching level 5 autonomy by 2020.
And one last thing: FSD is NOT some sort of holy grail of software engineering. (General AI is.) It's a software problem like many before it: first deemed impossible to achieve, then eventually solved, and less than 5 years later you'll be grabbing open-source FSD models from GitHub. That's how these things work. No magic sauce required.