Your previous post states "His compiler and computer languages expertise made him perfect for Tesla's efforts to design their own chips", which is easy to misinterpret. Regardless, you don't need either of those when designing a new chip, unless you also want to design a new ISA, in which case you need someone like Jim more than someone like Chris. In reality, a new ISA is completely unnecessary: most companies with their own silicon (e.g. Apple with their Ax chips, Qualcomm's Snapdragon, Nvidia Tegra, Huawei HiSilicon...) license reference designs and/or the ISA from ARM, then amortize the costs over millions of units sold.
On the GPU side, you'd likewise pick NVidia's CUDA, AMD's Mantle, or OpenGL/Vulkan to address programmability. None of these require reinventing the wheel, and in fact, since the AP2 hardware is NVidia's Drive PX2, Tesla is implicitly committed to CUDA anyway. There's a wealth of CNN/DNN work leveraging CUDA (e.g. cuDNN), and most recent deep learning and computer vision projects run on NVidia gear, going back to AlexNet, which used two GTX 580s (trained for two weeks!) to do image recognition with ConvNets. Tesla could create their own chips, but it's a pointless capital sink when NVidia already builds kit like the Drive PX2 and its follow-on, the Xavier SoC.
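To make concrete what cuDNN accelerates for these ConvNets, here's a minimal sketch of the core operation, a 2D convolution (cross-correlation, as CNN frameworks define it). This is illustrative NumPy, not Tesla's or NVidia's actual code; the `conv2d` name and the identity-kernel example are my own:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution (cross-correlation, CNN convention).
    Libraries like cuDNN compute this same operation, but batched, across
    many channels, and massively parallelized on the GPU."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # Each output pixel is a dot product of the kernel with a patch
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Sanity check: an identity kernel recovers the 3x3 center of a 5x5 image
image = np.arange(25, dtype=float).reshape(5, 5)
identity = np.array([[0., 0., 0.],
                     [0., 1., 0.],
                     [0., 0., 0.]])
print(conv2d(image, identity))
```

The point of the GPU (and of cuDNN) is that every one of those output pixels, across every filter and every image in a batch, can be computed independently in parallel, which is why the hardware choice matters far more than inventing new chips or languages.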
In my opinion, this entire discussion about building compilers, languages, chips, etc. is beside the point. The hard problems of deep learning for computer vision in ADAS are a) accurate image recognition and discrimination, and b) delivering results to the car's controls in real time. Many things can already be recognized with great accuracy, given adequate training and enough inference time to convolve over the input, produce high-confidence choices, and then drive the car accordingly. For that, you need deep learning experts to train the cars accurately using the wealth of shadow-mode data collected from the fleet. That's where Karpathy comes in. Tesla should have hired someone like him, Yann LeCun, or another well-known member of the computer vision community much sooner.