I found this interesting lil' tidbit on OpenAI's blog:

"We believe the largest training runs today employ hardware that cost in the single digit millions of dollars to purchase (although the amortized cost is much lower)."
AlphaGo is the most computationally intensive system that OpenAI lists. So, how much training compute would the new Tesla neural network require, relative to AlphaGo, to fully utilize all its parameters? 10x more? 100x more?
I think it's pretty reasonable to guess that Tesla would spend $100 million on compute hardware if it materially helped the development of Autopilot and higher levels of autonomy. At 10x OpenAI's estimate, the hardware would run $20-90 million, so 10x would be doable.
Whether 100x is also doable depends on a few factors: how long Tesla is willing to let a training run take, whether it's cheaper to own hardware or rent cloud cycles, and whether "single digit millions" means closer to $2 million or $9 million. A rough back-of-envelope calculation is sketched below.
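To make the arithmetic concrete, here's a minimal sketch in Python. The $100 million budget, the $2-9 million range, and the number of training runs the hardware survives before going obsolete are all assumptions on my part, not figures from OpenAI or Tesla.

```python
# Back-of-envelope: purchase cost of training hardware at various
# multiples of OpenAI's "single digit millions" estimate, compared
# with an assumed $100M Tesla budget. All inputs are guesses.

BUDGET = 100e6            # assumed Tesla hardware budget (USD)
LOW, HIGH = 2e6, 9e6      # range for "single digit millions" (USD)
RUNS_PER_LIFETIME = 4     # assumed training runs before the hardware is obsolete

for multiple in (10, 100):
    cost_low, cost_high = multiple * LOW, multiple * HIGH
    verdict = "within" if cost_high <= BUDGET else "over"
    print(f"{multiple:>3}x AlphaGo-class hardware: "
          f"${cost_low / 1e6:.0f}M-${cost_high / 1e6:.0f}M ({verdict} budget)")
    # Amortizing the purchase across several runs lowers the effective
    # per-run cost, which is one way 100x could still pencil out.
    print(f"     amortized per run: ${cost_low / RUNS_PER_LIFETIME / 1e6:.1f}M-"
          f"${cost_high / RUNS_PER_LIFETIME / 1e6:.1f}M")
```

At 10x, even the high end ($90M) fits the assumed budget outright. At 100x, the purchase price ($200-900M) blows past it, but if the hardware is reused across several training runs, the effective per-run cost can fall back toward the budget, which is exactly the amortization point OpenAI's quote hints at.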