At GTC 2024, Jensen just revealed the new Blackwell platform with an AI superchip with 208B transistors!!! WOW! Imagine what trillion-parameter-scale generative AI could achieve?
Historically, TFLOPS figures were quoted in FP64, so take the headline numbers with a grain of salt. They're big, but they're TF32, not even FP32.
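A rough way to see what TF32 gives up, as a sketch assuming only the published bit layout (TF32 keeps FP32's 8 exponent bits but only 10 of its 23 mantissa bits): truncating the low 13 mantissa bits of an FP32 value simulates the rounding tensor cores apply to TF32 inputs.

```python
import struct

def to_tf32(x: float) -> float:
    """Simulate TF32 input precision: keep FP32's 8 exponent bits,
    but only the top 10 of its 23 mantissa bits (truncation)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= 0xFFFFE000  # zero the low 13 mantissa bits (23 - 10)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

# Exactly representable values survive; others lose low-order digits.
print(to_tf32(1.0))
print(to_tf32(3.14159265))
```

Relative error lands around 2^-11 per value, roughly three decimal digits, which is fine for neural-net training but a long way from FP32, let alone FP64.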
Then there is the matter of keeping it busy all the time, which is programmatically difficult. You also have to use CUDA to get that level of performance in any sensible way.
Then there is the fact that not all applications are suited to be run massively parallel or use tensor cores.
You could keep an A100 mostly busy; people struggle to keep an H100 busy unless they're very specialist. That's ignoring the H200, never mind Blackwell.
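A back-of-envelope roofline calculation shows why keeping these chips busy is hard. The peak-TFLOPS and bandwidth numbers below are illustrative assumptions for an H100-class part, not official specs; the point is that a matmul must be large before it is compute-bound rather than memory-bound.

```python
# Roofline sketch: arithmetic intensity (FLOPs per byte moved) of an
# n x n matmul vs. the intensity needed to saturate the compute units.
PEAK_FLOPS = 494e12   # assumed dense TF32 peak, FLOP/s (illustrative)
MEM_BW = 3.35e12      # assumed HBM bandwidth, bytes/s (illustrative)

def matmul_intensity(n: int) -> float:
    flops = 2 * n**3                 # multiply-adds in C = A @ B
    bytes_moved = 3 * n * n * 4      # read A and B, write C, FP32
    return flops / bytes_moved       # simplifies to n / 6

needed = PEAK_FLOPS / MEM_BW  # intensity at the roofline ridge point

for n in (256, 1024, 4096):
    bound = "compute-bound" if matmul_intensity(n) > needed else "memory-bound"
    print(f"n={n}: intensity {matmul_intensity(n):.0f} "
          f"(need {needed:.0f}) -> {bound}")
```

Under these assumptions, anything much below a 1k x 1k matmul is stuck waiting on memory, and real workloads are full of small kernels, launch gaps, and serial sections, which is why "peak TFLOPS" rarely survives contact with an application.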
It’s not about stats in this game; it’s about usability and efficiency for the applications, and those are not just AI.
It’s all feeling a bit late 90s to me, with some useful truths and benefits and some crazy hype too.