We sure can. EyeQ4 is currently in production in multiple cars, delivering 2.5 TOPS at 3 watts. It processes a wider variety of NNs than anything Tesla has, with far better accuracy, on 8+ cameras at 36 frames per second.
There are some that might suggest that Mobileye is being slightly... disingenuous with their spec quoting:
https://www.eetimes.com/document.asp?doc_id=1332687
Best guess is that the EyeQ4 "platform" draws more than 3 watts, and that platform power is what Tesla is referring to on a comparative basis. They've given no real specifics here, so a meaningful comparison remains impossible at the hardware level. That said, compared to your typical ARM core, Nvidia's Xavier is a computing monster, and I doubt the EyeQ4 platform can come close to keeping up with it.
If you look at the block diagram for Xavier, the NN is only a small part of the overall package:
Tegra Xavier - Nvidia - WikiChip
So basically, if you consider only the NN portion of the die, then all three (Nvidia, Mobileye, and Tesla) are probably low-single-digit-watt parts, assuming a manufacturing node of 14 nm or smaller.
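To make the chip-vs-platform distinction concrete, here's a quick back-of-the-envelope perf-per-watt calculation. The EyeQ4 numbers are the ones quoted above; the Xavier and "platform" figures are my own placeholder assumptions, not measured values, so treat this purely as an illustration of why the baseline matters:

```python
# Rough TOPS-per-watt comparison. Only the EyeQ4 chip figures (2.5 TOPS,
# 3 W) come from the quote above; the Xavier and platform wattages below
# are illustrative guesses, NOT official or measured numbers.
specs = {
    "EyeQ4 chip (quoted)":      {"tops": 2.5,  "watts": 3.0},
    "EyeQ4 platform (guess)":   {"tops": 2.5,  "watts": 10.0},  # assumed
    "Xavier SoC (assumed)":     {"tops": 30.0, "watts": 30.0},  # assumed
}

for name, s in specs.items():
    efficiency = s["tops"] / s["watts"]
    print(f"{name}: {efficiency:.2f} TOPS/W")
```

The point isn't the exact numbers: it's that quoting chip-level power for your own part while citing platform-level power for a competitor's can swing the apparent efficiency by several times, which is exactly the "disingenuous spec quoting" complaint in the EETimes link above.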
From a software perspective, we know that Mobileye advertises more capability than Tesla does right now. That said, it does not appear to translate into an actual product that exceeds what Cruise is delivering today, and what Mobileye does deliver appears to depend more on high-resolution mapping than on actual image recognition. So even if the paper capabilities are better, the delivered products are simply... different, with neither meaningfully better than the other (right now).
Elon also indicated that AP 9.x does not currently fully utilize the compute resources of HW 2.0. The next iteration apparently will, but neither Elon nor Tesla has provided details on what exactly those expected capabilities are planned to be. It is great that some on this forum have provided insight based on third-party evaluations, but that is not the same as having the full picture presented by the company. For all we know, the "super NN" waiting in the wings will do everything that EyeQ5 will do, but there is no way to prove or disprove that based on actually available data (i.e., what was recently assessed is unlikely to even be the latest version that Tesla is working on).
Ultimately, I think it is too early to call victory (or failure) based on what we have high-confidence knowledge of. We know that Tesla can leverage existing hardware to do autonomous lane changes, but there are undefined edge cases that must be resolved (and no specifics on exactly what those are or how often they occur). It seems likely that further minor improvements are possible until version 3.0 of the hardware comes out. Right now, both companies are talking about what they eventually hope to get to, and debating speculation vs. speculation isn't something that will be settled any time soon.