Edit: Still pretty impressive
Originally posted March 27, 2018 on AIXchange
I’d heard rumors for a while, but those rumors were confirmed last week: Google runs IBM Power Systems* in its production environment. This is from Forbes.com:
The biggest OpenPOWER Summit user news was that Google confirmed that it has deployed the “Zaius” platform into its data centers for production workloads. Google’s Maire Mahony, on stage at the event today said, we have “Zaius deployed in Google’s Data Center,” and we are “scaling up machine count.” She concluded by saying she considers the platform “Google Strong.” Mahony shared with me afterward that “Google Strong” refers to the reliability and robustness. Not to take away from the other deployments announced at the event, but this announcement is huge.
Mahony explained what Google likes about POWER9:
- More cores and threads for core Google search
- More memory bandwidth for RNN machine learning execution
- Faster and “more open” flash NAND sitting on OpenCAPI acceleration bus
I was told it was a simple recompile to get their code to run on POWER, but I'd still love to hear Google engineers talk about their actual use of POWER and how these systems perform compared to the other hardware in their data centers.
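To make the "simple recompile" point concrete, here's a minimal, hypothetical C sketch (not Google's code, just an illustration): portable code like this needs no source changes to move to POWER, only a rebuild, for example with GCC's -mcpu=power9 flag. The architecture macros are standard GCC predefines; everything else is made up for the example.

```c
/* A minimal sketch of the "simple recompile" idea: portable C needs no
 * source changes to run on POWER -- you just rebuild it for the target,
 * e.g.  gcc -O2 -mcpu=power9 hello.c   on a POWER9 box. */
#include <stdio.h>

int main(void)
{
#if defined(__powerpc64__)
    puts("Built for 64-bit POWER");   /* GCC predefine on ppc64/ppc64le */
#elif defined(__x86_64__)
    puts("Built for x86-64");
#else
    puts("Built for some other architecture");
#endif
    return 0;
}
```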
The Forbes article itself focuses more generally on POWER9 and news from the OpenPOWER Summit. The Motley Fool gets into more specifics:
Why, and for what, is Google using POWER9 processors? Google found that the performance of its web search algorithm, the heart and soul of the company, scaled well with both the number of cores and the number of threads available to it. IBM’s POWER9 processor is a many-core, many-thread beast. Variants of the chip range from 12 to 24 cores, with eight threads per core for the 12-core version and four threads per core for the 24-core version. Intel’s chips support only two threads per core via hyperthreading.
The bottom line is that IBM’s POWER9 chips are ideally suited for workloads that fully take advantage of the large number of threads available. Google’s web search is one such workload. They’re not well suited for workloads that don’t benefit from more threads, which is why the market-share ceiling for POWER isn’t all that high.
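As a rough illustration of a thread-hungry workload, here's a hedged C sketch (again, nothing to do with Google's actual search code) that spawns one worker per logical CPU. On a 12-core SMT8 or 24-core SMT4 POWER9 socket that works out to 96 workers either way, versus two per core on a hyperthreaded Intel part.

```c
/* A rough sketch of a thread-per-hardware-thread worker pool, the kind of
 * pattern that benefits from POWER9's high SMT counts (12 cores x SMT8 or
 * 24 cores x SMT4 -- 96 logical CPUs per socket either way).
 * Build with something like:  gcc -O2 -pthread workers.c  */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    /* Placeholder for per-thread work (e.g. scoring one shard of a query). */
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    long nthreads = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs online */
    pthread_t *tids = malloc(sizeof(pthread_t) * nthreads);
    if (tids == NULL || nthreads < 1)
        return 1;

    for (long i = 0; i < nthreads; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (long i = 0; i < nthreads; i++)
        pthread_join(tids[i], NULL);

    printf("ran %ld workers, one per logical CPU\n", nthreads);
    free(tids);
    return 0;
}
```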
Mahony also talked about the importance of bandwidth. It doesn’t matter how fast a processor is if it can’t move data fast enough. IBM claims that one of its POWER9-based systems can transfer data up to 9.5 times faster than an Intel-based system, using OpenCAPI and NVIDIA NVLink technology. That’s important for any kind of big data or artificial intelligence (AI) workload.
AI workloads are often accelerated by GPUs or other specialized hardware. Google developed its own accelerator, the Tensor Processing Unit, which it uses in its own data centers for AI tasks. But these accelerators still require a host processor that can move data fast enough.
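If you want a back-of-the-envelope feel for host data movement, a crude sketch like the one below times a plain memcpy of a 1 GiB buffer. It only measures ordinary DRAM copy speed, nothing OpenCAPI- or NVLink-specific, but it illustrates the kind of number an accelerator's host ultimately depends on.

```c
/* A crude sketch of measuring host memory copy bandwidth -- a stand-in for
 * the "can the host move data fast enough?" question. This approximates
 * plain DRAM copy speed only, not OpenCAPI or NVLink transfer rates. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t bytes = 1UL << 30;                 /* 1 GiB buffers */
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst)
        return 1;
    memset(src, 1, bytes);                    /* fault the pages in */
    memset(dst, 0, bytes);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, bytes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gib  = bytes / 1073741824.0;
    printf("copied %.1f GiB in %.3f s (~%.1f GiB/s)\n", gib, secs, gib / secs);

    free(src);
    free(dst);
    return 0;
}
```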
Obviously, readers of this blog (as well as the guy who writes it) already know and love POWER. But it's always nice to see some big-name enterprises get on board with POWER hardware.