At its inaugural Intel Innovation event today, Intel revealed that it’s collaborating with Google to develop the chipmaker’s first application-specific integrated circuit (ASIC)-based “infrastructure processing unit,” or IPU, called Mount Evans. Intel says that Mount Evans will be supported by an “industry-standard” programming language and an open-source Infrastructure Programmer Development Kit (IPDK) designed specifically to simplify access to the technology in Google Cloud datacenters.
IPUs and data processing units (DPUs) are gaining steam as cloud service providers look to offload certain networking, security, and storage functions from CPUs across clusters of systems. As cloud adoption has climbed, particularly during the pandemic, these tasks have steadily eaten into CPU resources within datacenters and the cloud. Research from Google and Facebook shows that infrastructure workloads consume anywhere from 22% to 80% of CPU cycles across a range of microservice workloads.
First announced this summer at Intel’s Architecture Day, Mount Evans can be paired with up to four Xeon CPUs and incorporates packet processing technology developed by Barefoot Networks, a networking startup Intel acquired in 2019. The IPU also packs up to sixteen Arm Neoverse N1 cores running at up to 3GHz, as well as I/O interfaces, caches, and three dual-mode LPDDR4 controllers delivering 102GB/s of memory bandwidth.
In a blog post, Patricia Kummrow, GM of Intel’s Ethernet division, explains that Mount Evans’ compute complex is tightly coupled with the network subsystem, allowing the accelerator cores to use the system-level cache as their own last-level cache.
“This ASIC supports many existing use cases including vSwitch offload, firewalls, [and] virtual routing,” as well as telemetry functions, while handling up to 200 million packets per second and up to 16 million secure connections, Kummrow wrote. “The Mount Evans IPU emulates NVMe devices at very high input and output operations per second (IOPS) rates by leveraging and extending the Intel Optane NVMe controller. The same Intel infrastructure OS that runs on FPGA-based IPUs will run on Mount Evans as well. Additional technology innovations in the Mount Evans IPU are a next-generation reliable transport protocol, co-innovated with [Google] to solve the long-tail latency problem on lossy networks, and our advanced crypto and compression accelerators.”
Brad Burres, an Intel fellow leading the company’s IPU architecture work, asserts that the advantage of devices like Mount Evans is their separation of functions between service providers and tenants. This offers greater security and isolation for all parties, he says, and supports use cases like bare-metal hosting by letting bare-metal hosts run on the exact same hardware platforms, using the same services, as virtual machines. Beyond this, DPUs and IPUs can enable new service models for storage by abstracting storage away from the tenant, Burres says.
“[Mount Evans] lets tenants have full control over their CPU. They can do things like run their own hypervisor. And in that case, the cloud operators still [have] full control of the infrastructure functions such as networking, storage, and security because those live out in the IPU,” Burres told HPCwire in August. “The IPU [also] provides [an] infrastructure-optimized execution environment. This includes a significant investment in hardware accelerators in the IPU, which enable the IPU to process the infrastructure tasks very efficiently. That allows better tuning of software and cores for these types of workloads. Overall, this optimizes the performance and the cloud operator can now rent out 100% [of the] CPU to his guest, which also maximizes revenues.”
Intel’s a bit late to the DPU/IPU party, with rivals like Nvidia and Marvell already several generations into their designs. In April, Nvidia announced the BlueField-3 DPU for AI and analytics workloads, which packs software-defined networking, storage, and cybersecurity acceleration capabilities — offering what Nvidia claims is the equivalent of up to 300 CPU cores of horsepower. Startups like Fungible and Speedata are also developing DPUs for datacenter-centric workloads, while Amazon Web Services several years ago launched the Nitro System, a combination of DPU hardware and software aimed at providing improved performance and security.
But according to Kummrow, the IPU is positioned to become a strategic element in Intel’s broader cloud hardware strategy. The opportunity is sizable: the global datacenter accelerator market could be worth $53 billion by 2027, up from just $4.2 billion in 2020, Valuates Reports projects — a compound annual growth rate of nearly 44%.
“We believe our leading IPU portfolio provides the common infrastructure foundation that allows our cloud customers to fully leverage their general-purpose compute, XPU, and acceleration resources in the heterogeneous data center architectures of the near future,” she wrote. “The blending of these capabilities perfectly matches the ongoing trends in microservices development and offers a unique opportunity for building optimized, function-based infrastructure that matches high-speed hardware networking components and common software frameworks. The IPU provides cloud service providers with an opportunity to rethink data center architecture, to accelerate the cloud, and to host more revenue-generating services — tenant apps running on virtual machines — on every server CPU in the datacenter.”