
Graphcore claims its M2000 AI computer hits 1 petaflop


Graphcore, a U.K.-based company developing accelerators for AI workloads, this morning unveiled the second generation of its Intelligence Processing Units (IPUs), which will soon be made available in the company's M2000 IPU Machine. Graphcore claims this new GC200 chip will allow the M2000 to achieve a petaflop of processing power in an enclosure the width and length of a pizza box.

AI accelerators like the GC200 are a class of specialized hardware designed to speed up AI applications, particularly artificial neural networks, deep learning, and machine learning. They're typically multicore in design and focus on low-precision arithmetic or in-memory computing, both of which can boost the performance of large AI algorithms and lead to state-of-the-art results in natural language processing, computer vision, and other domains.

The M2000 is powered by four of the new 7-nanometer GC200 chips, each of which packs 1,472 processor cores (running 8,832 threads) and 59.4 billion transistors on a single die, and it delivers more than 8 times the processing performance of Graphcore's existing IPU products. In benchmark tests, the company claims the four-GC200 M2000 ran an image classification model — Google's EfficientNet B4, with 88 million parameters — more than 64 times faster than an Nvidia V100-based system and over 16 times faster than the latest 7-nanometer graphics card. A single GC200 can deliver up to 250 TFLOPS, or 250 trillion floating-point operations per second.
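The headline petaflop figure follows directly from those per-chip numbers. As a quick sanity check of the claim (assuming, as an idealization, that peak throughput simply adds across the four chips):

```python
# Sanity-check Graphcore's headline math: four GC200 chips,
# each rated at up to 250 TFLOPS, in a single M2000 enclosure.
# Assumes peak throughput adds linearly across chips (an idealization;
# real workloads rarely reach vendor peak figures).

TFLOPS_PER_GC200 = 250        # trillion floating-point ops/sec per chip
CHIPS_PER_M2000 = 4

m2000_tflops = TFLOPS_PER_GC200 * CHIPS_PER_M2000
m2000_petaflops = m2000_tflops / 1000   # 1 petaflop = 1,000 teraflops

print(m2000_petaflops)  # -> 1.0
```

At vendor-quoted peaks, four 250-TFLOPS chips do total exactly 1 petaflop per box.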


Above: The GC200.

Image Credit: Graphcore

Beyond the M2000, Graphcore says customers will be able to connect as many as 64,000 GC200 chips for up to 16 exaflops of computing power and petabytes of memory, supporting AI models with theoretically trillions of parameters. That's made possible by Graphcore's IPU-Fabric interconnection technology, which supports low-latency data transfers at rates of up to 2.8 Tbps and connects directly with IPU-based systems (or via Ethernet switches).
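The 16-exaflop ceiling is the same arithmetic applied at scale. A sketch of the claimed math (again assuming ideal linear scaling across the interconnect, which real distributed workloads won't achieve):

```python
# Check the claimed scale-out ceiling: 64,000 GC200 chips at up to
# 250 TFLOPS each. Assumes ideal linear scaling over the interconnect,
# an idealization used only to reproduce the vendor's arithmetic.

TFLOPS_PER_GC200 = 250
MAX_CHIPS = 64_000

total_tflops = TFLOPS_PER_GC200 * MAX_CHIPS
total_exaflops = total_tflops / 1_000_000   # 1 exaflop = 1,000,000 teraflops

print(total_exaflops)  # -> 16.0
```

So the "16 exaflops" figure is the straightforward product of 64,000 chips at peak rating, not a measured result.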

The GC200 and M2000 are designed to work with Graphcore's bespoke Poplar, a graph toolchain optimized for AI and machine learning. It integrates with Google's TensorFlow framework and the Open Neural Network Exchange (an ecosystem for interchangeable AI models), in the latter's case providing a full training runtime. Preliminary compatibility with Facebook's PyTorch arrived in Q4 2019, with full feature support following in early 2020. The latest version of Poplar — version 1.2 — introduced exchange memory management features intended to take advantage of the GC200's unique hardware and architectural design with respect to memory and data access.


Above: Graphcore’s M2000 IPU Machine.

Image Credit: Graphcore

Graphcore, which was founded in 2016 by Simon Knowles and Nigel Toon, has raised over $450 million to date from Robert Bosch Venture Capital, Samsung, Dell Technologies Capital, BMW, Microsoft, and AI luminaries including Arm cofounder Hermann Hauser and DeepMind cofounder Demis Hassabis, at a $1.95 billion valuation. Its first commercial product was a 16-nanometer PCI Express card — the C2 — that became available in 2018, and it's this package that launched on Microsoft Azure in November 2019. (Microsoft is also using Graphcore's products internally for various AI initiatives.)

Above: Graphcore GC011 rack.

Earlier this year, Graphcore announced the availability of the DSS8440 IPU Server in partnership with Dell and launched Cirrascale IPU-Bare Metal Cloud, an IPU-based managed service offering from cloud provider Cirrascale. More recently, Graphcore revealed some of its other early customers — among them Citadel Securities, Carmot Capital, the University of Oxford, J.P. Morgan, Lawrence Berkeley National Laboratory, and European search engine company Qwant — and open-sourced libraries on GitHub for building and executing apps on IPUs.

Above: Graphcore IPU benchmarks.

Graphcore might have momentum on its side, but it faces competition in a market that's expected to reach $91.18 billion by 2025. In March, Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, nabbed $60 million in venture capital. California-based Mythic has raised $85.2 million to develop its own custom in-memory architecture. Mountain View-based Flex Logix in April launched an inference coprocessor it claims delivers up to 10 times the throughput of existing silicon. And last November, Esperanto Technologies secured $58 million for its 7-nanometer AI chip technology.
