For more performance results, visit our Performance Results page
Experience the democratization of AI: the arrival of IPU-POD256, in the cloud or in your datacenter, puts IPU-powered machine intelligence at supercomputer scale within reach. Designed to accelerate large and demanding machine learning models, IPU-POD256 gives you the AI resources of a tech giant.
Slurm and Kubernetes support makes it simple to automate application deployment, scaling, and management of IPU-PODs, while Virtual-IPU™ technology provides secure multi-tenancy. Developers can run model replicas within and across IPU-PODs, or provision IPUs spanning many IPU-PODs for very large models.
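As a rough illustration of the Slurm-based workflow, a job could request a slice of an IPU-POD with an ordinary batch script. The partition name, GRES label, and `train.py` below are hypothetical placeholders, not Graphcore-documented defaults; the exact resource names depend on how the cluster administrator configures Slurm.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for an IPU-POD partition.
# Partition and GRES names are illustrative assumptions.
#SBATCH --job-name=ipu-train
#SBATCH --partition=ipu-pod   # assumed partition for IPU-POD nodes
#SBATCH --gres=ipu:16         # assumed GRES label; request 16 IPUs
#SBATCH --time=04:00:00

srun python train.py
```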
IPU-POD256 delivers world-class results, whether you want to explore innovative models and new possibilities, cut time to train, raise throughput, or maximize performance per TCO dollar.
| Specification | Detail |
| --- | --- |
| IPUs | 256x GC200 IPUs |
| IPU-M2000s | 64x IPU-M2000s |
| Memory | 230.4GB In-Processor-Memory™ and up to 16,384GB Streaming Memory |
| Performance | 64 petaFLOPS FP16.16, 16 petaFLOPS FP32 |
| IPU Cores | 376,832 |
| Threads | 2,260,992 |
| IPU-Fabric | 2.8Tbps |
| Host-Link | 100 GE RoCEv2 |
| Software | Poplar SDK; TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO; OpenBMC, Redfish DMTF, IPMI over LAN, Prometheus, Grafana; Slurm, Kubernetes; OpenStack, VMware ESG |
| System Weight | 1,800kg + host servers and switches |
| System Dimensions | 64U + host servers and switches |
| Host Server | Selection of approved host servers from Graphcore partners |
| Thermal | Air-cooled |
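The headline totals in the table follow directly from Graphcore's published per-chip GC200 figures (1,472 cores with 6 hardware threads each, 900MB In-Processor-Memory, and 250 teraFLOPS peak FP16.16 per IPU). A quick sanity check:

```python
# Derive the IPU-POD256 totals from per-chip GC200 figures.
IPUS = 256
CORES_PER_IPU = 1_472        # processor tiles per GC200
THREADS_PER_CORE = 6         # hardware worker threads per tile
MEM_PER_IPU_GB = 0.9         # 900MB In-Processor-Memory per GC200
FP16_TFLOPS_PER_IPU = 250    # peak FP16.16 teraFLOPS per GC200

print(IPUS * CORES_PER_IPU)                     # 376832 cores
print(IPUS * CORES_PER_IPU * THREADS_PER_CORE)  # 2260992 threads
print(IPUS * MEM_PER_IPU_GB)                    # 230.4 GB In-Processor-Memory
print(IPUS * FP16_TFLOPS_PER_IPU / 1000)        # 64.0 petaFLOPS FP16.16
```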