For more performance results, visit our Performance Results page
Experience the democratisation of AI: Bow Pod256 brings IPU-powered machine intelligence at supercomputer scale within reach, in the cloud or in your datacenter. Designed to accelerate large and demanding machine learning models, Bow Pod256 gives you the AI resources of a tech giant.
Slurm and Kubernetes support makes it simple to automate the deployment, scaling, and management of applications on Bow Pods, while Virtual-IPU™ technology provides secure multi-tenancy. Developers can build model replicas within a single Bow Pod or across several, and provision IPUs spanning many Bow Pods for very large models.
Bow Pod256 delivers world-class results, whether you want to explore innovative models and new possibilities, train faster, achieve higher throughput, or maximise performance per TCO dollar.
Processors | 256x Bow IPUs |
1U blade units | 64x Bow-2000 machines |
Memory | 230.4GB In-Processor-Memory™; up to 16.3TB Streaming Memory™ |
Performance | 89.6 petaFLOPS FP16.16; 22.4 petaFLOPS FP32 |
IPU Cores | 376,832 |
Threads | 2,260,992 |
Host-Link | 100 GE RoCEv2 |
Software | Poplar; TensorFlow, PyTorch, PyTorch Lightning, Keras, PaddlePaddle, Hugging Face, ONNX, HALO; OpenBMC, DMTF Redfish, IPMI over LAN, Prometheus, Grafana; Slurm, Kubernetes; OpenStack, VMware ESXi |
System Weight | 1,800kg (plus host servers and switches) |
System Dimensions | 64U (plus host servers and switches) |
Host Server | Selection of approved host servers from Graphcore partners |
Storage | Selection of approved systems from Graphcore partners |
Thermal | Air-cooled |
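The aggregate figures in the table scale linearly from per-IPU numbers. As a sanity-check sketch, the per-IPU values below (tile count, threads per tile, In-Processor-Memory, and peak FP16 throughput for a single Bow IPU) are assumptions not stated in the table itself, chosen so that multiplying by 256 reproduces the Pod-level specs:

```python
# Derive Bow Pod256 aggregate specs from assumed per-IPU Bow figures.
NUM_IPUS = 256              # IPUs in a Bow Pod256 (from the table)
TILES_PER_IPU = 1472        # assumed processor tiles (cores) per Bow IPU
THREADS_PER_TILE = 6        # assumed hardware threads per tile
MEM_PER_IPU_GB = 0.9        # assumed In-Processor-Memory per IPU, GB
FP16_PFLOPS_PER_IPU = 0.35  # assumed peak FP16.16 petaFLOPS per IPU

ipu_cores = NUM_IPUS * TILES_PER_IPU                # 376,832 cores
threads = ipu_cores * THREADS_PER_TILE              # 2,260,992 threads
memory_gb = round(NUM_IPUS * MEM_PER_IPU_GB, 1)     # 230.4 GB
fp16_pflops = round(NUM_IPUS * FP16_PFLOPS_PER_IPU, 1)  # 89.6 petaFLOPS

print(ipu_cores, threads, memory_gb, fp16_pflops)
```

Each derived value matches the corresponding row of the table above, which is consistent with the Pod being a straightforward 256-way aggregation of identical IPUs.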