Graphcore Poplar SDK 2.4 software release

Dec 21, 2021

Poplar SDK 2.4 now available

Written By:

Laurence Herbert

We are pleased to announce the release of Poplar SDK 2.4 which is now available to download from our support portal and Docker Hub.
 
This is the culmination of a very successful year of software releases, optimisations and new ecosystem partnerships, demonstrating continued progress in software maturity, ease of use and scale-out capability, with high-performing benchmarks shown in our reference applications and MLPerf submissions.

New Features in Poplar SDK 2.4

The new release provides a host of improvements that further enhance ease of use and performance, helping developers run their machine learning models even faster, as well as code for new applications added to our public examples on GitHub.
 
  • New public examples, including ViT, UNet, GPT, RNN-T, FastSpeech2 and TGN
  • Compilation time optimisations
  • Dynamic configuration of gradient accumulation count at runtime in TensorFlow
  • IPU TensorFlow Addons package
  • PopRun/PopDist for Distributed TensorFlow 2 (Preview): easy-to-use, distributed multi-host scale-out support 
  • Overlapping I/O and compute for PopART & PyTorch
  • Enhanced IPU utilisation reporting for PopVision System Analyser
  • Full support for Debian 10.7
 
To learn more about these new features, see our SDK 2.4.0 Release Notes.

Enhancing the Developer Experience

Poplar SDK 2.4 provides numerous improvements to help you accelerate your AI applications with ease on IPU systems.
 
Our growing Model Garden has seen its most significant update yet, adding many more applications for AI practitioners and extending model coverage across ML domains including Computer Vision, NLP, Speech Processing and GNNs. These new models include Vision Transformer (ViT), UNet, GPT, RNN-T, FastSpeech2, Temporal Graph Networks (TGN) and more. These can be accessed directly from GitHub or via the Model Garden on the Graphcore Developer Portal.
 
Compilation time has been optimised to reduce iteration time when developing models, with improvements of up to 28%.

TensorFlow Features

In TensorFlow, the gradient accumulation count can now be specified at runtime for pipelined models. This means the global batch size can be defined dynamically, enabling more rapid experimentation when investigating or tuning this hyperparameter. Our new IPU TensorFlow Addons package includes IPU-specific Keras optimisers (for use with TensorFlow 2) developed by Graphcore's Applications team, including Adam, Stochastic Gradient Descent and LAMB.
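
As a rough illustration, the sketch below shows how these pieces might fit together in a TensorFlow 2 script. The ipu_tensorflow_addons module path, the LAMBIpuOptimizer class name and the set_gradient_accumulation_options method are assumptions based on the IPU Keras extensions, not verbatim from the SDK 2.4 documentation, so treat this as a sketch rather than a definitive recipe.

```python
# Sketch only: import paths and names below are assumptions and may
# differ between SDK versions.
import tensorflow as tf
from tensorflow.python import ipu

# IPU-specific Keras optimisers now live in the separate
# IPU TensorFlow Addons package (module path assumed).
from ipu_tensorflow_addons.keras import optimizers as ipu_optimizers

# Attach to a single IPU.
config = ipu.config.IPUConfig()
config.auto_select_ipus = 1
config.configure_ipu_system()

strategy = ipu.ipu_strategy.IPUStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    # LAMB optimiser from IPU TensorFlow Addons (class name assumed).
    optimizer = ipu_optimizers.LAMBIpuOptimizer(learning_rate=1e-3)
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  steps_per_execution=16)

    # The gradient accumulation count can now be chosen at run time,
    # effectively setting the global batch size per experiment
    # (method and argument names assumed).
    model.set_gradient_accumulation_options(
        gradient_accumulation_steps_per_replica=8)
```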

Overlapping I/O

We are also introducing the capability to overlap I/O and compute for the Poplar Advanced Runtime (PopART) and PyTorch frameworks, which can boost compute efficiency and significantly accelerate programs running on IPU hardware.
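
For a flavour of what this looks like from PyTorch, here is a minimal, hypothetical PopTorch configuration. The option name TensorLocations.numIOTiles is an assumption based on the PopTorch options API and may differ between SDK versions; the underlying idea is that reserving a set of I/O tiles lets data transfers proceed while compute runs on the remaining tiles.

```python
# Sketch only: option names are assumptions and may vary by SDK version.
import torch
import poptorch

opts = poptorch.Options()
opts.deviceIterations(16)            # stream several batches per host call
opts.TensorLocations.numIOTiles(32)  # reserve tiles so I/O overlaps compute

model = torch.nn.Linear(512, 512)
poptorch_model = poptorch.inferenceModel(model, opts)

# The first dimension covers all device iterations (16 x batch size 1 here).
out = poptorch_model(torch.randn(16, 512))
```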

PopVision Tools

Our PopVision analysis tools continue to give developers a deeper understanding of how their applications are performing: this release adds enhanced IPU utilisation reporting to the PopVision System Analyser. You can also download PopVision directly from our website.

Debian 10.7 Support

Following Poplar SDK 2.3’s preview support for Debian 10.7, this latest release includes full support for this operating system. 

Further Developer Resources

A new document for developers, the Memory and Performance Optimisation Guide, is now available on our documentation portal, providing detailed guidance on porting and optimising models for the IPU. New examples (TensorFlow 2 and PyTorch) are available demonstrating how to use PopRun/PopDist for distributed training.
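
To give a sense of how PopRun and PopDist fit together, the hypothetical snippet below shows the usual shape of a PopDist-aware TensorFlow 2 script. The popdist function names are assumptions based on Graphcore's public examples rather than verbatim from the SDK 2.4 documentation; PopRun launches one or more instances of the script across hosts, and PopDist sizes each instance's IPU configuration accordingly.

```python
# Sketch only: popdist API names are assumptions and may differ.
# A PopRun launcher command might look like:
#   poprun --num-instances 2 --num-replicas 8 python train.py
import popdist
import popdist.tensorflow
from tensorflow.python import ipu

config = ipu.config.IPUConfig()
if popdist.isPopdistEnvSet():
    # Running under poprun: let PopDist size this instance's IPU config.
    popdist.tensorflow.set_ipu_config(config, ipus_per_replica=1)
else:
    # Plain single-process run.
    config.auto_select_ipus = 1
config.configure_ipu_system()
```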
 
For access to all the latest documentation, tutorials, code examples, webinars, videos, research papers and further resources for IPU programming, check out our Developer Portal.