Today we are announcing the availability of a major new software release for our customers: Poplar® SDK 1.1. This release adds significant new features and optimisations that make the Poplar SDK easier to use and improve performance by 10% or more for many applications.
What’s new in the Poplar SDK 1.1 release?
New features and optimisations include:
- First release of new PopVision™ Analysis Tool
- Packaged Poplar SDK Docker Containers
- Frameworks Support - Distributed TensorFlow now added
- Optimised Poplar library support for depthwise convolutions
- Profiling support for control flow
- Optimised low latency for financial applications
- Extended OS support
- Poplar Technical Documentation
New PopVision™ Analysis Tool
The PopVision Graph Analyser gives developers a deeper understanding of their applications’ performance and utilisation of the IPU.
Features include:
- Local and remote profile viewing
- Visual comparisons (diffs) of application profiles
- Windows, Linux and macOS front-end support
- Integrated documentation
Read our full overview of the PopVision Graph Analyser
Packaged Poplar SDK Docker Containers
The Poplar SDK 1.1 release includes packaged Poplar Docker containers, complete with the SDK and frameworks, so it's easier and faster to get up and running:
- Tools: contains only tools to interact with IPU devices
- Poplar: contains Poplar, PopART and tools to interact with IPU devices
- TensorFlow 1: contains everything in Poplar, with TensorFlow pre-installed
- TensorFlow 2: contains everything in Poplar, with TensorFlow 2 pre-installed
Optimised Poplar library support for Depthwise Convolutions
Depthwise convolutions are particularly useful in machine intelligence applications such as image classification, natural language processing and computer vision. The optimised support for depthwise convolutions in Poplar 1.1 includes:
- Up to 7x automatic throughput increase in grouped convolutions with small group dimensions
- Optimised Assembly kernels added to PopLibs and released as source code
- Automatic targeting from PopART and TensorFlow with no framework changes required (see the sketch below)
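To illustrate that last point, here is a minimal sketch of a standard Keras depthwise separable block; the model, shapes and layer sizes are purely hypothetical. Nothing IPU-specific is needed in the model code for a graph like this to pick up the optimised grouped-convolution kernels in PopLibs when it runs on an IPU.

```python
import tensorflow as tf

# A standard depthwise separable block, as used in MobileNet-style models.
# The input shape and channel counts are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same",
                                    activation="relu"),
    tf.keras.layers.Conv2D(64, kernel_size=1, activation="relu"),  # pointwise
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])
model.summary()
```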
Optimised Low Latency for Financial Applications
In Poplar 1.1, we have achieved additional host I/O performance improvements for latency-constrained use cases. The financial services industry provides many examples of latency-constrained applications where accelerated compute is essential.
Profiling Support for Control Flow
Many emerging machine learning models depend on dynamic control flow for training and inference. Poplar 1.1 extends profiling support to cover replicated graphs and graphs that contain control flow, including models that use gradient accumulation.
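As a rough illustration of what we mean by dynamic control flow, the sketch below uses plain TensorFlow constructs (tf.while_loop and tf.cond) whose behaviour depends on runtime values; the function and values are hypothetical and nothing in it is IPU-specific.

```python
import tensorflow as tf

@tf.function
def clipped_power(x, n):
    """Raise x to the n-th power with a data-dependent loop and branch."""
    i = tf.constant(0)
    result = tf.ones_like(x)

    # Dynamic loop: the trip count depends on the runtime value of n.
    _, result = tf.while_loop(lambda i, r: i < n,
                              lambda i, r: (i + 1, r * x),
                              [i, result])

    # Dynamic branch: clip only if the result has grown too large.
    return tf.cond(tf.reduce_max(result) > 1e6,
                   lambda: tf.clip_by_value(result, -1e6, 1e6),
                   lambda: result)

print(clipped_power(tf.constant([2.0, 3.0]), tf.constant(5)))
```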
Frameworks Support - Distributed TensorFlow now added
Poplar currently supports TensorFlow 1.x and 2.x, PyTorch, ONNX and Keras. Poplar 1.1 adds support for distributed TensorFlow, covering both 1.x and 2.x, with:
- Host-based reduction using the gRPC communication transport
- Ethernet / InfiniBand support
- IPUEstimator + IPUMultiWorkerStrategy implementation (sketched below)
- Examples provided
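Below is a minimal sketch of the multi-worker pattern this enables. The cluster addresses are hypothetical, and the import path shown for IPUMultiWorkerStrategy is an assumption; consult the SDK documentation for the exact module location and constructor arguments.

```python
import json
import os

import tensorflow as tf

# Hypothetical two-host cluster: variable updates are reduced on the host
# and exchanged between workers over gRPC. Replace with real worker hosts.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host0:3000", "host1:3000"]},
    "task": {"type": "worker", "index": 0},
})

# Assumption: IPUMultiWorkerStrategy is importable from the SDK's TensorFlow
# build along these lines; check the SDK documentation for the exact path.
from tensorflow.python.ipu import ipu_multi_worker_strategy

strategy = ipu_multi_worker_strategy.IPUMultiWorkerStrategy(
    tf.distribute.cluster_resolver.TFConfigClusterResolver())

with strategy.scope():
    # Variables created in this scope are coordinated across workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="sgd", loss="mse")
```

Launched once per host with the appropriate task index in TF_CONFIG, this follows the standard tf.distribute multi-worker pattern, with host-based reduction over gRPC as described above.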
Extended Operating System Support
For this major release, we have extended our operating system support. We already support Ubuntu 18.04 and have now added support for CentOS 7.6.
As many of our customers who run Red Hat exclusively will know, CentOS 7.6 is functionally compatible with its upstream source, Red Hat Enterprise Linux (RHEL) 7.6. Fully tested RHEL support is expected later this year.
Poplar Developer Documentation
Last month, we made our Poplar SDK documentation publicly available on our website in response to overwhelming interest from developers wanting to find out more about the Poplar® SDK and how easy it is to use. The documentation has been updated and extended with this new release.
Read our Developer Documentation
Programming with Poplar 1.1
We look forward to hearing feedback from the IPU developer community on this major new release, and we plan to deliver many more new features and optimisations throughout 2020 to enhance the Poplar development experience.
Learn more about Poplar
If you’re an advanced AI practitioner interested in creating your own machine intelligence models on an IPU, the option to program directly at the hardware level without sacrificing ease of use is an exciting prospect. We developed our Poplar software stack hand in hand with our IPU processor, and both are specifically designed for machine intelligence.
Read about Poplar
Check out our GitHub repositories