Pienso – maker of AI software designed to help non-developers extract insights from their business data – is now available running on Graphcore IPUs in both Europe and the US.
Pienso’s decision to use Graphcore compute on Gcore cloud as part of its commercial offering reflects the growing number of AI-as-a-Service businesses looking to take advantage of the IPU’s unrivalled speed, efficiency and cost.
Graphcore compute has enabled Pienso to dramatically accelerate the performance of its interactive AI platform, delivering faster customer insights, document intelligence and content moderation.
The company saw a 35x performance improvement using IPUs compared to A100 GPUs, at extremely low latency.
“Graphcore IPUs in the cloud deliver several important things for Pienso. The most important is a better user experience. If AI models take too long to return results, the user experience discourages use. To this end, we’ve developed efficiency techniques, enabled by the IPU, which dramatically increase performance for both training and inference. Then there’s the superior economics of IPU-based systems, in terms of performance-per-dollar, that we can pass on to end-users,” said Karthik Dinakar, Co-founder and CTO of Pienso.
“The user experience we deliver with Graphcore IPUs is not only fast, it’s computationally efficient, which makes critical real-time insights available for customers who operate in fast-moving environments but need AI to deliver ROI.”
Leading-edge LLMs, accessible to everyone
Pienso is breaking new ground by putting state-of-the-art large language models in the hands of users with little or no experience of coding or AI.
“Pienso users are subject-matter experts and decision makers. They are the people best positioned to direct AI toward specific uses, and to take action, based on the insights that it reveals,” said Karthik.
Using a simple visual interface, Pienso customers can develop bespoke language models, trained on their own data, while building on the capabilities of a variety of popular open-source models such as BERT.
Unlike some commercial LLM products, Pienso guarantees that user data will only be available to the customer that owns it, and will not be used to train other people’s models or enrich a subsequent foundation model.
Cloud native
Pienso’s service is available running on Graphcore IPUs on Gcore, using datacentres situated in mainland Europe – a requirement for customers who need to ensure data privacy and sovereignty.
Gcore’s reputation for delivering ultra-low-latency services made it the ideal choice to support Pienso’s platform.
The combination of the performance boost enabled by IPUs and the low cloud latency offered by Gcore allows Pienso to serve the growing number of customers requiring near real-time insights, such as customer contact centres looking to monitor emerging problems and potential opportunities among large volumes of inbound communication.
To try Pienso running on Graphcore IPUs, contact us.
Find out more about building and running an AI-as-a-Service platform on the Graphcore cloud technology stack.