Graphcore IPUs, built specifically for AI, are the ideal cloud compute for training, fine-tuning and deploying NLP applications quickly and efficiently.
Whether you are an AI SaaS company building NLP platforms such as intelligent chatbots, delivering real-time insights from customer service conversations, or an enterprise exploring more efficient Large Language Models (LLMs) like GPT, read on to find out more.
Applications with Natural Language Processing (NLP) models at their core are deployed by companies around the world to boost productivity, provide faster business insights, save money, increase security, reduce fraud, and improve customer retention and overall business competitiveness.
Use cases include intelligent chatbots, sentiment analysis in finance, real-time insights for customer service, content moderation in social networks, fraud detection in financial services, protein and genome analysis in drug discovery, content generation in marketing and translation, text summarisation of news and social media feeds, and much more.
Graphcore and Aleph Alpha are working together to research and deploy the next generation of multi-lingual Large Language Models (LLMs) on current and next-generation IPU systems. Applications include conversational platforms for more intelligent and efficient Q&A in chatbots, and advanced semantic search for knowledge management systems with an interface that more closely resembles asking questions of a human expert than entering keywords into a search engine.
Graphcore partner Pienso delivers a machine learning platform, based on IPU-powered NLP models, that helps enterprises understand text data better than ever before. Customer service teams use the low-code/no-code Pienso service to generate insight, inform strategy, and inspire action; investment firms use Pienso to monitor news and social feeds to inform investment strategies; and social media groups gain access to highly intelligent, easy-to-use content moderation tools.
Dolly 2.0 – The World’s First, Truly Open Instruction-Tuned LLM on IPUs – Inference
OpenAssistant Pythia 12B is an open-source, commercially usable chat-based assistant model trained on the OpenAssistant Conversations Dataset (OASST1).
Text entailment on IPUs with GPT-J 6B, fine-tuned in PyTorch.
GPT2-L training in PyTorch leveraging the Hugging Face Transformers library.
GPT2-L inference in PyTorch leveraging the Hugging Face Transformers library.
Hugging Face Optimum implementation for fine-tuning a BERT-Large transformer model.
SQuAD and MNLI on IPUs using DeBERTa with Hugging Face - Inference
Hugging Face Optimum implementation for fine-tuning RoBERTa-Base on the squad_v2 dataset for text generation and comprehension tasks.
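Several of the resources above build on the Hugging Face Transformers library. As a rough illustration of that API, here is a minimal CPU text-generation sketch; it uses the small `gpt2` checkpoint (not GPT2-L, purely to keep the example light), and running on IPUs would instead go through Graphcore's optimum-graphcore integration, which wraps the same interface.

```python
# Minimal sketch: text generation with the Hugging Face Transformers
# pipeline API. Runs on CPU with the small `gpt2` checkpoint; the IPU
# resources above use the same API via Graphcore's optimum-graphcore
# package (assumed environment: transformers installed, internet access
# to download the checkpoint).
from transformers import pipeline, set_seed

set_seed(0)  # make any sampling reproducible

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Natural language processing lets machines",
    max_new_tokens=20,       # generate up to 20 new tokens
    num_return_sequences=1,  # a single completion
)
print(out[0]["generated_text"])  # prompt followed by the continuation
```

Swapping `"gpt2"` for `"gpt2-large"` (or a fine-tuned checkpoint) changes only the model string; the surrounding code stays the same, which is what makes the Optimum-based IPU ports above largely drop-in.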