Unlike the number-crunching alternatives, British startup Graphcore has developed a brain for computers that excels at guesswork.
By Austin Carr, 5 June 2019
Simon Knowles, chief technology officer of Graphcore Ltd., is smiling at a whiteboard as he maps out his vision for the future of machine learning. He uses a black marker to dot and diagram the nodes of the human brain: the parts that are “ruminative, that think deeply, that ponder.” His startup is trying to approximate these neurons and synapses in its next-generation computer processors, which the company is betting can “mechanize intelligence.”
Artificial intelligence is often thought of as complex software that mines vast datasets, but Knowles and his co-founder, Chief Executive Officer Nigel Toon, argue that more important obstacles still exist in the computers that run the software. The problem, they say, sitting in their airy offices in the British port city of Bristol, is that chips—known, depending on their function, as CPUs (central processing units) or GPUs (graphics processing units)—weren’t designed to “ponder” in any recognizably human way. Whereas human brains use intuition to simplify problems such as identifying an approaching friend, a computer might try to analyze every pixel of that person’s face, comparing it to a database of billions of images before attempting to say hello. That precision, which made sense when computers were primarily calculators, is massively inefficient for AI, burning huge quantities of energy to process all the relevant data.
When Knowles and the more business-minded Toon founded Graphcore in 2016, they put “less precise” computing at the heart of their chips, which they call intelligence processing units, or IPUs. “The concepts in your brain are quite vague. It’s really the aggregation of very approximate data points that causes you to have precise thoughts,” says Knowles, whose English accent and frequent chuckle invite comparisons to a Hogwarts headmaster. (Given his constant whiteboard pontificating, Toon jokingly addresses him as “Professor Knowles.”) There are various theories on why human intelligence forms this way, but for machine learning systems, which need to process huge and amorphous information structures known as “graphs,” building a chip that specializes in connecting nodelike data points may prove key in the evolution of AI. “We wanted to build a very high-performance computer that manipulates numbers very imprecisely,” Knowles says.
Put another way, Graphcore is developing a brain for computers that, if its co-founders are right, will be able to process information more like a human instead of faking it through massive feats of number crunching. “For decades, we’ve been telling machines what to do, step by step, but we’re not doing that anymore,” Toon says, describing how Graphcore’s chips instead teach machines how to learn. “This is like going back to the 1970s—we need to break out our wide lapels—when microprocessors were first coming out. We’re reinventing Intel.”
Investor Hermann Hauser, co-founder of Arm Holdings Plc, which controls the most widely used chip designs, is betting that Knowles and Toon’s IPUs will unleash the next wave of computing. “This has only happened three times in the history of computers,” Hauser says—CPUs in the 1970s, GPUs in the 1990s. “Graphcore is the third. Their chip is one of the great new architectures of the world.”
Graphcore’s origins lie in a series of symposiums Hauser organized in 2011 and 2012 at the University of Cambridge for the Royal Society, the scientific fellowship that counts Isaac Newton and Charles Darwin as alums. Around a posh dining room at King’s College, AI experts, neuroscientists, statisticians, and zoologists debated the impact advanced computing would have on society.
Days later, Young, one of the Cambridge academics from the dinners, emailed Knowles to say his students had investigated the matter and discovered they were using 64 bits of data per calculation. They realized they could perform the same function, as Knowles had suggested, with less precise arithmetic, using 8 bits. With less math to do, the computer could put the energy savings toward crunching more numbers; it’s roughly the equivalent of a human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood. “If we built a processor more attuned to this sort of work, we could increase the performance by a factor of a thousand,” Knowles says. Young and others were so impressed that Knowles and Toon decided they had to start Graphcore. They began raising capital to develop the idea as early as 2013 and revealed the company to the world in 2016.
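The precision trade Knowles proposed can be sketched in a few lines of generic Python. This is an illustration of the principle only, not Graphcore’s actual arithmetic: a crude linear quantization from 64-bit floats to 8-bit integers cuts the memory per value by a factor of eight while leaving the result of a dot product nearly unchanged.

```python
import numpy as np

# Illustrative only: the same dot product at 64-bit and 8-bit precision.
rng = np.random.default_rng(0)
weights = rng.normal(size=1000)   # float64: 8 bytes per value
inputs = rng.normal(size=1000)

exact = weights @ inputs          # full 64-bit arithmetic

# Crude linear quantization to int8: 1 byte per value.
scale_w = np.abs(weights).max() / 127
scale_x = np.abs(inputs).max() / 127
w8 = np.round(weights / scale_w).astype(np.int8)
x8 = np.round(inputs / scale_x).astype(np.int8)

# Accumulate in int32 to avoid overflow, then undo the scaling.
approx = (w8.astype(np.int32) @ x8.astype(np.int32)) * scale_w * scale_x

print(weights.nbytes // w8.nbytes)  # 8: one byte per value instead of eight
print(abs(exact - approx))          # tiny compared with the values being summed
```

The point isn’t that 8-bit answers are exact; it’s that for the approximate, aggregated quantities machine learning deals in, the small error is a cheap price for an eightfold cut in data moved and stored.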
The semiconductor industry is currently debating the sustainability of Moore’s law, an observation dating back to the 1960s that the number of transistors on a chip—and thus its price performance—will double about every two years. Graphcore’s leaders are more concerned with a related principle, called Dennard scaling, which held that as transistors shrank, a chip’s power density would stay constant. That principle no longer applies, and packing more transistors onto chips now makes them hotter and more energy-hungry. To mitigate this, some chipmakers design their products so they don’t use all their processing power at once—the unused areas of the chip are called “dark silicon”—and instead run only the parts necessary to support an application.
Knowles and Toon say the heat problem in particular will stop phones and laptops from getting much faster in the years ahead unless circuits can be radically redesigned for efficiency. “I was given a blank sheet of paper to start, which never happens in chip design,” says Daniel Wilkinson, who works on Graphcore’s chip architecture. The co-founders challenged their team of a few dozen engineers, mostly castoffs from their past startups, to design a chip that could harness all its processing horsepower at once while using less energy than a state-of-the-art GPU. One of the bigger energy stresses in silicon is moving and retrieving data, yet processors have historically been kept separate from memory. Transporting data back and forth between the two is “very energy expensive,” Knowles says. Graphcore set out to design what he calls a more “homogeneous structure” that “intermingles” a chip’s logic with memory, so it doesn’t have to expend as much power shuttling data to another piece of hardware.
Over three years, they simulated computer tests on hundreds of chip layouts, eventually settling on a design with 1,216 processor cores, which Knowles refers to as “lots of tiny islands of processors that split up energy resources.” The resulting IPU, first manufactured in 2018, is a sleek chip the size of a Wheat Thin that has almost 24 billion transistors and is able to access data for a fraction of the power of a GPU. “Each of these chips runs at 120 watts”—about the same as a bright incandescent lightbulb—“so about 0.8 of a volt and about 150 amps,” Toon says, standing in a messy electronics lab at the Bristol headquarters, sliding his thumb over an IPU’s mirrorlike finish.
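Toon’s figures are self-consistent: electrical power is voltage times current, so 0.8 volts at 150 amps gives exactly the quoted 120 watts per chip. As a trivial check:

```python
# Power (watts) = voltage (volts) x current (amps)
volts, amps = 0.8, 150
print(volts * amps)  # 120.0, matching the quoted per-chip figure
```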
To test the prototype, the team fed it a standard training dataset of millions of labeled images of common objects (fruits, animals, cars). An engineer then queried the IPU with a photo of his own cat, Zeus, and within an hour the computer not only identified it as a cat but correctly described Zeus’ coat. “The IPU was able to recognize it as a tabby,” Knowles says. Since that first test, the IPU has sped up and can now recognize more than 10,000 images per second. The goal is for the chip to be able to digest and ascertain far more complex data models, to the point the system would understand what a cat is on some more fundamental level. “We don’t tell the machine what to do; we just describe how it should learn and give it lots of examples and data—it doesn’t actually have to be supervised,” he says. “The machines are finding out what to do for themselves.”
On the fifth floor of Graphcore’s offices, hulking industrial air conditioners blast cool air into the company’s data server room and flap window shades to and fro, letting in some of Bristol’s unusual mid-May sunlight. As energy-efficient as the chips are, planted in servers stacked together in fridge-size casings, the machines still generate a hellish amount of heat. These IPU server racks are potent enough to perform 64 petaflops of computing, the processing equivalent of 183,000 iPhone Xs working simultaneously at max speed. Knowles and Toon nicknamed their IPU “Colossus,” after the world’s first electronic programmable computer, which the British government developed to crack encrypted messages from Germany during World War II.
Graphcore has raised $328 million from investors, including BMW, Microsoft, and Samsung, and was last valued in December at $1.7 billion. It declined to comment on specific applications for its chips, citing nondisclosure agreements, but given its investors, some seem obvious—self-driving cars, Siri-like voice assistants, and cloud server farms. But Knowles is most excited about humanity-altering applications, such as the impact IPUs could have on the complex analysis that scientists need for research in climate change and medicine.
To help big corporate customers figure out how to build the next-gen computers required to use the chips properly, Graphcore offers server blueprints and packages its products with free software tools. “We’ll give you the recipe for the computer design and then sell you the ingredients,” Toon says. IPUs rely on a concept known as parallel computing. The basic idea is that a program must be written for each processor, but as the processors built into chips proliferate—a large Graphcore installation includes about 5 million processor cores and is capable of running almost 30 million programs at once—the coding task has outgrown human authorship, and programming has to be automated for the processors to execute independently. In layman’s terms, Graphcore has sliced up mammoth computing undertakings into mini data problems, each handled separately on those “tiny islands of processors,” before they sync up like a Marine marching band to share what they’ve learned at the most efficient moment.
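The “islands then sync” pattern described above is, in broad strokes, bulk-synchronous parallelism: each worker computes independently on its own slice of the problem, then all results are combined at a single barrier. A toy sketch in Python (generic code, not Graphcore’s software; the function name and the four-island split are invented for illustration, and threads stand in for the chip’s cores):

```python
from concurrent.futures import ThreadPoolExecutor

def local_compute(chunk):
    """Each 'island' crunches its own slice of the data with no communication."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_islands=4):
    data = list(data)
    # Split the mammoth job into mini problems, one per island.
    chunks = [data[i::n_islands] for i in range(n_islands)]
    # Compute phase: islands work independently (threads here; the real
    # chip runs its 1,216 cores truly in parallel).
    with ThreadPoolExecutor(max_workers=n_islands) as pool:
        partials = list(pool.map(local_compute, chunks))
    # Sync phase: one barrier where every partial result is combined.
    return sum(partials)

print(parallel_sum_of_squares(range(1000)))  # 332833500, same as the serial answer
```

The appeal of the pattern is that the compute phase needs no coordination at all, so the expensive step—communication—happens only at the agreed sync points.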
Tobias Jahn, principal investor at BMW’s venture capital arm, envisions Graphcore chips in the automaker’s data centers and perhaps its cars. “BMW has an interest in Graphcore becoming a large-scale, worldwide silicon supplier,” Jahn says. The immediacy with which autonomous vehicles must execute so many critical tasks makes them a key market for something like an IPU, given the lag time that so often accompanies work in the cloud. Arm Holdings co-founder Hauser, now a partner at Amadeus Capital, estimates that each driverless car may need two IPUs. Graphcore says it’s on track to reach $50 million in revenue in 2019.
Big-name rivals are crowding into the field, too. Tesla Inc. recently applied for patents on its own AI chips. Google last year unveiled a class of microprocessors designed for machine learning. And Nvidia has been modifying its dominant GPU chip designs so they’re less precise and more efficient—more like Graphcore’s. “Everyone else is sort of knocking at Nvidia’s door,” says Alan Priestley, an analyst at researcher Gartner Inc. “Graphcore has a good position, but it’s still a very small competitor compared to Nvidia’s market presence. So although their IPUs may be better than Nvidia’s GPUs for these workloads, the risk they face is customers choosing ‘good enough’ over ‘brilliant.’ ”
Another significant challenge is the ethical dilemma IPUs present if, as promised, they enable machines to operate 100 times more powerfully than today’s computers. Toon and Knowles are wary of the dangers, particularly how such technology could be misused for weapons and authoritarian surveillance. Ultimately, though, they say governments will need to be the ones to set limits. “Machine power gave us airlines and cars,” Knowles says. “But it also gave us tanks. Society will have to determine the balance of good and evil over time.”
For now, Graphcore is focused on developing more software that will open customers’ eyes to the power of IPUs, while steering the business toward what the co-founders see as an eventual public offering. The company pops a bottle of Champagne for each major milestone, such as a $50 million funding round in late 2017 and a $10 million sales order in 2018. Signs of this growth are all around Graphcore’s office in the form of larger and larger empty bottles of bubbly. Knowles and Toon always start with Winston Churchill’s favorite brand of Pol Roger, which they say represents their pride that they might give Britain its first tech giant on the order of Apple or Alibaba Group Holding Ltd. “Start with Pol and end with Pol,” Knowles says, chuckling again while admiring a recently drained 9-liter Salmanazar of Champagne. “By the time you IPO, you pop the biggest bottle.”