Machine intelligence today is seeing many interesting new approaches emerge, from the growth of large models such as GPT-3 and BERT-Large to the convergence of traditional convolutional neural networks and attention models.
But if we want to unlock AI's full potential, to what extent do we instead need to focus on the fundamentals of intelligent algorithms?
On the most recent episode of Pieter Abbeel's The Robot Brains Podcast, Simon Knowles spoke about the underlying characteristics of AI computation, how machine learning algorithms resemble graphs, the importance of sparsity, and his predictions for the next big breakthroughs in machine intelligence.
Listen to the podcast