Turing Lecture: Deep Learning for AI

Yoshua Bengio

Abstract:

This lecture will look back at some of the principles behind the recent successes of deep learning, acknowledge current limitations, and propose research directions that build on this progress toward human-level AI. It will discuss the notions of distributed representations, the curse of dimensionality, and compositionality with neural networks, along with fairly recent advances that have turned neural networks from pattern-recognition devices into systems that can process arbitrary data structures thanks to attention mechanisms, and that can imagine novel but plausible configurations of random variables through deep generative networks. At the same time, analyzing the mistakes made by these systems suggests that the dream of learning a hierarchy of representations that disentangles the underlying high-level concepts (of the kind we communicate with language) is far from achieved. This points to new research directions for deep learning, in particular from the agent perspective: grounded language learning, discovering causal variables and causal structure, and the ability to explore in an unsupervised way in order to understand the world and quickly adapt to changes in it.
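The abstract credits attention mechanisms with letting neural networks move beyond fixed-size pattern recognition to processing arbitrary data structures. As a concrete illustration (not drawn from the lecture itself), here is a minimal NumPy sketch of scaled dot-product attention, one standard formulation of the idea; the function names, shapes, and random inputs are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (n_queries, d)  query vectors
    K: (n_keys, d)     key vectors
    V: (n_keys, d_v)   value vectors
    Returns (n_queries, d_v): each output is a weighted average of the
    values, weighted by query-key similarity, so the network can attend
    to a variable number of elements rather than a fixed-size input.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # attention distribution over the keys
    return weights @ V                   # convex combination of the values

# Tiny usage example: 2 queries attending over 3 key/value pairs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Because the attention weights are computed from the content of the inputs rather than their positions, the same mechanism applies to sets, sequences, and graphs of varying size.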

Download the slides of Bengio's lecture