
Seminar Report

Progress and Challenges of Deep Learning and AI

Professor Yoshua Bengio (University of Montreal, Canada)

Yoshua Bengio is Professor of Computer Science at the University of Montreal, a CIFAR Fellow, and a world expert in the field of Artificial Intelligence. Prof. Bengio is also Director of MILA, the Montreal Institute for Learning Algorithms, and Scientific Director of IVADO, the Institute for Data Valorization.
Prof. Bengio's first-ever talk in Japan addressed progress and challenges in the field of Deep Learning and, more generally, in AI. The lecture focused on the following three points:

1. Key elements contributing to the success of Deep Learning

It is important to start by clarifying what Deep Learning is good at and why it works so well. Fundamentally, while classical machine learning can only distinguish a number of input patterns roughly proportional to the size of the feature space, Deep Learning can model exponentially many patterns, thanks to distributed representations.
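To make the contrast concrete, here is a minimal numerical sketch (an illustration, not taken from the lecture): a "local" representation with n units, such as one cluster per region, can mark only n distinct regions of the input space, while n binary features of a distributed representation jointly identify up to 2^n regions.

```python
import itertools

# A "local" representation (e.g., one cluster or template per region)
# distinguishes only as many input regions as it has units.
n_units = 4
local_regions = n_units  # one region per unit

# A distributed representation of n binary features carves the input
# space into up to 2**n regions: every combination of feature values
# identifies a different region.
distributed_regions = len(list(itertools.product([0, 1], repeat=n_units)))

print(f"local: {local_regions} regions, distributed: {distributed_regions} regions")
# local: 4 regions, distributed: 16 regions
```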

While this requires large computational resources and large labeled datasets, Deep Learning has the advantage of generalizing well to new patterns not included in the training data. Deep Learning achieves this thanks to the principle of "compositionality", which draws on existing knowledge of the world and is encoded directly in the model architecture.

Compositionality is the principle by which complex concepts are composed from simpler individual components; this appears to be an almost universal property of nature and of naturally produced data. For instance, in human language, complex concepts are represented in sentences by composing simpler individual words. In natural images, a scene is composed of sub-parts containing a multitude of individual objects, and objects are in turn composed of parts. Convolutional Neural Networks successfully exploit this structure by processing images through a stack of local operations, the convolutions.
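As a minimal sketch of this idea (plain NumPy, not code from the talk), the snippet below stacks two local operations: the first layer detects edges, and the second composes those edge responses into a larger-scale pattern, mirroring how a CNN builds complex features from simple ones.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide a small local filter over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 8x8 "image" with a vertical edge down the middle.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Layer 1: a local vertical-edge detector, followed by a ReLU nonlinearity.
edge_kernel = np.array([[-1.0, 1.0]])
edges = np.maximum(conv2d(image, edge_kernel), 0.0)

# Layer 2: another local operation composes layer-1 edge responses into a
# larger-scale feature (a vertical run of edges, i.e., a line).
line_kernel = np.ones((3, 1))
lines = np.maximum(conv2d(edges, line_kernel), 0.0)

print(lines.max())  # strongest response where the composed pattern appears
```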

Thanks to fundamental breakthroughs deriving from compositionality, as well as many technical advances, large datasets and computational resources, Deep Learning has already had a major impact on several fields, such as speech recognition, computer vision and machine translation.

2. Applications and future challenges of Deep Learning and AI

Recently, the combination of Deep Learning and Reinforcement Learning has proven successful in the areas of strategy and planning as well as control.

For example, Google DeepMind's AlphaGo has defeated a world-champion Go player, a long-standing grand challenge for AI, and NVIDIA has demonstrated a self-driving car controller learned entirely from image data. What is distinctive about these systems is that the machine develops an internal understanding of the "meaning" of its inputs and reflects it in appropriate internal representations.
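The recipe underlying such systems can be illustrated with a deliberately tiny sketch: a softmax policy trained with the REINFORCE policy gradient on a two-armed bandit. This is an illustrative toy under simplifying assumptions (a lookup table of action preferences stands in for the neural network), not the actual method used by AlphaGo or NVIDIA.

```python
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.2, 0.8])  # arm 1 pays off more often
prefs = np.zeros(2)                 # learnable action preferences
lr = 0.1

for step in range(2000):
    probs = np.exp(prefs) / np.exp(prefs).sum()          # softmax policy
    action = rng.choice(2, p=probs)                      # sample an action
    reward = float(rng.random() < true_reward[action])   # stochastic reward
    grad = -probs
    grad[action] += 1.0            # gradient of log pi(action) w.r.t. prefs
    prefs += lr * reward * grad    # REINFORCE: reinforce rewarded actions

print(probs)  # the policy should strongly prefer the better arm
```

The same loop scales up to deep RL by replacing the preference table with a deep network and the bandit with an environment such as Go or a driving simulator.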

Naturally, one may conclude that the next AI grand challenge is to endow machines with human-level "common sense". Learning common sense comparable to that of humans is key to achieving general AI, and a capability generally considered out of reach for previous forms of AI. Initial results in this direction include caption generation for videos, which translates distributed representations into human language. This points to a path where improvements in unsupervised learning and in language understanding may one day allow the autonomous acquisition of common sense, at least under constrained conditions.

3. Critical requirements for successful corporate AI Research

Two essential factors are required to ensure the success of corporate AI research.

First, the structure of successful AI R&D organizations is rather peculiar: several successful approaches have emerged, including Google's acquisition of DeepMind, Facebook AI Research (FAIR) bootstrapping from university teams, and OpenAI's not-for-profit model.

A key similarity among all these operations is the focus on ambitious long-term missions and the fostering of stable research teams and open collaborations. To succeed in the competitive environment that AI is today, it is imperative to ensure stable operating structures over the medium to long term.

Next, to ensure the transfer of innovation into products, successful organizations foster vigorous interactions with operating business units, while avoiding short-term budgetary pressures and direct reporting structures. Rapid engineer-to-scientist interaction is the result of co-locating R&D and product groups in the same building, not of slow and cumbersome reporting structures.

The second factor relates to how research is conducted: world-class AI projects are carried out cooperatively by numerous research institutions, at both companies and universities, making open and early publication of code and results a fundamental factor. To protect ideas, early publication of research findings on arXiv and of source code on GitHub is a very cost-effective alternative to patents. Leaving precious research locked up in drawers is not in the interest of any company, because the technology will not be adopted and its development will stall. Active publication of information and source code is the main reason behind today's rapid progress in AI. No serious player in the AI sector can expect success without actively contributing to the open-source ecosystem.

Prof. Yoshua Bengio

Full Professor in the Department of Computer Science and Operations Research at the University of Montreal, and Head of the Montreal Institute for Learning Algorithms (MILA)

His main research ambition is to understand the principles of learning that yield intelligence. He teaches a graduate course in Machine Learning and supervises a large group of graduate students and post-docs.