10 October 2018

Highlights from the Google AI Residency Program

In 2016, we welcomed the inaugural class of the Google Brain Residency, a select group of 27 individuals participating in a 12-month program focused on jump-starting a career in machine learning and deep learning research. Since then, the program has grown rapidly, evolving into the Google AI Residency, which gives residents the opportunity to embed themselves within the broader group of Google AI teams working on machine learning research and its applications.
Some of our 2017 Google AI residents at the 2017 Neural Information Processing Systems Conference, hosted in Long Beach, California.
The second class of residents proved as remarkable as the first, publishing multiple works at top-tier machine learning, robotics, and healthcare conferences and journals. Publication topics include:
  • A study on the effect of adversarial examples on human visual perception.
  • An algorithm that enables robots to learn more safely by avoiding states from which they cannot reset.
  • Initialization methods that enable training of neural networks with unprecedented depths of 10K+ layers.
  • A method to make training more scalable by using larger mini-batches, which, when applied to ResNet-50 on ImageNet, reduced training time without compromising test accuracy.
  • And many more...
This experiment demonstrated (for the first time) the susceptibility of time-limited human vision to adversarial examples. For more details, see “Adversarial Examples that Fool both Computer Vision and Time-Limited Humans” (accepted at NIPS 2018).
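For readers curious how such adversarial examples are typically built, below is a minimal sketch of the fast gradient sign method (FGSM) on a toy linear classifier. This is a standard construction rather than the exact procedure from the paper, and the model, label, and epsilon value are all illustrative.

    import numpy as np

    # Toy linear classifier: logits = W @ x. The gradient of the loss with
    # respect to the input points in the direction that most increases the
    # loss, and FGSM takes a small step along its sign.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 784))     # hypothetical 10-class model, 28x28 input
    x = rng.uniform(size=784)          # a "clean" input image in [0, 1]
    y = 3                              # its true label

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def input_gradient(W, x, y):
        """Gradient of the cross-entropy loss with respect to the input x."""
        p = softmax(W @ x)
        p[y] -= 1.0                    # dLoss/dlogits for cross-entropy
        return W.T @ p                 # chain rule back to the input

    eps = 0.1                          # perturbation budget (hypothetical value)
    x_adv = np.clip(x + eps * np.sign(input_gradient(W, x, y)), 0.0, 1.0)
    print(np.argmax(W @ x), np.argmax(W @ x_adv))  # prediction before and after

The perturbation is bounded by eps in every pixel, yet it is aligned with the loss gradient, which is what makes such small changes so effective against models.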
An algorithm for safe reinforcement learning prevents robots from taking actions they cannot undo. For more details, see “Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning” (accepted at ICLR 2018).
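To make the idea concrete, here is a minimal sketch of the early-abort mechanism on a toy one-dimensional chain with a single irreversible state. The hand-coded reset values, threshold, and dynamics are hypothetical stand-ins for the learned reset policy and its Q-function in the paper.

    # States 0..5 on a chain; state 5 is absorbing and cannot be reset from.
    N_STATES, IRREVERSIBLE = 6, 5

    def reset_q(state):
        """Stand-in for the learned Q-value of the reset policy: the agent's
        estimated ability to return to the start state. Hand-coded here."""
        return 0.0 if state == IRREVERSIBLE else 1.0 - 0.1 * state

    threshold = 0.6                    # hypothetical early-abort threshold
    state, trajectory = 0, [0]
    for _ in range(10):
        proposed = min(state + 1, N_STATES - 1)  # forward policy: move right
        if reset_q(proposed) < threshold:
            state = max(state - 1, 0)  # early abort: run the reset policy
        else:
            state = proposed           # proposed state is still reversible
        trajectory.append(state)
    print(trajectory)                  # the agent hovers just below the cliff

Because the abort check runs before the forward action is committed, the agent never enters the irreversible state and so never needs a manual reset.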
Extremely deep CNNs can be trained without any special tricks, simply by using a specially designed (Delta-Orthogonal) initialization. Test (solid) and training (dashed) curves on MNIST (top) and CIFAR-10 (bottom). For more details, see “Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks” (accepted at ICML 2018).
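The initialization itself is simple to write down: the convolution kernel is zero at every spatial offset except the center, which holds an orthogonal matrix, so each layer starts out as a norm-preserving map. Below is a minimal NumPy sketch under that description; it assumes an (H, W, in, out) kernel layout and is an illustration, not the authors' released code.

    import numpy as np

    def delta_orthogonal(ksize, c_in, c_out, rng=np.random.default_rng(0)):
        """Zero kernel with a random orthogonal matrix at the spatial center."""
        assert c_out >= c_in, "orthogonal embedding needs c_out >= c_in"
        a = rng.normal(size=(c_out, c_out))
        q, r = np.linalg.qr(a)         # random orthogonal matrix via QR
        q *= np.sign(np.diag(r))       # sign fix so Q is uniformly distributed
        kernel = np.zeros((ksize, ksize, c_in, c_out))
        kernel[ksize // 2, ksize // 2] = q[:c_in, :]  # orthogonal center tap
        return kernel

    k = delta_orthogonal(3, 16, 16)
    # The center tap is orthogonal, so the layer preserves norms at init.
    print(k.shape, np.allclose(k[1, 1] @ k[1, 1].T, np.eye(16)))

At initialization the convolution therefore acts like an orthogonal 1x1 map, which is what keeps signals from exploding or vanishing across thousands of layers.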
Applying a sequence of simple scaling rules, we increase the SGD batch size and reduce the number of parameter updates required to train our model by an order of magnitude, without sacrificing test set accuracy. This enables us to dramatically reduce model training time. For more details, see “Don’t Decay the Learning Rate, Increase the Batch Size” (accepted at ICLR 2018).
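As a sketch of the scaling rule: wherever a conventional step-decay schedule would divide the learning rate by some factor, instead multiply the batch size by that factor and hold the learning rate fixed. The epochs and factors below are hypothetical, chosen to mirror a typical ResNet-style schedule.

    # Hypothetical schedule: divide the learning rate by 10 at epochs 30,
    # 60, and 80, or, following the scaling rule, keep the learning rate
    # fixed and multiply the batch size by 10 at those same epochs.
    base_lr, base_batch = 0.1, 256
    decay_epochs, factor = [30, 60, 80], 10

    def schedule(epoch, increase_batch=True):
        """Return (learning_rate, batch_size) at a given epoch."""
        n = sum(epoch >= e for e in decay_epochs)
        if increase_batch:
            return base_lr, base_batch * factor ** n   # scaling rule
        return base_lr / factor ** n, base_batch       # conventional decay

    for epoch in (0, 30, 60, 80):
        # In practice the batch size would be capped at hardware limits.
        print(epoch, schedule(epoch))

Because each parameter update now processes factor-times more examples, the total number of updates drops by the same factor, which is where the training-time savings come from.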
With the 2017 class of Google AI residents graduated and off to pursue the next exciting phase in their careers, their desks were quickly filled in June by the 2018 class. Furthermore, this new class is the first to be embedded in various teams across Google’s global offices, pursuing research in areas such as perception, algorithms and optimization, language, healthcare and much more. We look forward to seeing what they can accomplish and contribute to the broader research community!

If you are interested in joining the fourth class, applications for the 2019 Google AI Residency program are now open! Visit g.co/airesidency/apply for more information on how to apply. Also, check out g.co/airesidency to see more resident profiles, past resident publications, blog posts, and stories. We can’t wait to see where the next year will take us, and hope you’ll collaborate with our research teams across the world!
