2020 AI Research Highlights: Learning Frameworks (part 7)

2020 has been an exciting time for DL frameworks and AI stacks. We have seen further consolidation of frameworks into domain-specific platforms such as NVIDIA Omniverse and NVIDIA Clara. We have also seen better abstractions in the AI stack that help democratize AI and enable rapid prototyping and testing, such as PyTorch Lightning.

Below are some frameworks that my team at NVIDIA has been involved in building.

This is part of the blog series on 2020 research highlights. You can read other posts for research highlights on generalizable AI (part 1), handling distributional shifts (part 2), optimization for deep learning (part 3), AI4science (part 4), controllable generation (part 5), and learning and control (part 6).

Announcing TensorLy-Torch

TensorLy-Torch is a PyTorch-only library that builds on top of TensorLy and provides out-of-the-box tensor layers to replace matrix layers in any neural network; a short usage sketch follows the list below. Link

  • Tensorize all layers of a neural network: this includes factorized convolutions, fully-connected layers, and more!
  • Initialization: initializing tensor decompositions can be tricky, since the default parameters for matrix layers are not optimal. We provide good defaults for initialization in our tltorch.init module. Alternatively, you can initialize to fit a pretrained matrix layer.
  • Tensor hooks: you can easily augment your architectures with our built-in hooks. Robustify your network with Tensor Dropout. Automatically select the rank end-to-end with L1 Regularization.
  • Methods and model zoo: we are always adding more methods and models to make it easy to compare the performance of various deep tensor-based methods!
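As a quick illustration, here is a minimal sketch of swapping a dense layer for a factorized one. The names follow the tltorch API as documented around release; exact signatures may differ across versions, and the shapes below are arbitrary choices for the example.

```python
import torch
import tltorch  # TensorLy-Torch

# Tensorize a 256 -> 100 dense layer: the input is reshaped to (4, 8, 8)
# and the output to (4, 5, 5); a low-rank Tucker factorization then
# replaces the full 256 x 100 weight matrix.
fact_linear = tltorch.FactorizedLinear(
    in_tensorized_features=(4, 8, 8),
    out_tensorized_features=(4, 5, 5),
    factorization='tucker',
    rank=0.1,  # keep roughly 10% of the parameters
)

x = torch.randn(32, 256)  # a batch of 32 flat inputs
y = fact_linear(x)
print(y.shape)            # torch.Size([32, 100])
```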

Minkowski Engine

Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers, such as convolution, pooling, and broadcasting operations, for sparse tensors. It is well suited to 3D and higher-dimensional vision problems such as semantic segmentation, reconstruction, and detection; a minimal usage sketch follows the feature list below. Link

  • Unlimited high-dimensional sparse tensor support
  • All standard neural network layers (Convolution, Pooling, Broadcast, etc.)
  • Dynamic computation graph
  • Custom kernel shapes
  • Multi-GPU training
  • Multi-threaded kernel map
  • Multi-threaded compilation
  • Highly-optimized GPU kernels
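To give a flavor of the API, here is a minimal sparse 3D convolution using the library's public names (a sketch; consult the repository for version-specific details):

```python
import torch
import MinkowskiEngine as ME

# Two occupied sites in a 3D grid, each carrying a 3-channel feature.
coords = ME.utils.batched_coordinates(
    [torch.IntTensor([[0, 0, 0], [1, 1, 1]])])
feats = torch.rand(2, 3)
x = ME.SparseTensor(features=feats, coordinates=coords)

# A standard 3x3x3 convolution, evaluated only on occupied coordinates.
conv = ME.MinkowskiConvolution(in_channels=3, out_channels=8,
                               kernel_size=3, dimension=3)
y = conv(x)
print(y.features.shape)  # torch.Size([2, 8])
```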

End-to-end Reinforcement Learning on GPUs with NVIDIA Isaac Gym

We are excited about the preview release of Isaac Gym, NVIDIA's physics simulation environment for reinforcement learning research, which dramatically speeds up training. The environments are physically valid, allowing for efficient sim-to-real transfer, and include a robotic arm, legged robots, deformable objects, and humanoids. Blog

Stay tuned for more in 2021! Here’s looking forward to exciting developments in AI in the new year.

2020 AI Research Highlights: Learning and Control (part 6)

Embodied AI is the union of “mind” (AI) and “body” (robotics). To achieve this, we need robust learning methods that can be embedded into control systems with safety and stability guarantees. Many of our recent works are advancing these goals on both theoretical and practical fronts. 

This is part of the blog series on 2020 research highlights. You can read other posts for research highlights on generalizable AI (part 1), handling distributional shifts (part 2), optimization for deep learning (part 3), AI4science (part 4), and controllable generation (part 5).

Safe Exploration and Planning

My journey into this area of learning and control started with the neural lander. We used deep learning to learn the aerodynamic ground effects in drones, which improved landing speed without sacrificing stability requirements. In subsequent work, we aimed to automate the collection of drone data while staying safe.

Safe landing
Aggressive landing

We employed robust regression methods with guaranteed uncertainty bounds that guarantee safety even outside of the training domain. This allows the drone to progressively land faster while maintaining safety (i.e., not crashing). Our method trains a density-ratio estimator that accurately predicts the ability to maintain safety at higher speeds. It is based on the principle of adversarial risk minimization, which has also shown gains in sim-to-real generalization in computer vision (part 2).

We evaluated our method on a simulator built with data collected from real drones. Our method is superior to the popular Gaussian process (GP) approach to uncertainty quantification and leads to faster exploration while maintaining safety. This is because GPs are brittle in high dimensions due to poor choices of kernels/priors.

The ability to explore safely can now be combined with downstream trajectory-planning methods in control. We propagate the uncertainty bounds from robust regression and pose them as chance constraints for the planner. Thus, we can compute a pool of safe and information-rich trajectories; a toy sketch follows the figure below.

Learning methods with accurate uncertainty bounds enable safe trajectory planning
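As a toy illustration of the chance-constraint idea: given a regressor that returns a prediction together with a guaranteed error bound, a candidate trajectory is kept only if its worst-case prediction respects the safety limit at every step. Everything below is a hypothetical stand-in for the actual models.

```python
import numpy as np

def is_safe(trajectory, predict_with_bound, limit):
    """Keep a trajectory only if the predicted quantity plus its
    uncertainty bound stays below the safety limit at every step."""
    for state in trajectory:
        mean, bound = predict_with_bound(state)
        if mean + bound > limit:  # worst case violates the constraint
            return False
    return True

# Toy usage: a 1-D "ground effect" predictor with a constant bound.
rng = np.random.default_rng(0)
predictor = lambda s: (0.1 * s**2, 0.2)            # hypothetical model
candidates = [rng.uniform(0, 2, size=10) for _ in range(100)]
safe_pool = [t for t in candidates if is_safe(t, predictor, limit=0.5)]
print(f"{len(safe_pool)} of {len(candidates)} candidate trajectories are safe")
```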

We applied this episodic learning framework to a robotic spacecraft model to explore the state space and learn the friction under collision constraints. Using robust regression models, we show a significant reduction in the variance of the learned model's predictions and in the number of collisions.

Reinforcement learning in control systems

Analyzing RL in control systems is challenging for several reasons: (1) the state and action spaces are continuous; (2) there are safety and stability requirements; and (3) the system is only partially observable.

A canonical setting is the linear quadratic Gaussian (LQG) problem: the dynamics are linear, and observations are linear transformations of the hidden state corrupted by Gaussian noise. LQG appears deceptively simple but is notoriously challenging to analyze.

Previous methods focused on open-loop control, which uses random excitation (i.e., random actions) to collect measurements for model estimation. However, this yields a regret of order T^0.66, where T is the number of time steps, which is not optimal. Paper
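To make the open-loop phase concrete, here is the textbook recipe in the simpler, fully observed case: excite a linear system with random inputs and recover (A, B) by least squares. This is a standard estimator for illustration, not the paper's partially observed setting.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # true (unknown) dynamics
B = np.array([[0.0], [1.0]])

# Roll out with random excitation: x_{t+1} = A x_t + B u_t + noise.
T, x = 500, np.zeros(2)
X, U, X_next = [], [], []
for _ in range(T):
    u = rng.normal(size=1)
    x_next = A @ x + B @ u + 0.01 * rng.normal(size=2)
    X.append(x); U.append(u); X_next.append(x_next)
    x = x_next

# Least squares: stack [x_t, u_t] and regress x_{t+1} on it.
Z = np.hstack([np.array(X), np.array(U)])
theta, *_ = np.linalg.lstsq(Z, np.array(X_next), rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T
print(np.round(A_hat, 2))  # close to A
print(np.round(B_hat, 2))  # close to B
```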

Our method is the first closed-loop RL method with guaranteed regret bounds. In closed-loop control, the past measurements are all correlated with the control actions, which makes it challenging to estimate the model parameters. We utilize tools from classical control theory (the predictor form) to guarantee consistent estimation of the model parameters. This yields an improved regret bound of order T^0.5. Paper

Surprisingly, we can do even better in terms of the regret bound. We showed that combining online learning with episodic updates leads to logarithmic regret. Intuitively, we decouple adaptive learning of the model parameters (episodic updates) from online learning of the control policy. This combination allows us to achieve fast learning (with low regret) in closed-loop control. Paper

2020 AI Research Highlights: Controllable Generation (part 5)

Generative models have greatly advanced over the last few years. We can now generate images and text that pass a casual Turing test: at first glance, they look remarkably realistic.

A major unsolved challenge is controlling the generative process. We would like to specify attributes or style codes for image generation, and to shape the narrative of text generation. We have made progress on both goals and describe it in this post.

You can read previous posts for other research highlights: generalizable AI (part 1), handling distributional shifts (part 2), optimization for deep learning (part 3), AI4science (part 4), learning and control (part 6), learning framework (part 7).

Controllable Text Generation: Megatron-CTRL

Large pretrained language models like GPT can generate long paragraphs of text. However, these models are largely uncontrollable and make mistakes such as common-sense errors, repetition, and inconsistency. In a recent paper, we add the ability to dynamically control text generation using keywords, and we incorporate an external knowledge base. Our framework consists of a keyword generator, a knowledge retriever, a contextual knowledge ranker, and a conditional text generator. Results show that our model generates more fluent, consistent, and coherent stories with less repetition and higher diversity. Paper Slides
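Structurally, generation interleaves the four components in a loop, roughly as sketched below. Every function here is a trivial toy stand-in for the corresponding learned model; the names are hypothetical and this is not the released code.

```python
# Toy stand-ins for the four learned components of the pipeline.
def keyword_generator(story):
    return [story.split()[-1]]                # echo the last word

def knowledge_retriever(keywords, kb):
    return [f for f in kb if any(k in f for k in keywords)]

def knowledge_ranker(story, candidates):
    fresh = [c for c in candidates if c not in story]  # avoid repeats
    return fresh[:1]

def conditional_generator(story, keywords, facts):
    return " " + (facts[0] if facts else "And so it went.")

def generate_story(context, kb, steps=3):
    story = context
    for _ in range(steps):
        keywords = keyword_generator(story)          # what to say next
        candidates = knowledge_retriever(keywords, kb)
        facts = knowledge_ranker(story, candidates)  # contextually relevant
        story += conditional_generator(story, keywords, facts)
    return story

kb = ["dragons hoard gold", "gold attracts knights", "knights ride dragons"]
print(generate_story("Once there were dragons", kb))
```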

Disentanglement Learning in StyleGAN

Disentanglement learning is crucial for obtaining disentangled representations and controllable generation. Current disentanglement methods face several inherent limitations: they struggle with high-resolution images, they focus primarily on learning disentangled representations rather than on controllability of the generator, and they suffer from non-identifiability due to the unsupervised setting. To alleviate these limitations, we design new architectures and loss functions based on StyleGAN (Karras et al., 2019) for semi-supervised, high-resolution disentanglement learning.

We create two complex high-resolution synthetic datasets for systematic testing. We investigate the impact of limited supervision and find that using only 0.25-2.5% of labeled data is sufficient for good disentanglement on both synthetic and real datasets.

We propose new metrics to quantify generator controllability, and observe there may exist a crucial trade-off between disentangled representation learning and controllable generation. We also consider semantic fine-grained image editing to achieve better generalization to unseen images. Project page
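One way to picture the semi-supervised signal: on the small labeled subset, a regression loss ties designated latent coordinates to ground-truth attributes, on top of the usual GAN objective. The toy sketch below is illustrative only and is not the paper's architecture or losses.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(gan_loss, z, attrs, labeled_mask, weight=1.0):
    """gan_loss: the usual adversarial term (scalar tensor).
    z: (batch, dim) latent codes; the first k coordinates are reserved
       for the k attributes we want to control.
    attrs: (batch, k) ground-truth attributes (valid only where labeled).
    labeled_mask: (batch,) bool; True for the few labeled samples."""
    k = attrs.shape[1]
    if labeled_mask.any():
        sup = F.mse_loss(z[labeled_mask, :k], attrs[labeled_mask])
    else:
        sup = z.new_zeros(())
    return gan_loss + weight * sup

# Toy usage: one labeled sample out of a batch of 8 (~12% here; the
# paper finds far less suffices at dataset scale).
z = torch.randn(8, 64, requires_grad=True)
attrs = torch.rand(8, 3)
mask = torch.zeros(8, dtype=torch.bool)
mask[0] = True
loss = semi_supervised_loss(torch.tensor(0.7), z, attrs, mask)
loss.backward()
```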

There is still more to come!

2020 AI Research Highlights: AI4Science (part 4)

2020 has been a landmark year for AI4science. I have had the privilege to work with some of the world’s best experts in a number of challenging scientific domains. 

You can read previous posts for other research highlights: generalizable AI (part 1), handling distributional shifts (part 2), optimization for deep learning (part 3), controllable generation (part 5), learning and control (part 6), learning framework (part 7).

The Fourier neural operator solves complex PDEs such as turbulent fluid flows, and OrbNet accelerates quantum chemistry calculations, showing 1000x speedups over traditional solvers while maintaining fidelity.

AI4PDE

One of the exciting breakthroughs of 2020 is the Fourier neural operator. Neural operators learn mappings from a problem specification (e.g., the initial and boundary conditions of a PDE) to the solution operator in infinite-dimensional spaces. This means there is no dependence on the resolution or grid of sample points, which allows a neural operator to do zero-shot super-resolution, i.e., to be evaluated at a higher resolution and at arbitrary points compared to the training data. No previous deep learning approach to solving PDEs has this capability.

We show that our method can solve the Navier-Stokes PDE in the turbulent regime: the first such result for a deep learning system. A sketch of the core spectral layer follows the figure below. Blog

Fourier neural operator
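The heart of the method is the spectral convolution: take an FFT, apply a learned linear transform to the lowest Fourier modes, and transform back. Here is a minimal 1-D sketch of that layer; the full model stacks several of these with pointwise transforms and works in higher dimensions.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Minimal 1-D spectral convolution in the spirit of the Fourier
    neural operator: learn weights only on the lowest `modes` frequencies."""

    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):          # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)   # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        # Mix channels on the retained low-frequency modes only.
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

# The same layer runs at any resolution: the basis for zero-shot
# super-resolution.
layer = SpectralConv1d(channels=4, modes=8)
print(layer(torch.randn(2, 4, 64)).shape)   # torch.Size([2, 4, 64])
print(layer(torch.randn(2, 4, 256)).shape)  # torch.Size([2, 4, 256])
```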

In a related paper, we proposed an alternative framework for solving large-scale fluid flow problems. MeshfreeFlowNet performs physically valid super-resolution of fluid dynamics at scale (experiments were run on the Cori cluster with 128 V100 GPUs). Project page

MeshfreeFlowNet

AI4QuantumChemistry

Quantum chemistry is the study of chemical properties and processes at the quantum scale. It has been pivotal for research and discovery in modern chemistry. However, as powerful as quantum chemistry has shown itself to be, it has a big drawback: accurate calculations are resource-intensive and time-consuming, with routine chemical studies involving computations that take days or longer.

We developed OrbNet: a deep learning-based calculator of quantum properties that preserves the fidelity of traditional solvers while obtaining 1000x speedups. OrbNet combines domain-specific knowledge (molecular orbitals) with the flexibility of deep learning (graph neural networks). This hybrid model transfers to much larger molecules (more than 10x larger) than those used for training. We also show that OrbNet provides powerful representations of molecular properties and can be used directly to predict them; a schematic sketch follows the figure below. News article

Molecular Orbitals
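Schematically, the input is a graph built from orbital features and the output is a molecular property. A generic message-passing readout of that shape is sketched below; it is a toy stand-in for illustration, not OrbNet's actual architecture or features.

```python
import torch
import torch.nn as nn

class TinyMolecularGNN(nn.Module):
    """Toy message-passing network: nodes carry orbital-like features and
    the readout sums node states into a single scalar property."""

    def __init__(self, node_dim, hidden=32, rounds=3):
        super().__init__()
        self.embed = nn.Linear(node_dim, hidden)
        self.msg = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(hidden, 1)
        self.rounds = rounds

    def forward(self, nodes, edges):
        # nodes: (n, node_dim); edges: (m, 2) index pairs (src, dst).
        h = torch.relu(self.embed(nodes))
        src, dst = edges[:, 0], edges[:, 1]
        for _ in range(self.rounds):
            m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
            h = h + torch.zeros_like(h).index_add_(0, dst, m)  # residual
        return self.out(h).sum()  # permutation-invariant readout

gnn = TinyMolecularGNN(node_dim=8)
nodes = torch.randn(5, 8)  # 5 "orbitals" with 8 features each
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])
print(gnn(nodes, edges))   # one scalar "energy"
```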

Stay tuned for more!

2020 AI Research Highlights: Optimization for Deep Learning (part 3)

In this post, I will focus on the new optimization methods we proposed in 2020.  Simple gradient-based methods such as SGD and Adam remain the “workhorses” for training standard neural networks. However, we find many instances where more sophisticated and principled approaches beat these baselines and show promising results. 

You can read previous posts for other research highlights: generalizable AI (part 1), handling distributional shifts (part 2), AI4science (part 4), controllable generation (part 5), learning and control (part 6), learning framework (part 7).

Low-Precision Training

Employing standard optimization techniques such as SGD and Adam for training in low-precision systems leads to severe degradation as the bit width is reduced. Instead, we propose a co-design framework in which we jointly design the bit representation and the optimization algorithm.

We draw inspiration from how our own brains represent information: there is strong evidence that the brain uses a logarithmic number system, which can efficiently handle a large dynamic range even at low bit width. We propose a new optimization method, Madam, for optimizing directly in the logarithmic number system. It is a multiplicative-weight-update version of the popular Adam method. We show that it obtains state-of-the-art performance in low-bit-width training, often without any learning-rate tuning; a schematic version of the update follows the figure below. Thus, Madam can directly train compressed neural networks whose weights are efficiently represented in a logarithmic number system. Paper

Madam preserves performance even under low bitwidth
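In spirit, Madam replaces Adam's additive step with a multiplicative one, which amounts to an additive step on log |w| and is therefore natural for a logarithmic number system. The sketch below is schematic; the normalization differs from the paper's exact rule.

```python
import torch

@torch.no_grad()
def multiplicative_step(params, lr=0.01, eps=1e-8):
    """Schematic Madam-style update: scale each weight by
    exp(-lr * normalized grad * sign(weight)), i.e. an additive update
    on log|w|, which suits log-number-system hardware."""
    for p in params:
        if p.grad is None:
            continue
        g_norm = p.grad / (p.grad.std() + eps)  # crude normalization
        p.mul_(torch.exp(-lr * g_norm * torch.sign(p)))

# Toy usage: one step on a quadratic loss pulls |w| toward zero.
w = torch.tensor([1.0, -2.0, 0.5], requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
multiplicative_step([w])
print(w)
```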

Competitive optimization 

Competitive optimization extends single-agent optimization to multiple agents, each with its own objective function. It has applications ranging from constrained optimization to generative adversarial networks (GANs) and multi-agent reinforcement learning (MARL).

We introduce competitive gradient descent (CGD) as a natural generalization of gradient descent (GD) to games. In GD, each agent updates based on its own gradient, and there is no interaction among the players. In contrast, CGD incorporates interactions among the players by taking the update to be the Nash equilibrium of a local bilinear approximation of their objectives. This amounts to a preconditioner based on the mixed Hessian, which is efficient to implement using conjugate gradient (CG) updates. We find that CGD successfully converges in all instances of games where GD is unstable and exhibits oscillatory behavior. Blog
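To see the mechanics, here is a 1-D two-player sketch in which each update is the Nash equilibrium of the local bilinear approximation, computed by a direct solve (the full method is multidimensional and uses conjugate gradients; this is an illustration, not the released implementation):

```python
import torch

def cgd_step(f, g, x, y, eta=0.5):
    """One CGD step for scalar players x and y minimizing f and g."""
    fx = torch.autograd.grad(f(x, y), x, create_graph=True)[0]
    gy = torch.autograd.grad(g(x, y), y, create_graph=True)[0]
    # Mixed second derivatives D_xy f and D_yx g.
    f_xy = torch.autograd.grad(fx, y, retain_graph=True)[0]
    g_yx = torch.autograd.grad(gy, x, retain_graph=True)[0]
    # Nash equilibrium of the local bilinear game (1-D closed form).
    dx = -(fx - eta * f_xy * gy) / (1 - eta**2 * f_xy * g_yx)
    dy = -(gy - eta * g_yx * fx) / (1 - eta**2 * g_yx * f_xy)
    with torch.no_grad():
        x += eta * dx
        y += eta * dy

# Zero-sum bilinear game f = xy, g = -xy: plain GD spirals away from
# the equilibrium at the origin, while CGD contracts toward it.
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
for _ in range(50):
    cgd_step(lambda a, b: a * b, lambda a, b: -a * b, x, y)
print(x.item(), y.item())  # both close to 0
```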

We further used the game-theoretic intuition behind CGD to study the dynamics of GAN training. There is a delicate balance between generator and discriminator capabilities needed to obtain the best performance. If the discriminator becomes too powerful on the training data, it will reject all samples outside the training set, leading to pathological solutions. We show that this pathology is prevented by the simultaneous training of both agents, which we term implicit competitive regularization (ICR). We observe that CGD strengthens ICR and prevents oscillatory behavior, thus improving GAN training. Blog

A pathological discriminator that overfits to training data.
CGD obtains the best FID score without the need for explicit regularization

Stay tuned for more!

2020 AI Research Highlights: Handling distributional shifts (part 2)

Distributional shifts are common in real-world problems; for example, simulation data is often used for training in data-limited domains. Standard neural networks cannot handle such large domain shifts. They also lack uncertainty quantification: they tend to be overconfident when they make errors.

You can read previous posts for other research highlights: generalizable AI (part 1), optimization (part 3), AI4science (part 4), controllable generation (part 5), learning and control (part 6), learning framework (part 7).

A common approach to unsupervised domain adaptation is self-training. Here, a model trained on the source domain is fine-tuned on target samples using self-generated labels (hence the name). Accurate uncertainty quantification (UQ) is critical: we should select only high-confidence target labels for self-training, otherwise it leads to catastrophic failure.

We propose a distributionally robust learning (DRL) framework for accurate UQ. It is an adversarial risk minimization framework that jointly trains the model with an additional neural network, a density-ratio estimator, obtained through a discriminative network that classifies the source and target domains. The density-ratio estimator prevents the model from being overconfident on target inputs far away from the source domain. We see significantly better calibration and improved domain adaptation on VisDA-17. A minimal sketch of the density-ratio trick follows the figures below. Paper


Saliency maps for our model (DRST) compared to self-training baselines.
Density ratio of source to target. A lower density ratio indicates a lower confidence.
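The density-ratio trick itself is easy to sketch: train a discriminator between the two domains and convert its probabilities to odds, which estimate p_source/p_target. A minimal illustration (not the paper's joint training setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 2))  # e.g. synthetic domain
target = rng.normal(1.5, 1.0, size=(500, 2))  # e.g. real domain

# Discriminator between domains: label source 1, target 0.
X = np.vstack([source, target])
d = np.concatenate([np.ones(500), np.zeros(500)])
clf = LogisticRegression().fit(X, d)

def density_ratio(x):
    """p_source(x) / p_target(x), estimated as classifier odds."""
    p = clf.predict_proba(x)[:, 1]
    return p / (1.0 - p + 1e-8)

# A low ratio flags inputs far from the source domain, where the
# model's confidence should be discounted.
print(density_ratio(target[:5]).round(3))
```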

In a previous project, we proposed another simple measure of sample hardness, termed angular visual hardness (AVH). This score does not need any additional computation or model capacity. We saw improved self-training performance compared to the baseline softmax confidence score. Project

Another key ingredient for improved synthetic-to-real generalization is domain distillation with automated layer-wise learning rates. We propose an Automated Synthetic-to-real Generalization (ASG) framework that formulates the problem as lifelong learning with a model pre-trained on real images (e.g., ImageNet). Since it does not require any training loop beyond synthetic training, it can be conveniently used as a drop-in module in many applications involving synthetic training. Project

Learning without forgetting for synthetic to real generalization
Synthetic to real generalization.

Combining ASG with the density-ratio estimator yields state-of-the-art results on unsupervised domain adaptation. Paper

Fair ML: Handling Imbalanced Datasets

It is common to have imbalanced training datasets in which certain attributes (e.g., darker skin tones) are under-represented. We tackle a general framework that can handle arbitrary distributional shifts in the label proportions between training and test data. Simple approaches to label shift involve class-balanced sampling or importance weighting. However, we show that neither is optimal. We propose a new method that optimally combines the two, balancing the bias introduced by class-balanced sampling against the variance due to importance weighting; see the sketch after the figures below. Paper

Examples of label shifts
Example of label shift in binary classification (stars and dots). Our method is optimal because it combines subsampling with importance weighting for a bias-variance trade-off.
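To make the bias-variance trade-off concrete, here is a toy comparison of pure importance weighting with a hybrid that first subsamples the majority class and then weights the smaller residual gap. This is an illustrative scheme, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary labels: 90/10 in training but 50/50 at test time (label shift).
train_p, test_p = np.array([0.9, 0.1]), np.array([0.5, 0.5])
y = rng.choice(2, size=10_000, p=train_p)
loss = (y == 0).astype(float)       # stand-in per-sample loss

# Pure importance weighting: unbiased, but the rare class gets weight 5,
# inflating the variance of the estimate.
w = test_p[y] / train_p[y]
iw_estimate = np.mean(w * loss)

# Hybrid: subsample class 0 partway toward balance, then importance-
# weight the remaining gap with much smaller weights.
keep = (y == 1) | (rng.random(len(y)) < 0.5)  # drop half of class 0
y_s = y[keep]
train_p_s = np.array([np.mean(y_s == 0), np.mean(y_s == 1)])
w_s = test_p[y_s] / train_p_s[y_s]
hybrid_estimate = np.mean(w_s * loss[keep])

print(iw_estimate, hybrid_estimate)  # both near 0.5, different variance
```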

In the next post, I will be highlighting some important contributions to optimization methods.

2020 AI Research Highlights: Generalizable AI (part 1)

2020 has been an unprecedented year. There has been too much suffering around the world. I salute the brave frontline workers who have risked their lives to tackle this raging pandemic. Amidst all the negativity and toxicity in online social media, it is easy to miss many positive outcomes of 2020. 

Personally, 2020 has been a year of many exciting research breakthroughs for me and my collaborators at NVIDIA and Caltech. We are grateful to have this opportunity to focus on our research. Here are some important highlights. 

In the first part of this blog series, I will focus on generalizable AI algorithms, while the subsequent posts will highlight ML methods, optimization, domain-specific AI algorithms, and DL frameworks. Check out the other posts here: handling distributional shifts (part 2), optimization for deep learning (part 3), AI4science (part 4), controllable generation (part 5), learning and control (part 6), learning frameworks (part 7).

Generalizable AI Highlights:

Concept learning and compositionality: We developed a new benchmark, Bongard-LOGO, for human-level concept learning and reasoning. It captures three core properties of human perception: 1) context-dependent perception, in which the same object has disparate interpretations given different contexts; 2) analogy-making perception, in which some meaningful concepts are traded off for other meaningful concepts; and 3) perception with a few samples but an infinite vocabulary.

Our evaluations show that state-of-the-art deep learning methods perform substantially worse than human subjects, implying that they fail to capture core properties of human cognition. Notably, the neuro-symbolic method has the best performance across all the tests, implying the need for symbolic reasoning in efficient concept learning. Project

Conscious AI: adding feedback to feedforward neural networks. It is hypothesized that the human brain derives consciousness from a top-down feedback mechanism that incorporates a generative model of the world. Inspired by this, we design a principled approach to adding coupled generative recurrent feedback to feedforward neural networks, sketched below. This vastly improves adversarial robustness, even without any explicit adversarial training. Paper. Blog.
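Schematically, the network runs its feedforward pass, a generative decoder tries to reconstruct the input from the hidden code, and the code is refined for a few iterations to better explain the input before classification. The toy sketch below is illustrative only, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyFeedbackNet(nn.Module):
    """Toy feedforward net with generative feedback: a decoder
    reconstructs the input from the hidden code, and the code is
    refined by gradient steps on the reconstruction error."""

    def __init__(self, d_in=20, d_hid=16, n_classes=4, steps=3, lr=0.5):
        super().__init__()
        self.enc = nn.Linear(d_in, d_hid)
        self.dec = nn.Linear(d_hid, d_in)  # generative feedback path
        self.cls = nn.Linear(d_hid, n_classes)
        self.steps, self.lr = steps, lr

    def forward(self, x):
        h = torch.tanh(self.enc(x))
        for _ in range(self.steps):
            recon_err = ((self.dec(h) - x) ** 2).sum()
            grad = torch.autograd.grad(recon_err, h, create_graph=True)[0]
            h = h - self.lr * grad  # refine the code to explain the input
        return self.cls(h)

net = TinyFeedbackNet()
print(net(torch.randn(8, 20)).shape)  # torch.Size([8, 4])
```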

Adaptive learning: Generalizable AI requires the ability to adapt quickly to changing environments. We designed a practical hierarchical reinforcement learning (RL) method for legged robots that can adapt to new environments and tasks not available during training. Training is carried out in the NVIDIA Flex simulation environment, which is physically valid and GPU-accelerated. We adopted a hierarchical RL framework in which a high-level controller learns to choose from a set of primitives in response to changes in the environment, and a low-level controller utilizes an established control method to robustly execute the primitives.

The model transfers easily to a real-life robot without sophisticated randomization or adaptation schemes, thanks to this hierarchical design and a curriculum of tasks during training. The designed controller is up to 85% more energy-efficient and more robust than baseline methods. Blog

Real-world tasks often have a compositional structure comprising a sequence of simpler sub-tasks. We proposed OCEAN, a multi-task RL framework that performs online task inference for compositional tasks, estimating the current task composition from the agent's past experiences via probabilistic inference. We model global and local context variables in a joint latent space, where the global variables represent the mixture of sub-tasks constituting the given task, while the local variables capture transitions between the sub-tasks. Our framework supports flexible latent distributions based on prior knowledge of the task structure and can be trained in an unsupervised manner. Experimental results show that the proposed framework provides more effective task inference with sequential context adaptation, and thus boosts performance on complex, multi-stage tasks. Project

OCEAN for multi-task RL.
OCEAN is able to adapt to new goals in different stages

Causal learning: The ability to identify cause and effect is at the core of human cognition. It allows us to extrapolate to entirely new, unseen scenarios and reason about them. We proposed the first framework that learns causal structural dependencies directly from videos, without any supervision of the ground-truth graph structure. The model combines unsupervised keypoint-based representations with causal graph discovery and graph-based dynamics learning. Experiments demonstrate that our model can correctly identify interactions from a short sequence of images and make long-term future predictions under out-of-distribution interventions and counterfactuals. Project

Our method can predict new cloth configurations thanks to causal learning.

Stay tuned for more!

My heartfelt apology

I want to wholeheartedly apologize to everyone hurt by my words. I want to assure you that I bear no animosity. I want to be part of an inclusive community where all voices are heard. 

I am sorry if my actions/words have ever created a threatening environment. My intention was only to change hearts and minds, and to raise awareness of the struggles that women and minorities face both online and in the real world. I will find better ways to achieve that goal.

I am by no means perfect. I am here to learn from you. I am here to address your concerns. I hope you will join me in my quest to create a healthy and thriving community.

My departure from Twitter

Many of you are very concerned about why my Twitter account is no longer active. I have voluntarily decided to de-activate my account in the interest of my safety and to reduce anxiety for my loved ones. I want to focus on my research and my team where my attention and energy are badly needed.

I want to emphasize that this decision is solely mine. My employers, NVIDIA and Caltech, are fully supportive of me and my mission. They support employees expressing diverse personal views.

I am proud of the work we have done to promote diversity and inclusion. I encourage you to continue doing that. We are all bright creative minds with an endless potential to innovate. We will find new and safer ways to stay connected and build a better future. 

Coping with COVID

These are unprecedented times. So far, I have managed to avoid watching movies that involve pandemics or germs; paranoia and panic were never a source of entertainment for me. But now we are watching one play out in real life on a global scale. This affects us all. There is no escape. Maya Angelou said it best:

“Love is like a virus. It can happen to anybody at any time.”

I have been in self-isolation for more than a week here in California, but I haven't really been alone. My days have been busy with many meetings with collaborators and students. We are all worried, especially about our families, but we also want to focus on how we can help fight this. We are working on a number of projects that will help build better foundations for AI and lead to better accountability and rigor.

It has not been easy over the last few years. While deep learning has shown good promise, the amount of hype around it has been staggering. It has been hard to push back against this, but I have no regrets that I spoke up. Most recently, I spoke out on how half-baked AI apps are being used in sensitive contexts such as law enforcement with no oversight or accountability.

So how is this related to Covid? I read an excellent article titled “You cannot gaslight a virus.” Gaslighting is a gendered term describing toxic and abusive behavior, directed mostly against women, that leads the victim to question her own sanity. Every woman I know has experienced it to some degree. The last few years have seen an unprecedented rise in populism, misinformation, and pushback against rigor and expertise. We see it in politics, and I have lived it in the AI field. This will now all have to change. I read an honest article by the founder of Starsky Robotics on why we won't be having self-driving trucks on the road anytime soon: “..supervised machine learning doesn't live up to the hype. It isn't actual artificial intelligence..” I agree with this. No longer can we rely on empty promises and hype. We have to deliver, we have to demonstrate, we have to carefully experiment, we have to test and repeat. This is a good thing.

The last few years have been challenging for women and minorities. I have faced struggles when I spoke up against sexual harassment and #meToo in AI. I have publicly named some of my harassers and fought to ensure that they cannot claim more victims. This has taken a huge toll. I have made enemies, some who are senior and powerful. I have had anonymous threats made against me. Many men (they are almost always men) have brushed aside our concerns, our pains, our struggles.

There is a severe lack of empathy. I see this playing out now amidst the Covid pandemic. At a time when there is a need for unity and scientifically grounded planning, our so-called President #notmypresident talks about the “Chinese virus,” putting my Asian friends and students at risk of racist attacks. I see trolls attack the brave woman who shared her experiences in the ICU. It is well documented that trolls disproportionately target women, but this is a new low: targeting someone who is fighting in the ICU and urging others to be careful.

It was no surprise to me to see the infamous Steven Pinker propagate misinformation and downplay the dangers of Covid. This was my response on Twitter:

[Screenshot of my Twitter response, March 21, 2020]

My previous online interactions with Pinker have been documented here. I lamented the severe lack of empathy and the tone-deaf nature of his views on rape and sexual harassment. I see the same lack of empathy in his spreading of misinformation about Covid, which puts people at risk. For too long, we have celebrated narcissists in this culture without asking for any accountability, blindly following the path they prescribe. Not any more!

Let us hope that we usher in a new era as we unite together to fight this virus. We need new leaders with empathy and emotional intelligence. We need to spread love and hope. Stay safe, stay healthy and practice social distancing. I wish you all the best!