2020 has been an exciting year for deep learning frameworks and AI stacks. We have seen more consolidation of frameworks into domain-specific platforms such as NVIDIA Omniverse and NVIDIA Clara, and better abstractions in the AI stack that help democratize AI and enable rapid prototyping and testing, such as PyTorch Lightning.
Below are some frameworks that my team at NVIDIA has been involved in building.
This is part of the blog series on 2020 research highlights. You can read other posts for research highlights on generalizable AI (part 1), handling distributional shifts (part 2), optimization for deep learning (part 3), AI4science (part 4), controllable generation (part 5), learning and control (part 6).
TensorLy-Torch is a PyTorch-only library that builds on top of TensorLy and provides out-of-the-box tensor layers to replace matrix layers in any neural network. Link
- Tensorize all layers of a neural network: this includes factorized convolutions, fully-connected layers, and more!
- Initialization: initializing tensor decompositions can be tricky, since the default parameters for matrix layers are not optimal. We provide good defaults for initialization in our tltorch.init module. Alternatively, you can initialize a layer to fit a pretrained matrix layer.
- Tensor hooks: you can easily augment your architectures with our built-in hooks. Robustify your network with tensor dropout, or automatically select the rank end-to-end with L1 regularization.
- Methods and model zoo: we are always adding more methods and models to make it easy to compare the performance of various deep tensor-based methods!
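To illustrate the core idea behind tensorized layers, here is a minimal sketch in plain PyTorch (not the actual TensorLy-Torch API): a dense weight matrix is replaced by a low-rank factorization, cutting the parameter count while keeping the same input/output interface. The class name `LowRankLinear` and the initialization scale are illustrative choices, not part of the library.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """A dense weight (out x in) stored as two factors U (out x r) and V (r x in).

    Hypothetical sketch of a factorized layer; TensorLy-Torch provides
    richer decompositions (CP, Tucker, tensor-train) with proper defaults.
    """
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Contract with V first, then U: cost O(r * (in + out)) per sample
        # instead of O(in * out) for the dense layer.
        return (x @ self.V.t()) @ self.U.t() + self.bias

dense = nn.Linear(512, 512)                   # 512*512 + 512 = 262,656 params
factored = LowRankLinear(512, 512, rank=16)   # 2*512*16 + 512 = 16,896 params
y = factored(torch.randn(4, 512))             # output shape: (4, 512)
```

A drop-in replacement like this is also where good initialization matters: naive random factors can make the effective weight far smaller or larger in scale than the dense layer it replaces.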
Minkowski Engine is an auto-differentiation library for sparse tensors. It supports all standard neural network layers, such as convolution, pooling, and broadcasting operations, on sparse tensors, and powers popular architectures for 3D and higher-dimensional vision problems such as semantic segmentation, reconstruction, and detection. Link
- Unlimited high-dimensional sparse tensor support
- All standard neural network layers (Convolution, Pooling, Broadcast, etc.)
- Dynamic computation graph
- Custom kernel shapes
- Multi-GPU training
- Multi-threaded kernel map
- Multi-threaded compilation
- Highly-optimized GPU kernels
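To make the sparse-tensor idea concrete, here is a naive sketch in plain PyTorch (not Minkowski Engine's API): a sparse tensor stores only occupied coordinates and their features, and a sparse convolution with a custom kernel shape gathers neighbours at fixed offsets. The coordinate-hashing loop below is what the library's optimized, multi-threaded kernel maps replace.

```python
import torch

# A sparse tensor in COO form: only occupied voxels store features.
coords = torch.tensor([[0, 0, 0], [1, 0, 0], [5, 2, 3]])  # N x 3 integer sites
feats = torch.randn(3, 4)                                  # N x C_in features

# A custom (non-cubic) kernel shape: three offsets along one axis,
# each with its own C_in x C_out weight matrix.
offsets = torch.tensor([[0, 0, 0], [1, 0, 0], [-1, 0, 0]])
weights = torch.randn(len(offsets), 4, 8)                  # K x C_in x C_out

# Naive "kernel map": hash coordinates, then for each occupied output site,
# look up which input sites fall under each kernel offset.
index = {tuple(c.tolist()): i for i, c in enumerate(coords)}
out = torch.zeros(len(coords), 8)
for k, off in enumerate(offsets):
    for i, c in enumerate(coords):
        j = index.get(tuple((c + off).tolist()))
        if j is not None:                  # neighbour exists -> contributes
            out[i] += feats[j] @ weights[k]
```

Because only occupied sites are stored and visited, the cost scales with the number of non-zero voxels rather than the full (possibly enormous) dense grid, which is what makes high-dimensional sparse data tractable.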
End-to-end Reinforcement Learning on GPUs with NVIDIA Isaac Gym
We are excited about the preview release of Isaac Gym – NVIDIA’s physics simulation environment for reinforcement learning research that dramatically speeds up training by running simulation and learning end-to-end on the GPU. The environments are physically valid, allowing for efficient sim-to-real transfer, and include a robotic arm, legged robots, deformable objects, and humanoids. Blog
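The key idea behind end-to-end GPU training – keeping both the simulator state and the policy on the device so rollouts never round-trip through the CPU – can be sketched with a toy batched environment in plain PyTorch. This is an illustrative sketch only, not Isaac Gym's API; the point-mass dynamics and reward are made up for the example.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
num_envs = 4096

# Thousands of environments as one batched state tensor: stepping all of
# them is a single vectorized op, entirely on the device.
state = torch.zeros(num_envs, 2, device=device)   # toy 1-D point mass: [pos, vel]
policy = torch.nn.Linear(2, 1).to(device)         # tiny stand-in policy network

def step(state, action, dt=0.05):
    pos, vel = state[:, 0], state[:, 1]
    vel = vel + action.squeeze(-1) * dt           # apply force as acceleration
    pos = pos + vel * dt
    reward = -pos.abs()                           # reward: stay near the origin
    return torch.stack([pos, vel], dim=1), reward

with torch.no_grad():
    for _ in range(100):                          # one rollout, fully on device
        action = policy(state)
        state, reward = step(state, action)
```

With the whole loop on the GPU, the usual bottleneck of copying observations and actions between a CPU simulator and a GPU policy disappears, which is where the dramatic speedups come from.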
Stay tuned for more in 2021! Here’s looking forward to exciting developments in AI in the new year.