MXNet How To¶
The how-tos provide a range of information, from installation and basic concepts to general guidance and procedures for specific tasks. Some include demos complete with pre-trained models, instructions, and commands.
Setup and Installation¶
You can run MXNet on the Amazon Linux, Ubuntu/Debian, OS X, and Windows operating systems. MXNet can also run in Docker and in the cloud, such as on AWS. MXNet currently supports the Python, R, Julia, and Scala languages.
If you are running Python or R on Amazon Linux or Ubuntu, you can use the bash scripts in the Git repository to quickly install the MXNet libraries and all of their dependencies.
Using Pre-trained Models¶
The MXNet Model Zoo is a growing collection of pre-trained models for a variety of tasks. In particular, the popular task of using a convolutional network to identify what is in an image is covered in detail in the tutorial on using pre-trained image classification models, which provides step-by-step instructions for loading, customizing, and predicting image classes with a provided pre-trained image classification model.
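At a high level, the predict step in that tutorial reduces to scoring the input against each class and returning the best-scoring label. The following plain-Python sketch illustrates only that flow; the tiny linear "model" and the label list are illustrative stand-ins, not a real MXNet checkpoint:

```python
# Conceptual sketch of predicting with a pre-trained classifier.
# The "model" here is a hand-made linear classifier standing in for a
# loaded network; the class labels are illustrative only.

def predict(weights, labels, features):
    """Score each class as a dot product and return the best label."""
    scores = {
        label: sum(w * x for w, x in zip(row, features))
        for label, row in zip(labels, weights)
    }
    return max(scores, key=scores.get)

labels = ["cat", "dog", "ship"]                  # stand-in class names
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # stand-in per-class weights

print(predict(weights, labels, [0.9, 0.1]))      # scores favour "cat"
```

A real pre-trained network replaces the dot products with a full forward pass, but the load-score-argmax shape of the code is the same.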
Use MXNet to Perform Specific Tasks¶
- How to Fine-tune with Pre-trained Models Provides instructions for tuning a pre-trained neural network for use with a new, smaller data set. It describes preparing the data, training a new final layer, and evaluating the results. For comparison, it uses two pre-trained models.
- How to visualize Neural Networks as computation graphs Provides commands and instructions for visualizing neural networks in a Jupyter notebook.
- How to train with multiple CPU/GPUs with data parallelism Provides the MXNet defaults for using multiple GPUs. It also provides instructions and commands for customizing GPU data-parallelism settings (such as the number of GPUs and individual GPU workload), training a model with multiple GPUs, setting GPU communication options, synchronizing directories between GPUs, choosing a network interface, and debugging connections.
- How to train with multiple GPUs in model parallelism - train LSTM Discusses the basic practices of model parallelism, such as using each GPU for a layer of a multi-layer model, and how to balance and organize model layers among multiple GPUs to reduce data transmission and bottlenecks.
- How to run MXNet on smart or mobile devices Provides general guidance on porting software to other systems and languages. It also describes the trade-offs that you need to consider when running a model on a mobile device. Provides basic pre-trained image recognition models for use with a mobile device.
- How to set up MXNet on the AWS Cloud using Amazon EC2 and Amazon S3 Provides step-by-step instructions on using MXNet on AWS. It describes the prerequisites for using Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Compute Cloud (Amazon EC2) and the libraries that MXNet depends on in this environment. Instructions explain how to set up, build, install, test, and run MXNet, and how to run MXNet on multiple GPUs in the Cloud.
- How to use MXNet on variable input length/size (bucketing) Explains the basic concepts and reasoning behind using bucketing for models that have different architectures, gives an example of how to modify models to use buckets, and provides instructions on how to create and iterate over buckets.
- How to improve MXNet performance Explains how to improve MXNet performance by using the recommended data format, storage locations, batch sizes, libraries, and parameters, and more.
- How to use NNPACK to improve the CPU performance of MXNet Explains how to improve the CPU performance of MXNet by using NNPACK. Currently, NNPACK supports the convolution, max-pooling, and fully connected operators.
- How to use MXNet within a MATLAB environment Provides the commands to load a model and data, get predictions, and perform feature extraction in MATLAB using the MXNet library. It notes an implementation difference between the two that can cause issues, and provides some basic troubleshooting.
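The fine-tuning recipe above (keep the pre-trained layers frozen, train only a new final layer on the small data set) can be sketched in plain Python. The "extractor" below is a stand-in for a real pre-trained network:

```python
# Conceptual fine-tuning sketch: the pre-trained feature extractor is
# frozen, and only a new final (linear) layer is trained on the new,
# smaller data set.

def extractor(x):
    # Frozen pre-trained features: fixed, never updated during tuning.
    return [x, x * x]

def train_final_layer(data, lr=0.1, epochs=200):
    w = [0.0, 0.0]  # only the new layer's weights are learned
    for _ in range(epochs):
        for x, y in data:
            feats = extractor(x)
            pred = sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - y
            # Gradient step on the final layer only.
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# Small data set where y = 2*x (learnable from the first feature alone).
data = [(0.5, 1.0), (1.0, 2.0), (-1.0, -2.0)]
w = train_final_layer(data)
print(w)  # w approaches [2.0, 0.0]
```

Because only the final layer is updated, far fewer examples are needed than when training the whole network from scratch, which is the point of the tutorial.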
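The visualization how-to renders the network's computation graph; under the hood, tools of this kind (including `mx.viz.plot_network`) build a Graphviz description of the graph. A minimal sketch of that idea, with a hand-written layer list standing in for a real symbol graph:

```python
# Conceptual sketch of what a network visualizer emits: a Graphviz DOT
# description of the computation graph, one edge per data dependency.

def to_dot(edges):
    """Render (src, dst) layer pairs as a DOT digraph string."""
    lines = ["digraph net {"]
    lines += [f'  "{src}" -> "{dst}";' for src, dst in edges]
    lines.append("}")
    return "\n".join(lines)

edges = [("data", "fc1"), ("fc1", "relu1"), ("relu1", "fc2"), ("fc2", "softmax")]
dot = to_dot(edges)
print(dot)
```

Rendering the resulting DOT text (for example with Graphviz) produces the boxes-and-arrows diagram shown inline in a Jupyter notebook.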
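The core of data parallelism, as in the multiple-GPU how-to above, is that every device holds a full copy of the model, works on its own shard of each batch, and the per-shard gradients are combined before one synchronized weight update. A plain-Python sketch with a toy one-parameter model:

```python
# Conceptual sketch of data parallelism: each "device" holds a full copy
# of the model, processes its own shard of the batch, and the resulting
# gradients are averaged before the shared weights are updated.

def shard(batch, num_devices):
    """Split a batch into near-equal shards, one per device."""
    k, r = divmod(len(batch), num_devices)
    shards, start = [], 0
    for i in range(num_devices):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def gradient(w, shard_data):
    # Gradient of mean squared error for the toy model pred = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard_data) / len(shard_data)

def parallel_step(w, batch, num_devices, lr=0.1):
    shards = shard(batch, num_devices)
    grads = [gradient(w, s) for s in shards]   # in parallel on real devices
    avg = sum(g * len(s) for g, s in zip(grads, shards)) / len(batch)
    return w - lr * avg                        # one synchronized update

batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(parallel_step(1.0, batch, num_devices=2))
```

Note that the size-weighted average makes the update identical to single-device training on the whole batch; the devices only change where the work happens, not the result.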
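Model parallelism, by contrast, places different layers of one model on different devices, so activations cross a device boundary between consecutive layers. This stdlib sketch (device names and the `+ 1` "layers" are stand-ins) counts those transfers to show why layer placement matters:

```python
# Conceptual sketch of model parallelism: the layers of one model are
# placed on different devices, and activations cross the device boundary
# between consecutive layers that live on different devices.

placement = {"embed": "gpu0", "lstm1": "gpu0", "lstm2": "gpu1", "out": "gpu1"}
layers = ["embed", "lstm1", "lstm2", "out"]
transfers = []

def run_layer(name, x):
    # Stand-in computation; a real layer would run on placement[name].
    return x + 1

def forward(x):
    device = placement[layers[0]]
    for name in layers:
        if placement[name] != device:          # activation crosses devices
            transfers.append((device, placement[name]))
            device = placement[name]
        x = run_layer(name, x)
    return x

result = forward(0)
print(result)     # 4 layers, each adds 1
print(transfers)  # one transfer, between lstm1 and lstm2
```

Grouping consecutive layers on the same device keeps the transfer count (and the bottleneck risk the tutorial discusses) to a minimum; a worst-case alternating placement would transfer after every layer.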
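The bucketing idea above can be sketched without any MXNet code: variable-length sequences are grouped into a small set of fixed lengths and padded up to their bucket size, so each bucket can reuse one unrolled network of that length. The bucket sizes and padding value here are arbitrary examples:

```python
# Conceptual bucketing sketch: variable-length sequences are assigned to
# the smallest bucket that fits them and padded up to the bucket length,
# so one unrolled network per bucket handles all sequences in it.

BUCKETS = [10, 20, 30]  # example bucket lengths
PAD = 0                 # example padding value

def bucket_for(seq):
    """Smallest bucket that fits the sequence (None if it fits no bucket)."""
    return next((b for b in BUCKETS if len(seq) <= b), None)

def pad_to_bucket(seq):
    b = bucket_for(seq)
    return seq + [PAD] * (b - len(seq))

for s in [[1] * 7, [2] * 13, [3] * 20]:
    print(bucket_for(s), len(pad_to_bucket(s)))
```

With a handful of buckets instead of one network per observed length, the trade-off is a little padding waste in exchange for far fewer unrolled graphs.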
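One recommendation from the performance how-to, choosing a sensible batch size, comes down to replacing many small per-sample operations with one large vectorized one. A NumPy sketch of that equivalence (the shapes are arbitrary examples):

```python
import numpy as np

# Conceptual sketch of batching for performance: one large matrix-matrix
# product does the same work as many small matrix-vector products, with
# far fewer kernel launches and better hardware utilization.

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3))        # example weight matrix
batch = rng.standard_normal((64, 4))   # example batch of 64 samples

# Per-sample: many small matrix-vector products.
per_sample = np.stack([x @ w for x in batch])

# Batched: one large matrix-matrix product over the whole batch.
batched = batch @ w

print(np.allclose(per_sample, batched))  # same result either way
```

The numerical result is identical; only the cost per sample changes, which is why the guide treats batch size as a performance knob rather than a correctness one.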
Develop and Hack MXNet¶
- Create new operators Provides an example of a custom layer along with the steps required to define each part of the layer. It includes a breakdown of the parameters involved, calls out possible optimizations, and provides a link to the complete code for the example.
- Use Torch from MXNet Describes how to install and build MXNet for use with a Torch frontend. It also provides a list of supported Torch mathematical functions and neural network modules, and instructions on how to use them.
- Environment variables Provides a list of default MXNet environment variables, along with a short description of what each controls.
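A custom operator, as in the create-new-operators how-to above, is defined by its forward computation and the matching backward (gradient) computation; MXNet's real custom-op API additionally asks for shape inference and parameter declarations. A minimal plain-Python sketch using ReLU as the example layer:

```python
# Conceptual sketch of a custom operator: a new layer is defined by a
# forward pass plus the matching backward (gradient) pass. A real MXNet
# custom op also declares shapes and parameters.

class ReluOp:
    """A minimal ReLU operator with forward and backward passes."""

    def forward(self, x):
        self.x = x                      # keep input for the backward pass
        return [max(v, 0.0) for v in x]

    def backward(self, grad_out):
        # Gradient passes through only where the input was positive.
        return [g if v > 0 else 0.0 for g, v in zip(grad_out, self.x)]

op = ReluOp()
print(op.forward([-1.0, 0.5, 2.0]))    # [0.0, 0.5, 2.0]
print(op.backward([1.0, 1.0, 1.0]))    # [0.0, 1.0, 1.0]
```

Saving whatever the backward pass needs during forward (here, the input) is the main bookkeeping step the tutorial's full example walks through.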