
Neural Graphics Model Gym


Note

Please be aware that this is a beta release. Beta means that the product may not be functionally or feature complete. At this early phase the product is not yet expected to fully meet the quality, testing or performance requirements of a full release. These aspects will evolve and improve over time, up to and beyond the full release. We welcome your feedback.

Table of contents

  1. Introduction
  2. Quick Start
  3. Monitoring and profiling
  4. Logging
  5. Testing
  6. Adding custom models, datasets, and use cases
  7. Generating new training data
  8. Troubleshooting
  9. Code contributions
  10. Security
  11. License
  12. Trademarks and copyrights

Introduction

Neural Graphics Model Gym is a Python® toolkit for developing real-time Neural Graphics machine learning models.

With Neural Graphics Model Gym you can train, fine-tune and evaluate your Neural Graphics models. Neural Graphics Model Gym also enables you to quantize your model before exporting it to a format compatible with ML extensions for Vulkan® - allowing you to run it on the latest mobile devices.

Currently, we include the following Neural Graphics use cases:

  • Neural Super Sampling (NSS)
    • NSS enables high-fidelity, real-time graphics in game engines. By feeding low-resolution frames, along with spatial and motion information, into a neural network, it reconstructs high-resolution frames without sacrificing quality.

Quick Start

Prerequisites

To build and run Neural Graphics Model Gym, the following are required:

  • Ubuntu® >= 22.04
    • Neural Graphics Model Gym has been tested on 22.04 LTS and 24.04 LTS, but should work on other Linux® distributions
  • 3.10 <= Python < 3.13
  • Python development package (e.g. python3-dev)
  • NVIDIA® CUDA® capable GPU
  • CUDA Toolkit v12.8 or later
  • Git LFS

Setup

  1. Clone the repository:
git clone https://github.com/arm/neural-graphics-model-gym.git
  2. Install the project:
pip install .

For more details including how to install in development mode and how to run using Docker see setup.md.

Usage

Neural Graphics Model Gym can be used either as a command line tool or as a package which may be imported into a Python application.

Basic usage is shown here. More detailed commands can be found in usage.md.

Command line usage

Generate a configuration file (config.json):

ng-model-gym init

This file contains configuration options for the different usage modes (training, evaluation, and exporting) and paths to local datasets. Some entries have placeholder values (e.g. "<...>"). Make sure to replace those with your own settings.

For Windows users:

When editing config.json, Windows paths must either use forward slashes (path/to/location) or escaped backslashes (path\\to\\location). Single backslashes (e.g. path\to\location) are invalid JSON and will cause a JSONDecodeError.
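This requirement comes from the JSON grammar itself, where a backslash begins an escape sequence. The behaviour can be checked with Python's standard json module:

```python
import json

# Forward slashes and escaped backslashes are both valid JSON strings.
json.loads('{"path": "data/captures"}')      # OK
json.loads('{"path": "data\\\\captures"}')   # OK: decodes to data\captures

# A single backslash starts an escape sequence, so the raw text
# {"path": "data\captures"} contains the invalid escape "\c".
try:
    json.loads('{"path": "data\\captures"}')
except json.JSONDecodeError as err:
    print(f"invalid JSON: {err}")
```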

Use your custom configuration when invoking CLI commands by providing its path with the --config-path or -c flag as shown below:

# Perform model training and evaluation
ng-model-gym --config-path=<path/to/config.json> train
# Evaluate a previously trained model
ng-model-gym -c <path/to/config/file> evaluate --model-path=<path/to/model.pt> --model-type=<fp32|qat_int8>
# Perform quantization aware training (QAT) and evaluation
ng-model-gym -c <path/to/config/file> qat
# Export a trained model to VGF file
ng-model-gym -c <path/to/config/file> export --model-path=<path/to/model.pt> --export-type=<fp32|qat_int8|ptq_int8>

The --config-path (or -c) flag is required when running the train, qat, evaluate, or export commands. These commands will fail if a valid config file path is not provided.

If you would like to view and download the available pre-trained models, use the following commands:

# List downloadable models hosted on the configured repositories
ng-model-gym list-models

# Download a specific model to a directory of your choice
# ng-model-gym download <repo_name>/<file_name> <destination>
ng-model-gym download neural-super-sampling/nss_v0.1.0_fp32.pt ./myfolder

The remote string identifier (e.g. @neural-super-sampling/nss_v0.1.0_fp32.pt) can also be used directly to automatically fetch and use models when running certain CLI commands. See the commands in usage.md for more details.
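If you are scripting around these commands, the identifier format shown above (<repo_name>/<file_name>) is straightforward to split. The helper below is a hypothetical illustration for such scripts, not part of the ng-model-gym API:

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a '<repo_name>/<file_name>' model identifier into its parts,
    tolerating an optional leading '@' as used in the remote-string form."""
    repo, _, filename = ref.lstrip("@").partition("/")
    if not repo or not filename:
        raise ValueError(f"expected '<repo_name>/<file_name>', got {ref!r}")
    return repo, filename

print(parse_model_ref("neural-super-sampling/nss_v0.1.0_fp32.pt"))
# → ('neural-super-sampling', 'nss_v0.1.0_fp32.pt')
```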

The complete list of CLI commands can be seen by running ng-model-gym --help and more detailed information about the commands can be found in usage.md.

Usage as a Python package

The second way to use Neural Graphics Model Gym is to import it as a Python package.

The following snippet shows how to use the package to generate a config, perform training, evaluation and exporting the model.

import ng_model_gym as ngmg
from pathlib import Path

# Generate config file in specified directory using the API or CLI
# Note: The config file must be filled in before use
ngmg.generate_config_file("/save/dir")

# Create a Config object using path to a configuration file
# and extract parameters from it.
config = ngmg.load_config_file(Path("/path/to/config/file"))

# Enable logging for ng_model_gym
ngmg.logging_config(config)

# Do training and evaluation.
trained_model_path = ngmg.do_training(config, ngmg.TrainEvalMode.FP32)
ngmg.do_evaluate(config, trained_model_path, ngmg.TrainEvalMode.FP32)

# Export the trained fp32 model to a VGF file.
ngmg.do_export(config, trained_model_path, export_type=ngmg.ExportType.FP32)

Jupyter® notebook tutorials on how to use the package can be found in the neural-graphics-model-gym-examples repository. They cover:

  • Training
  • Quantization-aware training and exporting
  • Evaluation
  • Fine-tuning
  • Adding a custom model

Monitoring and profiling

Tools have been set up to track models during training and to capture performance profiles; their usage is demonstrated in monitoring-and-profiling.md.

Logging

By default, logging is enabled and set to INFO mode, which will print helpful information during execution. All logs will be written to an output.log file located within the output directory specified in the configuration file. The logging mode is customizable by using flags with the ng-model-gym CLI command. See the options below for examples.

--log-level=quiet can be added to silence all logs except errors.

ng-model-gym --log-level=quiet -c <path/to/config/file> train

--log-level=debug can be added to print even more information during the process.

ng-model-gym --log-level=debug -c <path/to/config/file> train

Logging can also be specified when importing the package as follows.

import ng_model_gym as ngmg
from pathlib import Path

# Create a Config object using path to a configuration file
parameters = ngmg.load_config_file(Path("/path/to/config"))

# Enable logging for ng_model_gym
ngmg.logging_config(parameters)
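The quiet and debug flags described above correspond naturally to Python's standard logging levels (ERROR and DEBUG, with INFO as the default). The standalone sketch below illustrates that level-filtering behaviour using only the standard library; it is not the package's actual logging setup:

```python
import io
import logging

def make_logger(level: int) -> tuple[logging.Logger, io.StringIO]:
    """Create a logger that writes to an in-memory buffer at the given level."""
    buffer = io.StringIO()
    logger = logging.getLogger(f"demo-{level}")
    logger.setLevel(level)
    logger.addHandler(logging.StreamHandler(buffer))
    return logger, buffer

# INFO (the default): info messages are shown, debug messages are filtered out.
log, buf = make_logger(logging.INFO)
log.debug("hidden at info level")
log.info("starting training")
print(buf.getvalue())  # only "starting training"

# ERROR (quiet): everything below error severity is silenced.
log, buf = make_logger(logging.ERROR)
log.info("hidden when quiet")
log.error("out of memory")
print(buf.getvalue())  # only "out of memory"
```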

Testing

A collection of unit and integration tests are provided to ensure the functionality of Neural Graphics Model Gym.

Testing can be run using Hatch commands. First install Hatch and create a dev environment. This will install all the dependencies for Neural Graphics Model Gym, plus the additional dependencies required for testing. The list of testing commands can be found here.

Adding custom models, datasets, and use cases

Neural Graphics Model Gym supports adding custom models and datasets, enabling their use across all workflows. Detailed documentation on how to implement this can be found in custom-models-and-datasets.md.

We also support defining custom use cases to group together related models, datasets, configurations, and any additional required code. See adding custom use cases for the implementation guide.

Generating new training data

To train the Neural Super Sampling model, you will first need to capture training data from your game engine in the format expected by the model. Information regarding the types of data to capture and how to convert your captured frames can be found here.

Troubleshooting

A list of common known issues and their workarounds can be found in troubleshooting.md.

Code contributions

The Neural Graphics Model Gym project welcomes contributions. For more details on contributing to the project, please see CONTRIBUTING.md.

Security

Arm takes security issues seriously: please see SECURITY.md for more details.

After creating an editable installation using Hatch, you can run the security vulnerabilities checker with the following command:

hatch run static-analysis:bandit-check

License

Neural Graphics Model Gym is licensed under Apache License 2.0.

Trademarks and copyrights

  • Linux® is the registered trademark of Linus Torvalds in the U.S. and elsewhere.
  • Python® is a registered trademark of the Python Software Foundation.
  • Ubuntu® is a registered trademark of Canonical.
  • Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.
  • NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries.
  • “Jupyter” and the Jupyter logos are trademarks or registered trademarks of LF Charities.
  • Vulkan is a registered trademark and the Vulkan SC logo is a trademark of the Khronos Group Inc.
  • Microsoft and Windows are trademarks of the Microsoft group of companies.
