Repositories list
    • FIL backend for the Triton Inference Server
      Jupyter Notebook · Updated Sep 12, 2025
    • server
      The Triton Inference Server provides an optimized cloud and edge inferencing solution.
      Python · Updated Sep 12, 2025
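The server loads models from a model repository on local disk or cloud storage. As a minimal sketch of what a repository entry looks like (the model name, backend, and tensor names below are hypothetical placeholders, not taken from this listing), a versioned model file is paired with a `config.pbtxt`:

```
model_repository/
└── example_model/
    ├── config.pbtxt
    └── 1/
        └── model.onnx

# config.pbtxt
name: "example_model"
backend: "onnxruntime"
max_batch_size: 8
input [
  { name: "INPUT0", data_type: TYPE_FP32, dims: [ 4 ] }
]
output [
  { name: "OUTPUT0", data_type: TYPE_FP32, dims: [ 4 ] }
]
```

The `backend` field selects one of the backend repositories listed below (ONNX Runtime, TensorRT, PyTorch, Python, and so on).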
    • The Triton TensorRT-LLM Backend
      Shell · Updated Sep 11, 2025
    • The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
      C++ · Updated Sep 10, 2025
    • tutorials
      This repository contains tutorials and examples for the Triton Inference Server.
      Python · Updated Sep 9, 2025
    • Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.
      Python · Updated Sep 9, 2025
    • Third-party source packages that are modified for use in Triton.
      C · Updated Sep 9, 2025
    • The Triton backend for TensorRT.
      C++ · Updated Sep 9, 2025
    • Simple Triton backend used for testing.
      C++ · Updated Sep 9, 2025
    • An example Triton backend that demonstrates sending zero, one, or multiple responses for each request.
      C++ · Updated Sep 9, 2025
    • TRITONCACHE implementation of a Redis cache.
      C++ · Updated Sep 9, 2025
    • The Triton backend for PyTorch TorchScript models.
      C++ · Updated Sep 9, 2025
    • Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
      C++ · Updated Sep 9, 2025
    • OpenVINO backend for Triton.
      C++ · Updated Sep 9, 2025
    • The Triton backend for the ONNX Runtime.
      C++ · Updated Sep 9, 2025
    • Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by the Triton Inference Server.
      Python · Updated Sep 9, 2025
    • Implementation of a local in-memory cache for the Triton Inference Server's TRITONCACHE API.
      C++ · Updated Sep 9, 2025
    • Example Triton backend that demonstrates most of the Triton Backend API.
      C++ · Updated Sep 9, 2025
    • core
      The core library and APIs implementing the Triton Inference Server.
      C++ · Updated Sep 9, 2025
    • common
      Common source, scripts, and utilities shared across all Triton repositories.
      C++ · Updated Sep 9, 2025
    • client
      Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.
      Python · Updated Sep 9, 2025
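The client libraries above wrap Triton's HTTP/gRPC inference protocol (the KServe v2 protocol). As a minimal sketch of what those libraries send on the wire, the request body for `POST /v2/models/<model>/infer` can be built with only the standard library; the model name `example_model` and tensor names `INPUT0`/`OUTPUT0` here are hypothetical placeholders:

```python
import json

def build_infer_request(input_name, shape, datatype, data, output_name):
    """Construct a KServe v2 inference request body (JSON-serializable dict)."""
    return {
        "inputs": [
            # Each input carries its name, shape, datatype, and flattened data.
            {"name": input_name, "shape": shape, "datatype": datatype, "data": data}
        ],
        "outputs": [{"name": output_name}],
    }

body = build_infer_request("INPUT0", [1, 4], "FP32", [1.0, 2.0, 3.0, 4.0], "OUTPUT0")
payload = json.dumps(body)
# A real client would POST `payload` to
# http://<host>:8000/v2/models/example_model/infer
print(payload)
```

In practice the `tritonclient` packages from this repository handle serialization, binary tensor encoding, and transport details instead of hand-built JSON.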
    • The Triton repository agent that verifies model checksums.
      C++ · Updated Sep 9, 2025
    • backend
      Common source, scripts, and utilities for creating Triton backends.
      C++ · Updated Sep 9, 2025
    • pytriton
      PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.
      Python · Updated Aug 13, 2025
    • The Triton backend for TensorFlow.
      C++ · Updated Jun 18, 2025
    • Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.
      Python · Updated Apr 22, 2025
    • .github
      Community health files for NVIDIA Triton.
      Updated Mar 27, 2025