Release Notes
=============

Unreleased: v0.2.0
------------------

This release of Isaac Lab-Arena focuses on adding the essential features needed to create and
execute large-scale task libraries with complex long-horizon tasks.

.. note::

   Changes on ``main`` contain an in-development version of v0.2.0.
   As of March 16th 2026 (GTC San Jose 2026), ``main`` contains most of the features for the v0.2.0 release;
   however, it is based on Isaac Lab 2.3 (rather than Isaac Lab 3.0) and has not been SQA tested.

**Key Features**

- **LEGO-like Composable Environments** — Mix and match scenes, embodiments, and tasks independently.
- **On-the-fly Assembly** — Environments are built at runtime; no duplicate config files to maintain.
- **New Sequential Task Chaining** — Chain atomic skills (e.g. Pick + Walk + Place + …) to create complex long-horizon tasks.
- **New Natural Language Object Placement** — Define scene layouts using semantic relationships
  like "on" or "next to", instead of manually specified coordinates.
- **Integrated Evaluation** — Extensible metrics and evaluation pipelines for policy benchmarking.
- **New Large-scale Parallel Evaluations with Heterogeneous Objects** — Evaluate a policy on multiple parallel
  environments, each with different objects, to maximize evaluation throughput.
- **New RL Workflow Support and Seamless Interoperation with Isaac Lab** — Plug Isaac Lab-Arena environments
  into Isaac Lab workflows for reinforcement learning and data generation for imitation learning.

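To make the sequential chaining idea concrete, here is a minimal sketch in plain Python. All names in it (``Skill``, ``SequentialTask``, the success predicates) are hypothetical and for illustration only; they are not the Isaac Lab-Arena API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch, not the Isaac Lab-Arena API: a long-horizon task is an
# ordered chain of atomic skills, each with its own success predicate over the
# current environment state. The task advances one stage at a time.

@dataclass
class Skill:
    name: str
    is_done: Callable[[dict], bool]  # success check over a state dict

@dataclass
class SequentialTask:
    skills: list[Skill]
    stage: int = field(default=0)

    def step(self, state: dict) -> str:
        """Advance to the next skill once the current one succeeds."""
        if self.stage < len(self.skills) and self.skills[self.stage].is_done(state):
            self.stage += 1
        if self.stage == len(self.skills):
            return "task complete"
        return f"executing: {self.skills[self.stage].name}"

# Chain Pick -> Place, in the spirit of the "Pick + Walk + Place" example above.
task = SequentialTask([
    Skill("pick", lambda s: s["grasped"]),
    Skill("place", lambda s: s["placed"]),
])
print(task.step({"grasped": False, "placed": False}))  # executing: pick
print(task.step({"grasped": True, "placed": False}))   # executing: place
print(task.step({"grasped": True, "placed": True}))    # task complete
```

Because each stage is an independent unit with its own success criterion, the same composition pattern lets scenes, embodiments, and task stages vary independently of one another.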

**Ecosystem**

NVIDIA and partners are building industrial and academic benchmarks on the unified Isaac Lab-Arena core,
so you can reuse LEGO blocks (tasks, scenes, metrics, and datasets) for your custom evaluations.

- `Lightwheel RoboFinals <https://lightwheel.ai/robofinals>`_ — High-fidelity industrial benchmarks
- `Lightwheel RoboCasa Tasks <https://github.com/LightwheelAI/LW-BenchHub>`_ — 138+ open-source tasks,
  50 datasets per task, 7+ robots
- `Lightwheel LIBERO Tasks <https://github.com/LightwheelAI/LW-BenchHub>`_ — Adapted LIBERO benchmarks
- `RoboTwin 2.0 <https://github.com/RoboTwin-Platform/RoboTwin/tree/IsaacLab-Arena>`_ — Extended simulation
  benchmarks using Arena (`arxiv <https://arxiv.org/abs/2603.08164>`_)
- `LeRobot Environment Hub <https://huggingface.co/blog/nvidia/generalist-robotpolicy-eval-isaaclab-arena-lerobot>`_ — Share
  and discover Arena environments on Hugging Face
- **Coming Soon:** NIST Board 1, NVIDIA Isaac GR00T Industrial Benchmarks, NVIDIA DexBench, NVIDIA RoboLab, and more.


**Developer Preview Branches**

- **A developer preview of Isaac Lab-Arena 0.2 (based on Isaac Lab 2.3) is now available** on
  `main <https://github.com/isaac-sim/IsaacLab-Arena/tree/main>`_.
  This early version includes the 0.2 features and is meant for users who can accept some instability.
- **Isaac Lab-Arena 0.2 on Isaac Lab 3.0 is underway in a dedicated feature branch**,
  `feature/isaac_lab_3_newton <https://github.com/isaac-sim/IsaacLab-Arena/tree/feature/isaac_lab_3_newton>`_.
  This branch is subject to significant changes and instability, as Isaac Lab 3.0 (Newton) is evolving quickly.
- **The official, stable, and tested release of Isaac Lab-Arena 0.2 on Isaac Lab 3.0 is coming in April 2026.**


**Collaboration**

Isaac Lab-Arena is being developed as an open-source, shared evaluation framework that the community can
collectively enhance and expand. We invite you to try Isaac Lab-Arena 0.2 Alpha, share feedback, and help
shape its future. In the Alpha stage, development velocity is high and core features and APIs are evolving;
your input now is especially valuable.

**What's Next**

Future releases will focus on agentic, prompt-first scene and task generation; non-sequential long-horizon
tasks; easy-to-configure sensitivity analysis, with targeted environment variations and evaluation sweeps that
require no code changes; enhanced heterogeneity across parallel evaluations; and VLM-augmented analysis to surface
insights from large-scale evaluations. These will come with ongoing improvements to performance and usability,
such as pip packaging.

**Limitations**

- pip install support is coming soon; the current installation method is Docker-based.
- Performance is not yet hardened for production-scale workloads in the Alpha stage.



v0.1.1
------