
Releases: amazon-science/chronos-forecasting

2.0.0

20 Oct 13:48
7a8427d


🚀 Introducing Chronos-2: From univariate to universal forecasting

This release adds support for Chronos-2, a 120M-parameter time series foundation model that offers zero-shot support for univariate, multivariate, and covariate-informed forecasting tasks. Chronos-2 delivers state-of-the-art zero-shot performance across multiple benchmarks (including fev-bench and GIFT-Eval), with the largest improvements on tasks that include exogenous features. In head-to-head comparisons, it outperforms its predecessor, Chronos-Bolt, more than 90% of the time.

📌 Get started with Chronos-2: Chronos-2 Quick Start

Chronos-2 offers significant improvements in capabilities and can handle diverse forecasting scenarios not supported by earlier models.

| Capability | Chronos | Chronos-Bolt | Chronos-2 |
|---|---|---|---|
| Univariate forecasting | ✅ | ✅ | ✅ |
| Cross-learning across items | ❌ | ❌ | ✅ |
| Multivariate forecasting | ❌ | ❌ | ✅ |
| Past-only (real/categorical) covariates | ❌ | ❌ | ✅ |
| Known future (real/categorical) covariates | 🧩 | 🧩 | ✅ |
| Fine-tuning support | ✅ | ✅ | ✅ |
| Max. context length | 512 | 2048 | 8192 |

🧩 Chronos/Chronos-Bolt do not natively support future covariates, but they can be combined with external covariate regressors (see the AutoGluon tutorial). This approach only models per-timestep covariate effects, not effects across time. In contrast, Chronos-2 supports all covariate types natively.
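The two-stage workaround in the note above can be sketched in a few lines. This is an illustrative sketch, not AutoGluon's implementation: a linear regressor (a hypothetical choice) captures per-timestep covariate effects, a seasonal-naive forecaster stands in for the univariate Chronos model, and the regressor's contribution is added back using the known future covariates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic series: weekly seasonality plus a per-timestep covariate effect.
n, horizon = 200, 14
covariate = rng.normal(size=n + horizon)            # known for past AND future
season = np.sin(2 * np.pi * np.arange(n + horizon) / 7)
target = season[:n] + 0.8 * covariate[:n] + 0.1 * rng.normal(size=n)

# 1) Fit a per-timestep regressor of the target on the covariate.
reg = LinearRegression().fit(covariate[:n, None], target)
residuals = target - reg.predict(covariate[:n, None])

# 2) Forecast the residuals with any univariate model; a seasonal-naive
#    forecaster stands in for Chronos/Chronos-Bolt here.
residual_forecast = np.tile(residuals[-7:], horizon // 7 + 1)[:horizon]

# 3) Add back the regressor's effect, computed from the known future covariates.
forecast = residual_forecast + reg.predict(covariate[n:, None])
```

Because the regressor sees only one timestep at a time, it captures contemporaneous covariate effects but no lagged ones, which is exactly the limitation the note describes.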

Figure 1: The complete Chronos-2 pipeline. Input time series (targets and covariates) are first normalized using a robust scaling scheme, after which a time index and mask meta features are added. The resulting sequences are split into non-overlapping patches and mapped to high-dimensional embeddings via a residual network. The core transformer stack operates on these patch embeddings and produces multi-patch quantile outputs corresponding to the future patches masked out in the input. Each transformer block alternates between time and group attention layers: the time attention layer aggregates information across patches within a single time series, while the group attention layer aggregates information across all series within a group at each patch index. The figure illustrates two multivariate time series with one known covariate each, with corresponding groups highlighted in blue and red. This example is for illustration purposes only; Chronos-2 supports arbitrary numbers of targets and optional covariates.
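The normalization-and-patching front end described in the caption can be illustrated at the shape level. The scaling scheme (median/IQR) and the patch length below are illustrative assumptions, not Chronos-2's actual internals; the mask array stands in for the "mask meta feature" the caption mentions.

```python
import numpy as np

def robust_scale(x: np.ndarray) -> tuple[np.ndarray, float, float]:
    """Scale by median and interquartile range (one robust scheme)."""
    loc = float(np.median(x))
    iqr = float(np.quantile(x, 0.75) - np.quantile(x, 0.25))
    scale = iqr if iqr > 0 else 1.0
    return (x - loc) / scale, loc, scale

def patchify(x: np.ndarray, patch_len: int) -> np.ndarray:
    """Split a series into non-overlapping patches, left-padding with NaN."""
    pad = (-len(x)) % patch_len
    padded = np.concatenate([np.full(pad, np.nan), x])
    return padded.reshape(-1, patch_len)

series = np.arange(10, dtype=float)
scaled, loc, scale = robust_scale(series)
patches = patchify(scaled, patch_len=4)   # (3, 4); first two slots are padding
mask = ~np.isnan(patches)                 # "observed" meta-feature per slot
```

Each row of `patches` would then be embedded by the residual network, and the mask tells the model which slots are real observations versus padding or masked-out future values.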

Figure 2: Results of experiments on the fev-bench time series benchmark. The average win rate and skill score are computed with respect to the scaled quantile loss (SQL) metric, which evaluates probabilistic forecasting performance. Higher values are better for both. Chronos-2 outperforms all existing pretrained models by a substantial margin on this comprehensive benchmark, which includes univariate, multivariate, and covariate-informed forecasting tasks.
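Win-rate and skill-score aggregates like those in Figure 2 can be computed from per-task SQL values. The formulas below are common conventions (fraction of tasks won, and one minus the geometric mean of loss ratios against a baseline); fev-bench's exact aggregation may differ in details such as tie handling, and the numbers here are made up.

```python
import numpy as np

# Hypothetical per-task SQL values for a model and a baseline (lower is better).
model_sql = np.array([0.8, 1.1, 0.5, 0.9])
baseline_sql = np.array([1.0, 1.0, 1.0, 1.0])

# Win rate: fraction of tasks where the model beats the baseline.
win_rate = float(np.mean(model_sql < baseline_sql))      # 0.75

# Skill score: 1 minus the geometric mean of loss ratios, so positive
# values mean the model improves on the baseline on average.
skill = 1.0 - float(np.exp(np.mean(np.log(model_sql / baseline_sql))))
```

The geometric mean keeps a single task with a very large or very small loss ratio from dominating the aggregate, which is why it is a common choice for cross-dataset summaries.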

Figure 3: Chronos-2 results in univariate mode and the corresponding gains from in-context learning (ICL), shown as stacked bars on the covariates subset of fev-bench. ICL delivers large gains on tasks with covariates, demonstrating Chronos-2’s ability to effectively use covariates through ICL. Besides Chronos-2, only TabPFN-TS and COSMIC support covariates, and Chronos-2 outperforms all baselines (including TabPFN-TS and COSMIC) by a wide margin.

Figure 4: Results on the GIFT-Eval time series benchmark. The average win rate and skill score with respect to the (a) probabilistic and (b) point forecasting metrics. Higher values are better for both win rate and skill score. Chronos-2 outperforms the previously best-performing models, TimesFM-2.5 and TiRex.

What's Changed

Full Changelog: v1.5.3...v2.0.0

2.0.0rc1

20 Oct 10:06
48cdf1f


Pre-release

Chronos-2 Pre-release

What's Changed

Full Changelog: v1.5.3...v2.0.0rc1

1.5.3

05 Aug 08:50
fcd09fe


What's Changed

  • Fix issue with new caching mechanism in transformers and bump versions by @abdulfatir in #313

Full Changelog: v1.5.2...v1.5.3

1.5.2

06 May 08:22
6a9c8da


v1.5.2 relaxes the upper bound on the `accelerate` dependency to `<2`.

What's Changed

New Contributors

Full Changelog: v1.5.1...v1.5.2

1.5.1

10 Apr 15:26
f40a266


🐛 Fixed an issue with forecasting constant series for Chronos-Bolt. See #294.

What's Changed

Full Changelog: v1.5.0...v1.5.1

1.5.0

06 Feb 15:38
73d6c9a


What's Changed

Full Changelog: v1.4.1...v1.5.0

1.4.1

04 Dec 17:38
133761a


What's Changed

Full Changelog: v1.4.0...v1.4.1

1.4.0

02 Dec 11:09
47cac08


Key Changes

  • `predict` and `predict_quantiles` now return predictions on the CPU in `float32`.

What's Changed

Full Changelog: v1.3.0...v1.4.0

1.3.0

28 Nov 12:41
ebaa13c


Highlight

Chronos-Bolt⚡: a 250x faster, more accurate Chronos model

Chronos-Bolt is our latest foundation model for forecasting. It is based on the T5 encoder-decoder architecture and has been trained on nearly 100 billion time series observations. It chunks the historical time series context into patches of multiple observations, which are then fed into the encoder. The decoder uses these representations to directly generate quantile forecasts across multiple future steps, a method known as direct multi-step forecasting. Chronos-Bolt models are up to 250 times faster and 20 times more memory-efficient than the original Chronos models of the same size.
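The quantile (pinball) loss is the standard objective behind quantile outputs like those described above; a direct multi-step head emits one value per (quantile, horizon step) pair in a single forward pass, and training averages the loss over both axes. The shapes and numbers below are illustrative, not taken from Chronos-Bolt's training code.

```python
import numpy as np

def pinball_loss(y: np.ndarray, yhat: np.ndarray, q: float) -> float:
    """Quantile (pinball) loss for a single quantile level q."""
    diff = y - yhat
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Direct multi-step quantile output: one row per quantile, one column per step.
quantiles = [0.1, 0.5, 0.9]
y = np.array([10.0, 12.0, 11.0])            # future targets, horizon = 3
yhat = np.array([[8.0, 9.5, 9.0],           # predicted 10th percentile
                 [10.5, 11.8, 11.2],        # predicted median
                 [13.0, 14.0, 13.5]])       # predicted 90th percentile

# Training objective: average the pinball loss over quantiles and steps.
loss = np.mean([pinball_loss(y, yhat[i], q) for i, q in enumerate(quantiles)])
```

The asymmetric penalty is what makes each row converge to its own quantile: under-prediction is penalized by `q` and over-prediction by `1 - q`.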

The following plot compares the inference time of Chronos-Bolt against the original Chronos models for forecasting 1024 time series with a context length of 512 observations and a prediction horizon of 64 steps.

Chronos-Bolt models are not only significantly faster but also more accurate than the original Chronos models. The following plot reports the probabilistic and point forecasting performance of Chronos-Bolt in terms of the Weighted Quantile Loss (WQL) and the Mean Absolute Scaled Error (MASE), respectively, aggregated over 27 datasets (see the Chronos paper for details on this benchmark). Remarkably, despite having no prior exposure to these datasets during training, the zero-shot Chronos-Bolt models outperform commonly used statistical models and deep learning models that were trained on these datasets (highlighted by *). They also outperform other foundation models (denoted by +), which were pretrained on some datasets in this benchmark and are therefore not entirely zero-shot. Notably, Chronos-Bolt (Base) also surpasses the original Chronos (Large) model in forecasting accuracy while being over 600 times faster.
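For reference, the two metrics can be computed as follows. These follow widely used definitions (pinball loss summed over quantiles and normalized by the target's total absolute value, and MAE scaled by the in-sample seasonal-naive MAE); the benchmark's exact conventions may differ in minor details.

```python
import numpy as np

def wql(y, q_preds, quantiles):
    """Weighted quantile loss: pinball loss per quantile, normalized by
    the total absolute value of the target, averaged over quantiles."""
    total = 0.0
    for q, yhat in zip(quantiles, q_preds):
        diff = y - yhat
        total += 2.0 * np.sum(np.maximum(q * diff, (q - 1) * diff))
    return total / (len(quantiles) * np.sum(np.abs(y)))

def mase(y, yhat, y_insample, season=1):
    """Mean absolute scaled error: forecast MAE relative to the MAE of
    the in-sample seasonal-naive forecast."""
    naive_mae = np.mean(np.abs(y_insample[season:] - y_insample[:-season]))
    return np.mean(np.abs(y - yhat)) / naive_mae

# A MASE below 1 means the forecast beats the in-sample naive baseline.
score = mase(np.array([5.0, 6.0]), np.array([4.0, 6.0]),
             np.array([1.0, 2.0, 3.0, 4.0]))   # 0.5
```

Both metrics are scale-free, which is what makes averaging them across 27 datasets with very different magnitudes meaningful.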

Chronos-Bolt models are now available on HuggingFace🤗 in four sizes—Tiny (9M), Mini (21M), Small (48M), and Base (205M)—and can also be used on the CPU. Check out the example in the README to learn how to use Chronos-Bolt models. You can use Chronos-Bolt models for forecasting in just a few lines of code.

import pandas as pd  # requires: pip install pandas
import torch
from chronos import BaseChronosPipeline

pipeline = BaseChronosPipeline.from_pretrained(
    "amazon/chronos-bolt-base", 
    device_map="cuda",  # use "cpu" for CPU inference
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv(
    "https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv"
)

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
# Chronos-Bolt models generate quantile forecasts, so forecast has shape
# [num_series, num_quantiles, prediction_length].
forecast = pipeline.predict(
    context=torch.tensor(df["#Passengers"]), prediction_length=12
)

Note

We have also integrated Chronos-Bolt models into AutoGluon, which offers a more feature-complete way of using Chronos models in production. With the addition of Chronos-Bolt models and other enhancements, AutoGluon v1.2 achieves a 70%+ win rate against AutoGluon v1.1! AutoGluon 1.2 also enables effortless fine-tuning of Chronos and Chronos-Bolt models. Check out the updated Chronos AutoGluon tutorial to learn how to use and fine-tune Chronos-Bolt models with AutoGluon.

What's Changed

New Contributors

Full Changelog: v1.2.0...v1.3.0

1.2.0

17 May 13:42
7a019b3


What's Changed

New Contributors

Full Changelog: v1.1.0...v1.2.0