@cyang49 (Contributor) commented on Oct 6, 2025

Purpose

This PR incrementally adds FP8 Mamba SSM cache support. The cache management can already allocate the right size when FP8 is enabled by setting mamba_ssm_cache_dtype=fp8. However, naively enabling this causes the Mamba state to be cast (instead of scaled) to a higher-precision type for computation, and the store-back is likewise a forced type cast rather than a scaled quantization. This degrades output quality; the sketch below illustrates the difference.
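A minimal sketch of the distinction, not the PR's implementation: per-tensor dynamic scaling on load/store versus the current plain dtype cast. The helper names, tensor layout, and the choice of float32 as the compute dtype are illustrative assumptions.

```python
import torch

FP8 = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8).max  # 448.0 for e4m3fn

def naive_store(ssm_state: torch.Tensor) -> torch.Tensor:
    # What happens today: values outside the fp8 range saturate and small
    # values lose precision, which degrades output quality.
    return ssm_state.to(FP8)

def scaled_store(ssm_state: torch.Tensor):
    # Per-tensor dynamic scaling: map the observed amax onto the fp8 range
    # and keep the scale alongside the quantized state.
    scale = ssm_state.abs().amax().clamp(min=1e-12) / FP8_MAX
    q = (ssm_state / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8)
    return q, scale

def scaled_load(q: torch.Tensor, scale: torch.Tensor,
                compute_dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # Dequantize back to the compute dtype before the SSM recurrence.
    return q.to(compute_dtype) * scale
```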

We will add support incrementally:

Basic requirements

  • support static/dynamic per-tensor scales (a sketch of scale computation follows this list)
  • support finer-grained (per-head, per-token, per-group) scales
  • support dynamic scaling
  • support fp8 in prefix-caching use cases
  • memory footprint measurements to ensure correctness of the implementation
  • performance (latency/throughput) measurements
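A hedged sketch of the scale granularities listed above, assuming an SSM state cache laid out as (num_cache_lines, num_heads, head_dim, d_state); the actual layout and naming in vLLM's Mamba cache may differ. A static scale would be calibrated offline and passed in instead of being recomputed each step.

```python
import torch

FP8 = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8).max

def per_tensor_scale(state: torch.Tensor) -> torch.Tensor:
    # One scale for the whole cache: cheapest, but outlier heads dominate.
    return state.abs().amax().clamp(min=1e-12) / FP8_MAX

def per_head_scale(state: torch.Tensor) -> torch.Tensor:
    # One scale per (cache line, head): reduce over the trailing dims only.
    return state.abs().amax(dim=(-2, -1), keepdim=True).clamp(min=1e-12) / FP8_MAX

def quantize(state: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Broadcasting handles both the scalar and the per-head scale shapes.
    return (state / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8)
```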

Further optimizations

  • fuse dequantization/quantization into the Mamba kernels (conceptual sketch below)
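As a rough illustration only, not the PR's kernels: a fused update would dequantize the fp8 state in registers, apply the state update, and re-quantize before storing, avoiding a separate dequantization pass over the cache in global memory. The pointer layout, the simplified update s = s * dA + dBx, and the single per-tensor scale are all assumptions.

```python
import triton
import triton.language as tl

@triton.jit
def fused_ssm_update_fp8(state_ptr, scale_ptr, dA_ptr, dBx_ptr, n_elements,
                         BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    scale = tl.load(scale_ptr)  # per-tensor scale (assumption)
    # Dequantize the fp8 state in registers instead of a separate pass.
    s = tl.load(state_ptr + offs, mask=mask).to(tl.float32) * scale
    dA = tl.load(dA_ptr + offs, mask=mask).to(tl.float32)
    dBx = tl.load(dBx_ptr + offs, mask=mask).to(tl.float32)
    s = s * dA + dBx  # simplified SSM state update
    # Re-quantize with the same scale before storing back to the fp8 cache.
    q = tl.minimum(tl.maximum(s / scale, -448.0), 448.0).to(tl.float8e4nv)
    tl.store(state_ptr + offs, q, mask=mask)
```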

Test Plan

We will use a few hybrid model checkpoints; a possible invocation is sketched below.
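Something along these lines, assuming the existing mamba_ssm_cache_dtype option is reachable through the offline LLM entry point (exact plumbing may differ); the checkpoint name is a placeholder, not a committed test model.

```python
from vllm import LLM, SamplingParams

MODEL = "<hybrid-mamba-checkpoint>"  # placeholder, not a chosen test model
prompts = ["The capital of France is"]
params = SamplingParams(temperature=0.0, max_tokens=32)

# Compare greedy outputs with the default cache dtype vs. the fp8 SSM cache.
for cache_dtype in ("auto", "fp8"):
    llm = LLM(model=MODEL, mamba_ssm_cache_dtype=cache_dtype)
    out = llm.generate(prompts, params)
    print(cache_dtype, out[0].outputs[0].text)
```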

Test Result

To be added


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.
