FastLM
We develop efficient language models for large-scale, distributed, parallel, and sparse scenarios.
Popular repositories
- CXL-SpecKV (Public): [FPGA'26 Highlight] A Disaggregated FPGA Speculative KV-Cache for Datacenter LLM Serving
- CSV-Decode (Public): Certifiable Sub-Vocabulary Decoding for Efficient Large Language Model Inference (Python, 12)
- tinyserve-vllm (Public): [ACM MM 2025 Oral] TinyServe: Query-Aware Page Allocation Optimization
Repositories
Showing 8 of 8 repositories