Update on the development branch #1765
kaiyux announced in Announcements
Hi,
The TensorRT-LLM team is pleased to announce that we are pushing an update to the development branch (and the Triton backend) on June 11, 2024.
This update includes:

- Phi model support updates, see `examples/phi/README.md`.
- `max_batch_size` in the `trtllm-build` command is 256 by default now.
- `max_num_tokens` in the `trtllm-build` command is 8192 by default now.
- `api` in the `gptManagerBenchmark` command is `executor` by default now (command-line sketches for these new defaults follow this list).
- Added a `bias` argument to the `LayerNorm` module, which now supports non-bias layer normalization (see the sketch after this list).
- Updated the `LLM.generate()` API:
  - Replaced `SamplingConfig` with `SamplingParams`, which carries the sampling parameters, see `tensorrt_llm/hlapi/utils.py`.
  - Use `SamplingParams` instead of `SamplingConfig` in the `LLM.generate()` API, see `examples/high-level-api/README.md` (a usage sketch follows this list).
- Updated the `GptManager` API:
  - Moved `maxBeamWidth` into `TrtGptModelOptionalParams`.
  - Moved `schedulerConfig` into `TrtGptModelOptionalParams`.
- Fixed a `convert_hf_mpt_legacy` call failure when the function is called outside the global scope, thanks to the contribution from @bloodeagle40234 in "Define hf_config explisitly for convert_hf_mpt_legacy" (#1534).
- Fixed `use_fp8_context_fmha` broken outputs (#1539); a build-flag sketch follows this list.
- Added `--ipc=host` notes to the installation guide to prevent a bus error, see `docs/source/installation/build-from-source-linux.md` and `docs/source/installation/linux.md` ("Bus error running t5 conversion script using the latest main", #1538); a container-launch sketch follows this list.
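A few hedged sketches of the items above follow. First, since the new `trtllm-build` and `gptManagerBenchmark` defaults change behavior for workflows that relied on the old ones, here is a minimal sketch of pinning the limits and the benchmark API explicitly. The paths and values are placeholders; `--max_batch_size`, `--max_num_tokens`, and `--api` match the announcement, but double-check spellings against each tool's `--help`.

```bash
# Pin the engine limits explicitly instead of inheriting the new defaults
# (the values below are illustrative, not recommendations).
trtllm-build --checkpoint_dir ./ckpt \
             --output_dir ./engine \
             --max_batch_size 64 \
             --max_num_tokens 4096

# Select the benchmark API explicitly: `executor` is now the default, and
# `gptManager` opts back into the legacy path.
gptManagerBenchmark --engine_dir ./engine --dataset ./dataset.json --api gptManager
```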
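The new `bias` argument can be exercised as below. This is a minimal sketch assuming `LayerNorm` is importable from `tensorrt_llm.layers` and that the first constructor argument is the normalized shape; neither detail is spelled out in the announcement.

```python
# Minimal sketch, assuming this import path and constructor signature.
from tensorrt_llm.layers import LayerNorm

# bias=False skips creating the bias parameter, for architectures that use
# non-bias layer normalization; bias=True keeps the previous behavior.
ln = LayerNorm(normalized_shape=4096, bias=False)
```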
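To illustrate the `SamplingConfig` to `SamplingParams` migration, here is a hedged sketch of the high-level API. The import path follows the `tensorrt_llm/hlapi` location referenced above, but the constructor fields and the `generate()` signature are assumptions; treat `examples/high-level-api/README.md` as authoritative.

```python
# Hedged sketch of the reworked LLM.generate() API.
from tensorrt_llm.hlapi import LLM, SamplingParams  # import path assumed

llm = LLM(model="/path/to/model")  # hypothetical model location; ctor assumed

# SamplingParams now carries the knobs that used to live in SamplingConfig;
# the field names below are assumptions, see tensorrt_llm/hlapi/utils.py.
params = SamplingParams(max_new_tokens=64, temperature=0.8, top_p=0.95)

for output in llm.generate(["Hello, my name is"], params):
    print(output)
```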
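The `use_fp8_context_fmha` fix concerns a build-time option. Below is a hedged sketch of enabling it, assuming the usual pairing with paged context FMHA on an FP8-quantized checkpoint; the exact prerequisites may differ per version, so check `trtllm-build --help`.

```bash
# Hedged sketch: enabling the code path whose outputs were fixed.
# Assumes an FP8-quantized checkpoint; prerequisites are assumptions.
trtllm-build --checkpoint_dir ./ckpt_fp8 \
             --output_dir ./engine_fp8 \
             --use_paged_context_fmha enable \
             --use_fp8_context_fmha enable
```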
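Finally, the `--ipc=host` note exists because Docker's default shared-memory segment is small and multi-process runs can hit a bus error. A minimal launch sketch follows; the image tag is illustrative, so use whatever image the linked installation docs specify.

```bash
# --ipc=host shares the host's IPC namespace so the container is not capped
# by Docker's small default /dev/shm; this avoids the bus error in #1538.
docker run --rm -it --gpus all --ipc=host \
    nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3 bash
```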
Thanks,
The TensorRT-LLM Engineering Team