
Commit 6ed6248

rename references to get_chat_template in doc strings
Parent: eaf0782

1 file changed: llama-cpp-2/src/model.rs (+2 -2)

@@ -36,7 +36,7 @@ pub struct LlamaLoraAdapter {
     pub(crate) lora_adapter: NonNull<llama_cpp_sys_2::llama_adapter_lora>,
 }
 
-/// A performance-friendly wrapper around [LlamaModel::get_chat_template] which is then
+/// A performance-friendly wrapper around [LlamaModel::chat_template] which is then
 /// fed into [LlamaModel::apply_chat_template] to convert a list of messages into an LLM
 /// prompt. Internally the template is stored as a CString to avoid round-trip conversions
 /// within the FFI.
@@ -627,7 +627,7 @@ impl LlamaModel {
     /// use "chatml", then just do `LlamaChatTemplate::new("chatml")` or any other model name or template
     /// string.
     ///
-    /// Use [Self::get_chat_template] to retrieve the template baked into the model (this is the preferred
+    /// Use [Self::chat_template] to retrieve the template baked into the model (this is the preferred
     /// mechanism as using the wrong chat template can result in really unexpected responses from the LLM).
     ///
     /// You probably want to set `add_ass` to true so that the generated template string ends with the
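
Taken together, the doc strings in this diff describe a two-step flow: obtain a LlamaChatTemplate (preferably from the model via the renamed chat_template, or by name with LlamaChatTemplate::new("chatml")), then feed it to apply_chat_template to render a message list into a prompt. Below is a minimal sketch of that flow; the exact signatures (chat_template(None) taking an optional template name, LlamaChatMessage::new(role, content), the trailing add_ass flag) are assumptions inferred from these doc strings rather than verified against the crate.

use llama_cpp_2::model::{LlamaChatMessage, LlamaModel};

// Sketch only: signatures below are assumed, not taken verbatim from llama-cpp-2.
fn build_prompt(model: &LlamaModel) -> Result<String, Box<dyn std::error::Error>> {
    // Preferred route per the doc string: pull the template baked into the
    // model. ASSUMPTION: `None` asks for the model's default template; the
    // fallback named above would be `LlamaChatTemplate::new("chatml")`.
    let template = model.chat_template(None)?;

    let chat = vec![
        LlamaChatMessage::new("system".into(), "You are a helpful assistant.".into())?,
        LlamaChatMessage::new("user".into(), "Summarize this commit.".into())?,
    ];

    // `add_ass = true` so the rendered prompt ends with the assistant prefix,
    // cueing the model to start its reply (see the `add_ass` note above).
    let prompt = model.apply_chat_template(&template, &chat, true)?;
    Ok(prompt)
}

Keeping the template in a dedicated LlamaChatTemplate (a CString internally) means it crosses the FFI boundary once and is reused for every render, which is what the "performance-friendly wrapper" comment in the first hunk is getting at.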
