
Commit 6f5d870

[InferenceClient] flag chat_completion()'s logit_bias as UNUSED (#2724)
* update logit bias doc
* improve unused parameters documentation
1 parent 438f2fb commit 6f5d870

File tree: 2 files changed (+4, -14 lines)


src/huggingface_hub/inference/_client.py

Lines changed: 2 additions & 7 deletions
@@ -576,25 +576,20 @@ def chat_completion(
         The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
         Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
         See https://huggingface.co/tasks/text-generation for more details.
-
         If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
         custom URL while setting `model` in the request payload, you must set `base_url` when initializing [`InferenceClient`].
     frequency_penalty (`float`, *optional*):
         Penalizes new tokens based on their existing frequency
         in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
     logit_bias (`List[float]`, *optional*):
-        Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens
-        (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically,
-        the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model,
-        but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should
-        result in a ban or exclusive selection of the relevant token. Defaults to None.
+        UNUSED. Currently not implemented in text-generation-inference (TGI). Kept as a parameter for OpenAI compatibility.
     logprobs (`bool`, *optional*):
         Whether to return log probabilities of the output tokens or not. If true, returns the log
         probabilities of each output token returned in the content of message.
     max_tokens (`int`, *optional*):
         Maximum number of tokens allowed in the response. Defaults to 100.
     n (`int`, *optional*):
-        UNUSED.
+        UNUSED. Currently not implemented in text-generation-inference (TGI). Kept as a parameter for OpenAI compatibility.
     presence_penalty (`float`, *optional*):
         Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
         text so far, increasing the model's likelihood to talk about new topics.
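The updated docstring applies to `InferenceClient.chat_completion()`. Below is a minimal sketch of a call that passes these parameters; the model ID is only an assumed example, and per the new documentation `logit_bias` and `n` are accepted solely for OpenAI compatibility and are ignored by text-generation-inference:

from huggingface_hub import InferenceClient

client = InferenceClient()  # optionally pass model=... or base_url=... here instead

response = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello in one word."}],
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed example model ID
    frequency_penalty=0.0,  # range [-2.0, 2.0], defaults to 0.0
    max_tokens=100,         # defaults to 100
    logit_bias=None,        # UNUSED: not implemented in TGI, kept for OpenAI compatibility
    n=None,                 # UNUSED: not implemented in TGI, kept for OpenAI compatibility
)
print(response.choices[0].message.content)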

src/huggingface_hub/inference/_generated/_async_client.py

Lines changed: 2 additions & 7 deletions
@@ -612,25 +612,20 @@ async def chat_completion(
         The model to use for chat-completion. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed
         Inference Endpoint. If not provided, the default recommended model for chat-based text-generation will be used.
         See https://huggingface.co/tasks/text-generation for more details.
-
         If `model` is a model ID, it is passed to the server as the `model` parameter. If you want to define a
         custom URL while setting `model` in the request payload, you must set `base_url` when initializing [`InferenceClient`].
     frequency_penalty (`float`, *optional*):
         Penalizes new tokens based on their existing frequency
         in the text so far. Range: [-2.0, 2.0]. Defaults to 0.0.
     logit_bias (`List[float]`, *optional*):
-        Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens
-        (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically,
-        the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model,
-        but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should
-        result in a ban or exclusive selection of the relevant token. Defaults to None.
+        UNUSED. Currently not implemented in text-generation-inference (TGI). Kept as a parameter for OpenAI compatibility.
     logprobs (`bool`, *optional*):
         Whether to return log probabilities of the output tokens or not. If true, returns the log
         probabilities of each output token returned in the content of message.
     max_tokens (`int`, *optional*):
         Maximum number of tokens allowed in the response. Defaults to 100.
     n (`int`, *optional*):
-        UNUSED.
+        UNUSED. Currently not implemented in text-generation-inference (TGI). Kept as a parameter for OpenAI compatibility.
     presence_penalty (`float`, *optional*):
         Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the
         text so far, increasing the model's likelihood to talk about new topics.
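The same documentation change is mirrored in the generated async client. A minimal sketch of the equivalent call with `AsyncInferenceClient`, under the same assumptions as the synchronous example above:

import asyncio

from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient()
    response = await client.chat_completion(
        messages=[{"role": "user", "content": "Say hello in one word."}],
        model="meta-llama/Llama-3.1-8B-Instruct",  # assumed example model ID
        max_tokens=100,
        logit_bias=None,  # UNUSED: ignored by TGI, kept for OpenAI compatibility
        n=None,           # UNUSED: ignored by TGI, kept for OpenAI compatibility
    )
    print(response.choices[0].message.content)

asyncio.run(main())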
