Commit 752a4b9

hanouticelina authored and github-actions[bot] committed
Update API inference documentation (automated)
1 parent 8736fb4 commit 752a4b9

15 files changed: 16 additions & 16 deletions

docs/api-inference/tasks/audio-classification.md

Lines changed: 1 addition & 1 deletion
@@ -104,7 +104,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Audio Classification |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;function_to_apply** | _enum_ | Possible values: sigmoid, softmax, none. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | When specified, limits the output to the top K most probable classes. |
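
The payload documented in this table can be exercised with a plain HTTP request. A minimal sketch, assuming the serverless endpoint `https://api-inference.huggingface.co/models/<model_id>` and placeholder model id, token, and file name:

```python
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/<model_id>"  # placeholder model id
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

# Base64-encode the audio so it can sit inside a JSON body next to `parameters`.
with open("sample.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {"function_to_apply": "softmax", "top_k": 3},
}
response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())  # a list of {"label": ..., "score": ...} candidates
```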

docs/api-inference/tasks/automatic-speech-recognition.md

Lines changed: 2 additions & 2 deletions
@@ -105,9 +105,9 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Automatic Speech Recognition |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return_timestamps** | _boolean_ | Whether to output corresponding timestamps with the generated text |
-| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generation_parameters** | _object_ | Ad-hoc parametrization of the text generation process |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generation_parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temperature** | _number_ | The value used to modulate the next token probabilities. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | The number of highest probability vocabulary tokens to keep for top-k-filtering. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_p** | _number_ | If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. |
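
Since `generation_parameters` nests inside `parameters`, a sketch of the corresponding JSON body may help; the model id, token, and file name below are placeholders:

```python
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/<asr_model_id>"  # placeholder model id
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

with open("speech.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {
        "return_timestamps": True,
        # generation_parameters tunes the text-generation step of the decoder
        "generation_parameters": {"temperature": 0.7, "top_k": 50, "top_p": 0.95},
    },
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```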

docs/api-inference/tasks/fill-mask.md

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The text with masked tokens |
-| **parameters** | _object_ | Additional inference parameters for Fill Mask |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | When passed, overrides the number of predictions to return. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;targets** | _string[]_ | When passed, the model will limit the scores to the passed targets instead of looking up in the whole vocabulary. If the provided targets are not in the model vocab, they will be tokenized and the first resulting token will be used (with a warning, and that might be slower). |
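
A hedged example of the fill-mask payload; the mask token (`[MASK]` vs. `<mask>`) depends on the chosen model, and the model id and token are placeholders:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<fill_mask_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": "Paris is the [MASK] of France.",  # mask token depends on the model's tokenizer
    "parameters": {"top_k": 5, "targets": ["capital", "center"]},
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```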

docs/api-inference/tasks/image-classification.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input image data as a base64-encoded string. If no `parameters` are provided, you can also provide the image data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Image Classification |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;function_to_apply** | _enum_ | Possible values: sigmoid, softmax, none. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | When specified, limits the output to the top K most probable classes. |
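
As the `inputs` row notes, the image can also be sent as a raw bytes payload when no `parameters` are needed; a minimal sketch with placeholder model id, token, and file name:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<image_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

# With no `parameters`, the image travels directly as raw bytes instead of base64 JSON.
with open("cat.jpg", "rb") as f:
    response = requests.post(API_URL, headers=HEADERS, data=f.read())
print(response.json())  # a list of {"label": ..., "score": ...} candidates
```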

docs/api-inference/tasks/image-segmentation.md

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input image data as a base64-encoded string. If no `parameters` are provided, you can also provide the image data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Image Segmentation |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;mask_threshold** | _number_ | Threshold to use when turning the predicted masks into binary values. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;overlap_mask_area_threshold** | _number_ | Mask overlap threshold to eliminate small, disconnected segments. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;subtask** | _enum_ | Possible values: instance, panoptic, semantic. |
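
To pass `subtask` or the threshold parameters, the image has to travel as a base64 string inside a JSON body; an illustrative sketch with placeholder model id, token, and file name:

```python
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/<segmentation_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

with open("street.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": image_b64,
    "parameters": {"subtask": "semantic", "mask_threshold": 0.5},
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```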

docs/api-inference/tasks/image-to-image.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ No snippet available for this task.
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input image data as a base64-encoded string. If no `parameters` are provided, you can also provide the image data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Image To Image |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;guidance_scale** | _number_ | For diffusion models. A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;negative_prompt** | _string[]_ | One or several prompt to guide what NOT to include in image generation. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;num_inference_steps** | _integer_ | For diffusion models. The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. |
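
The page notes that no snippet is available for this task, so the following is only an illustrative sketch of the documented payload shape, with placeholder model id, token, and file names, and the assumption that the generated image comes back as raw bytes:

```python
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/<image_to_image_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

with open("sketch.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": image_b64,
    "parameters": {
        "guidance_scale": 7.5,
        "negative_prompt": ["blurry", "low quality"],
        "num_inference_steps": 30,
    },
}
response = requests.post(API_URL, headers=HEADERS, json=payload)
with open("output.png", "wb") as f:
    f.write(response.content)  # assumes the generated image is returned as raw bytes
```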

docs/api-inference/tasks/object-detection.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input image data as a base64-encoded string. If no `parameters` are provided, you can also provide the image data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Object Detection |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;threshold** | _number_ | The probability necessary to make a prediction. |
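
A short sketch of an object-detection call using the single documented parameter, `threshold`; model id, token, and file name are placeholders:

```python
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/<detection_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

with open("scene.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# `threshold` drops detections whose score falls below the given probability.
payload = {"inputs": image_b64, "parameters": {"threshold": 0.9}}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```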

docs/api-inference/tasks/question-answering.md

Lines changed: 1 addition & 1 deletion
@@ -109,7 +109,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | **inputs*** | _object_ | One (context, question) pair to answer |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;context*** | _string_ | The context to be used for answering the question |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;question*** | _string_ | The question to be answered |
-| **parameters** | _object_ | Additional inference parameters for Question Answering |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | The number of answers to return (will be chosen by order of likelihood). Note that we return less than topk answers if there are not enough options available within the context. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;doc_stride** | _integer_ | If the context is too long to fit with the question for the model, it will be split in several chunks with some overlap. This argument controls the size of that overlap. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;max_answer_len** | _integer_ | The maximum length of predicted answers (e.g., only answers with a shorter length are considered). |
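
Because `inputs` is an object here rather than a string, the payload looks like the following sketch; the model id and token are placeholders and the question/context pair is invented for illustration:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<qa_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": {
        "question": "Where does Amy live?",
        "context": "Amy lives in Lyon and works as an engineer.",
    },
    "parameters": {"top_k": 2, "max_answer_len": 15},
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```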

docs/api-inference/tasks/summarization.md

Lines changed: 1 addition & 1 deletion
@@ -99,7 +99,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input text to summarize. |
-| **parameters** | _object_ | Additional inference parameters for summarization. |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;clean_up_tokenization_spaces** | _boolean_ | Whether to clean up the potential extra spaces in the text output. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;truncation** | _enum_ | Possible values: do_not_truncate, longest_first, only_first, only_second. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generate_parameters** | _object_ | Additional parametrization of the text generation algorithm. |
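
A minimal summarization request matching the table above, with placeholder model id, token, and input text:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<summarization_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": "Long article text to condense goes here...",  # placeholder input
    "parameters": {
        "clean_up_tokenization_spaces": True,
        "truncation": "longest_first",
    },
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```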

docs/api-inference/tasks/table-question-answering.md

Lines changed: 1 addition & 1 deletion
@@ -125,7 +125,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | **inputs*** | _object_ | One (table, question) pair to answer |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;table*** | _object_ | The table to serve as context for the questions |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;question*** | _string_ | The question to be answered about the table |
-| **parameters** | _object_ | Additional inference parameters for Table Question Answering |
+| **parameters** | _object_ | |

 Some options can be configured by passing headers to the Inference API. Here are the available headers:
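
A sketch of the (table, question) payload documented in this file's table; the column-name-to-cell-values layout of `table` is an assumption based on common table-question-answering models, and the model id and token are placeholders:

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/<table_qa_model_id>"  # placeholder
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token

payload = {
    "inputs": {
        # Assumed layout: the table as a mapping of column name -> list of cell values.
        "table": {
            "City": ["Paris", "Lyon", "Marseille"],
            "Population": ["2100000", "516000", "861000"],
        },
        "question": "Which city has the largest population?",
    },
}
print(requests.post(API_URL, headers=HEADERS, json=payload).json())
```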
