docs/src/content/docs/reference/components/amazon-bedrock.md (6 additions, 0 deletions)

@@ -56,6 +56,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the model uses to generate the next token. |

@@ -75,6 +76,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the model uses to generate the next token. |

@@ -93,6 +95,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |

@@ -116,6 +119,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Min Tokens | INTEGER | INTEGER | The minimum number of tokens to generate in the chat completion. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Prompt | STRING | TEXT | The text which the model is requested to continue. |

@@ -141,6 +145,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |

@@ -158,6 +163,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
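The Temperature, Top P, and Top K parameters above combine into one sampling pipeline. A minimal sketch of how they interact, using toy logits rather than any provider's actual implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Toy temperature / top-k / top-p (nucleus) sampling over a logit dict."""
    # Temperature: divide logits before softmax; lower values sharpen the
    # distribution (more deterministic), higher values flatten it (more random).
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    # Numerically stable softmax.
    mx = max(scaled.values())
    exps = {tok: math.exp(v - mx) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)
    # Top K: keep only the k most probable token choices.
    if top_k > 0:
        probs = probs[:top_k]
    # Top P: keep the smallest prefix whose cumulative mass reaches top_p,
    # so top_p=0.1 keeps only tokens in the top 10% probability mass.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize and sample from the truncated distribution.
    z = sum(p for _, p in kept)
    r, acc = random.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

logits = {"cat": 4.0, "dog": 2.0, "fish": 0.5}
print(sample_next_token(logits, temperature=0.1))  # almost certainly "cat"
```

Note how the knobs compose: Top K truncates by count, Top P by probability mass, and Temperature reshapes the distribution before either cutoff applies.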
docs/src/content/docs/reference/components/anthropic.md (1 addition, 0 deletions)

@@ -54,6 +54,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the model uses to generate the next token. |
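The `[{STRING\(content), STRING\(role)}]` shape of the Messages property corresponds to a conversation history like the following sketch (the specific role names follow the common chat convention and are an assumption here, not taken from this diff):

```python
# A conversation history matching the [{STRING(content), STRING(role)}] shape:
# an array of objects, each with exactly two string fields.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize nucleus sampling in one sentence."},
]

# Each entry must carry exactly the two string fields the type declares.
for message in messages:
    assert set(message) == {"role", "content"}
    assert all(isinstance(v, str) for v in message.values())
print(len(messages))  # → 2
```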
docs/src/content/docs/reference/components/azure-openai.md (1 addition, 0 deletions)

@@ -54,6 +54,7 @@ Ask anything you want.
 | Model | STRING | TEXT | Deployment name, written as a string. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
docs/src/content/docs/reference/components/groq.md (1 addition, 0 deletions)

@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | TEXT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
docs/src/content/docs/reference/components/mistral.md (1 addition, 0 deletions)

@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
docs/src/content/docs/reference/components/nvidia.md (1 addition, 0 deletions)

@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | TEXT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
docs/src/content/docs/reference/components/ollama.md (1 addition, 0 deletions)

@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Keep alive for | STRING | TEXT | Controls how long the model will stay loaded into memory following the request. |
 | Num predict | INTEGER | INTEGER | Maximum number of tokens to predict when generating text. (-1 = infinite generation, -2 = fill context) |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
docs/src/content/docs/reference/components/openai.md (1 addition, 0 deletions)

@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. |
docs/src/content/docs/reference/components/watsonx.md (1 addition, 0 deletions)

@@ -57,6 +57,7 @@ Ask anything you want.
 | Model | STRING | TEXT | Identifier of the LLM model to be used. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Decoding Method | STRING | TEXT | Decoding is the process that a model uses to choose the tokens in the generated output. |
 | Repetition Penalty | NUMBER | NUMBER | Sets how strongly to penalize repetitions. A higher value (e.g., 1.8) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. |
 | Min Tokens | INTEGER | INTEGER | Sets the minimum number of tokens the LLM must generate. |
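The Response Schema rows added across all of these components accept a JSON schema as text. As a sketch of what a user might paste into that field (the exact schema dialect each provider supports is not specified in this diff, so the schema below is a hypothetical example), together with a minimal structural check of a model reply:

```python
import json

# Hypothetical schema a user might paste into the Response Schema text area.
response_schema = """
{
  "type": "object",
  "properties": {
    "title":    {"type": "string"},
    "keywords": {"type": "array", "items": {"type": "string"}}
  },
  "required": ["title", "keywords"]
}
"""

schema = json.loads(response_schema)
reply = json.loads('{"title": "Nucleus sampling", "keywords": ["top_p", "sampling"]}')

# Minimal structural check; a real setup would use a full JSON Schema validator.
missing = [key for key in schema["required"] if key not in reply]
print(missing)  # → []
```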