
Commit 96a6cfb

monikakusterivicac authored and committed
1639 - docs - generated
1 parent cd67137 commit 96a6cfb

File tree

10 files changed: +15 −0 lines changed


docs/src/content/docs/reference/components/amazon-bedrock.md

Lines changed: 6 additions & 0 deletions
@@ -56,6 +56,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. |
@@ -75,6 +76,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. |
@@ -93,6 +95,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
@@ -116,6 +119,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Min Tokens | INTEGER | INTEGER | The minimum number of tokens to generate in the chat completion. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Prompt | STRING | TEXT | The text which the model is requested to continue. |
@@ -141,6 +145,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
@@ -158,6 +163,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
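
The Response Schema property added throughout this commit takes a JSON schema describing the structure the model's reply should conform to. As a minimal sketch (the property names here are illustrative, and the exact schema dialect each provider honors may differ), a value pasted into the Response Schema text area could look like:

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "sentiment": { "type": "string", "enum": ["positive", "neutral", "negative"] }
  },
  "required": ["summary", "sentiment"]
}
```

Paired with a structured choice under Response Format, a schema like this asks the model to return a JSON object with those fields rather than free-form text.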

docs/src/content/docs/reference/components/anthropic.md

Lines changed: 1 addition & 0 deletions
@@ -54,6 +54,7 @@ Ask anything you want.
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
 | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. |

docs/src/content/docs/reference/components/azure-openai.md

Lines changed: 1 addition & 0 deletions
@@ -54,6 +54,7 @@ Ask anything you want.
 | Model | STRING | TEXT | Deployment name, written as a string. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |

docs/src/content/docs/reference/components/groq.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | TEXT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |

docs/src/content/docs/reference/components/hugging-face.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | URL | STRING | TEXT | URL of the inference endpoint. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
docs/src/content/docs/reference/components/mistral.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |
 | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |

docs/src/content/docs/reference/components/nvidia.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | TEXT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |

docs/src/content/docs/reference/components/ollama.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Keep alive for | STRING | TEXT | Controls how long the model will stay loaded into memory following the request. |
 | Num predict | INTEGER | INTEGER | Maximum number of tokens to predict when generating text. (-1 = infinite generation, -2 = fill context) |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |

docs/src/content/docs/reference/components/openai.md

Lines changed: 1 addition & 0 deletions
@@ -53,6 +53,7 @@ Ask anything you want.
 | Model | STRING | SELECT | ID of the model to use. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
 | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
 | Temperature | NUMBER | NUMBER | Controls randomness: Higher values will make the output more random, while lower values will make it more focused and deterministic. |

docs/src/content/docs/reference/components/watsonx.md

Lines changed: 1 addition & 0 deletions
@@ -57,6 +57,7 @@ Ask anything you want.
 | Model | STRING | TEXT | Model is the identifier of the LLM model to be used. |
 | Messages | [{STRING\(content), STRING\(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
 | Response Format | INTEGER | SELECT | In which format do you want the response to be? |
+| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
 | Decoding Method | STRING | TEXT | Decoding is the process that a model uses to choose the tokens in the generated output. |
 | Repetition Penalty | NUMBER | NUMBER | Sets how strongly to penalize repetitions. A higher value (e.g., 1.8) will penalize repetitions more strongly, while a lower value (e.g., 1.1) will be more lenient. |
 | Min Tokens | INTEGER | INTEGER | Sets how many tokens the LLM must generate. |
