
Commit 22492a2

Merge branch 'main' into DDOC-1176-appcenter-name-change
2 parents 06e3e7d + e06a6f7 commit 22492a2

36 files changed: +1269 −300 lines

.spelling

Lines changed: 19 additions & 1 deletion
@@ -315,4 +315,22 @@ GPT-3
 freeform
 pre-defined
 stringified
-params
+textembedding
+Gecko
+16k
+4k
+200k
+128k
+8k
+1k
+multimodal
+1m
+32k
+2k
+summarization
+GPT-4o
+Anthropic
+GPT-4o-2024-05-13
+text-embedding-ada-002
+params
+GPT-4o-mini

content/guides/api-calls/api-versioning-strategy.md

Lines changed: 4 additions & 0 deletions
@@ -214,6 +214,10 @@ Breaking changes in the Box API occur within versioned releases, typically accom
 We use [oasdiff](https://github.com/Tufin/oasdiff/blob/main/BREAKING-CHANGES-EXAMPLES.md) tool to detect most of the possible breaking changes.
 </Message>
 
+## AI agent configuration versioning
+
+[AI agent](g://box-ai/ai-agents) versioning gives developers more control over model version management and ensures consistent responses. For details, see the [AI agent configuration versioning guide](g://box-ai/ai-agents/ai-agent-versioning).
+
 ## Support policy and deprecation information
 
 When new versions of the Box APIs and Box SDKs are released, earlier versions will be retired. Box marks a version as `deprecated` at least 24 months before retiring it. In other words, a deprecated version cannot become end-of-life

content/guides/box-ai/ai-agents/ai-agent-versioning.md

Lines changed: 212 additions & 0 deletions
Large diffs are not rendered by default.

content/guides/box-ai/ai-agents/get-agent-default-config.md

Lines changed: 5 additions & 4 deletions
@@ -8,13 +8,14 @@ related_guides:
 - box-ai/prerequisites
 - box-ai/ask-questions
 - box-ai/generate-text
+- box-ai/extract-metadata
+- box-ai/extract-metadata-structured
 ---
 
 # Get default AI agent configuration
 
 <Message type="notice">
-Box AI API is currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
-
+Endpoints related to metadata extraction are currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
 </Message>
 
 The `GET /2.0/ai_agent_default` endpoint allows you to fetch the default configuration for AI services.

@@ -254,6 +255,6 @@ When you set the `mode` parameter to `extract_structured` the response will be a
 </Tabs>
 
 [prereq]: g://box-ai/prerequisites
-[models]: g://box-ai/supported-models
+[models]: g://box-ai/ai-models
 [ai-agent-config]: g://box-ai/ai-agents/overrides-tutorial
-[override-tutorials]: g://box-ai/ai-agents/overrides-tutorial
+[override-tutorials]: g://box-ai/ai-agents/overrides-tutorial
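
A minimal sketch of calling the endpoint described above. This assumes the `mode` query parameter and bearer-token authentication conventions of the Box API; the token value is a placeholder, and the request is only assembled here, not sent:

```python
# Sketch: assemble a GET /2.0/ai_agent_default request.
# The base URL and `mode` values follow the Box API docs; verify
# against the official API reference before relying on this shape.
BASE_URL = "https://api.box.com/2.0"

def build_default_agent_request(mode: str, access_token: str) -> dict:
    """Build the URL, query params, and headers for fetching the
    default AI agent configuration for a given endpoint mode."""
    allowed = {"ask", "text_gen", "extract", "extract_structured"}
    if mode not in allowed:
        raise ValueError(f"mode must be one of {sorted(allowed)}")
    return {
        "url": f"{BASE_URL}/ai_agent_default",
        "params": {"mode": mode},
        "headers": {"Authorization": f"Bearer {access_token}"},
    }

request = build_default_agent_request("ask", "DEV_TOKEN")  # placeholder token
```

The resulting dict can be passed to an HTTP client, for example `requests.get(request["url"], params=request["params"], headers=request["headers"])`.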

content/guides/box-ai/ai-agents/index.md

Lines changed: 5 additions & 1 deletion
@@ -13,6 +13,10 @@ related_guides:
 
 # AI model overrides
 
+<Message type="notice">
+Endpoints related to metadata extraction are currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
+</Message>
+
 Box updates the default models across the endpoints on a regular basis to stay up to date with the most advanced options.
 
 If your implementation is based on Box AI, a new default model might alter the results in a way that could break or change a downstream process. Switching to a specific version may prevent encountering any issues.

@@ -27,4 +31,4 @@ To see specific use cases, check the [overrides tutorial][overrides].
 [text-gen]: e://post_ai_text_gen#param_ai_agent
 [agent-default]: g://box-ai/ai-agents/get-agent-default-config
 [overrides]: g://box-ai/ai-agents/overrides-tutorial
-[models]: g://box-ai/supported-models
+[models]: g://box-ai/ai-models

content/guides/box-ai/ai-agents/overrides-tutorial.md

Lines changed: 5 additions & 5 deletions
@@ -13,8 +13,7 @@ related_guides:
 # Override AI model configuration
 
 <Message type="notice">
-Box AI API is currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
-
+Endpoints related to metadata extraction are currently a beta feature offered subject to Box’s Main Beta Agreement, and the available capabilities may change. Box AI API is available to all Enterprise Plus customers.
 </Message>
 
 The `ai_agent` configuration allows you to override the default AI model configuration. It is available for the following endpoints:

@@ -127,11 +126,11 @@ The set of parameters available for `ask`, `text_gen`, `extract`, `extract_struc
 
 ### LLM endpoint params
 
-The `llm_endpoint_params` configuration options differ depending on the overall AI model being [Google][google-params] or [OpenAI][openai-params] based.
+The `llm_endpoint_params` configuration options differ depending on whether the overall AI model is [Google][google-params], [OpenAI][openai-params], or [AWS][aws-params] based.
 
 For example, all `llm_endpoint_params` objects accept a `temperature` parameter, but the outcome differs depending on the model.
 
-For Google models, the [`temperature`][google-temp] is used for sampling during response generation, which occurs when `top-P` and `top-K` are applied. Temperature controls the degree of randomness in the token selection.
+For Google and AWS models, [`temperature`][google-temp] is used for sampling during response generation, which occurs when `top-P` and `top-K` are applied. Temperature controls the degree of randomness in token selection.
 
 For OpenAI models, [`temperature`][openai-temp] is the sampling temperature with values between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. When introducing your own configuration, use `temperature` or `top_p` but not both.
 
@@ -354,4 +353,5 @@ Using this model results in a response listing more metadata entries:
 [openai-tokens]: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
 [agent]: e://get_ai_agent_default
 [google-temp]: https://ai.google.dev/gemini-api/docs/models/generative-models#model-parameters
-[openai-temp]: https://community.openai.com/t/temperature-top-p-and-top-k-for-chatbot-responses/295542
+[openai-temp]: https://community.openai.com/t/temperature-top-p-and-top-k-for-chatbot-responses/295542
+[aws-params]: r://ai-llm-endpoint-params-aws
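
The override mechanics this tutorial changes can be sketched as a request payload for the `ask` endpoint. This is a non-authoritative sketch assuming the payload shape described in the Box AI docs; the model name, temperature value, and question are illustrative, and the provider-specific `llm_endpoint_params` discriminator fields are omitted for brevity:

```python
# Sketch: an `ai_agent` override payload for POST /2.0/ai_ask,
# pinning a specific model and setting a conservative temperature.
def build_ask_payload(prompt: str, file_id: str,
                      model: str = "aws__claude_3_5_sonnet",
                      temperature: float = 0.2) -> dict:
    return {
        "mode": "single_item_qa",
        "prompt": prompt,
        "items": [{"id": file_id, "type": "file"}],
        "ai_agent": {
            "type": "ai_agent_ask",
            "basic_text": {
                # Exact API model name, e.g. from the model pages below.
                "model": model,
                # For OpenAI-based models, set `temperature` or `top_p`,
                # not both; provider-specific fields are omitted here.
                "llm_endpoint_params": {"temperature": temperature},
            },
        },
    }

payload = build_ask_payload("What is this file about?", "12345")
```

Pinning `model` this way trades automatic upgrades for reproducible behavior, which is the point of the override.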
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+---
+rank: 14
+related_guides:
+- box-ai/ask-questions
+- box-ai/generate-text
+- box-ai/extract-metadata
+- box-ai/extract-metadata-structured
+- box-ai/ai-agents/get-agent-default-config
+---
+# AWS Claude 3.5 Sonnet
+
+## Overview
+
+The **AWS Claude 3.5 Sonnet** model is designed to enhance language understanding and generation tasks.
+
+## Model details
+
+| Item | Value | Description |
+|-----------|----------|----------|
+|Model name|**AWS Claude 3.5 Sonnet**| The name of the model. |
+|API model name|`aws__claude_3_5_sonnet`| The name of the model that is used in the [Box AI API for model overrides][overrides]. The user must provide this exact name for the API to work. |
+|Hosting layer| **Amazon Web Services (AWS)** | The trusted organization that securely hosts the LLM. |
+|Model provider|**Anthropic**| The organization that provides this model. |
+|Release date| **June 20th, 2024** | The release date for the model.|
+|Knowledge cutoff date| **April 2024**| The date after which the model does not get any information updates. |
+|Input context window |**200k tokens**| The number of tokens supported by the input context window.|
+|Maximum output tokens | **4k tokens** |The number of tokens that can be generated by the model in a single request.|
+|Empirical throughput| **Not specified**| The number of tokens the model can generate per second.|
+|Open source | **No** | Specifies if the model's code is available for public use. |
+
+## Additional documentation
+
+For additional information, see the [official AWS Claude 3.5 Sonnet documentation][aws-claude].
+
+[aws-claude]: https://aws.amazon.com/bedrock/claude/
+[overrides]: g://box-ai/ai-agents/overrides-tutorial
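
The context-window and output limits in the table above can be checked against a prompt before sending it. This is a rough sketch only: the 4-characters-per-token ratio is a common rule of thumb, not Claude's actual tokenizer, so treat the result as an estimate:

```python
# Rough pre-flight check against the limits listed for Claude 3.5 Sonnet:
# 200k input tokens, 4k output tokens. Heuristic, not a real tokenizer.
INPUT_TOKEN_LIMIT = 200_000
OUTPUT_TOKEN_LIMIT = 4_000
CHARS_PER_TOKEN = 4  # common rule-of-thumb ratio for English text

def fits_context_window(text: str) -> bool:
    """Estimate whether `text` fits in the model's input window."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens <= INPUT_TOKEN_LIMIT
```

A short prompt passes easily, while roughly a million characters (about 250k estimated tokens) exceeds the 200k window; for anything near the limit, use the provider's real token-counting tools instead of this heuristic.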
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@
+---
+rank: 15
+related_guides:
+- box-ai/ask-questions
+- box-ai/generate-text
+- box-ai/extract-metadata
+- box-ai/extract-metadata-structured
+- box-ai/ai-agents/get-agent-default-config
+---
+# AWS Claude 3 Haiku
+
+## Overview
+
+The **AWS Claude 3 Haiku** model is tailored for various language tasks, including creative writing and conversational AI.
+
+## Model details
+
+| Item | Value | Description |
+|-----------|----------|----------|
+|Model name|**AWS Claude 3 Haiku**| The name of the model. |
+|API model name|`aws__claude_3_haiku`| The name of the model that is used in the [Box AI API for model overrides][overrides]. The user must provide this exact name for the API to work. |
+|Hosting layer| **Amazon Web Services (AWS)** | The trusted organization that securely hosts the LLM. |
+|Model provider|**Anthropic**| The organization that provides this model. |
+|Release date| **March 13th, 2024** | The release date for the model.|
+|Knowledge cutoff date| **August 2023**| The date after which the model does not get any information updates. |
+|Input context window |**200k tokens**| The number of tokens supported by the input context window.|
+|Maximum output tokens | **4k tokens** |The number of tokens that can be generated by the model in a single request.|
+|Empirical throughput| **117** | The number of tokens the model can generate per second.|
+|Open source | **No** | Specifies if the model's code is available for public use. |
+
+## Additional documentation
+
+For additional information, see the [official AWS Claude 3 Haiku documentation][aws-claude].
+
+[aws-claude]: https://aws.amazon.com/bedrock/claude/
+[overrides]: g://box-ai/ai-agents/overrides-tutorial
Lines changed: 34 additions & 0 deletions
@@ -0,0 +1,34 @@
+---
+rank: 16
+related_guides:
+- box-ai/ask-questions
+- box-ai/generate-text
+- box-ai/extract-metadata
+- box-ai/extract-metadata-structured
+- box-ai/ai-agents/get-agent-default-config
+---
+# AWS Claude 3 Sonnet
+
+The **AWS Claude 3 Sonnet** model is designed for advanced language tasks, focusing on comprehension and context handling.
+
+## Model details
+
+| Item | Value | Description |
+|-----------|----------|----------|
+|Model name|**AWS Claude 3 Sonnet**| The name of the model. |
+|API model name|`aws__claude_3_sonnet`| The name of the model that is used in the [Box AI API for model overrides][overrides]. The user must provide this exact name for the API to work. |
+|Hosting layer| **Amazon Web Services (AWS)** | The trusted organization that securely hosts the LLM. |
+|Model provider|**Anthropic**| The organization that provides this model. |
+|Release date| **March 4th, 2024** | The release date for the model.|
+|Knowledge cutoff date| **August 2023**| The date after which the model does not get any information updates. |
+|Input context window |**200k tokens**| The number of tokens supported by the input context window.|
+|Maximum output tokens | **4k tokens** |The number of tokens that can be generated by the model in a single request.|
+|Empirical throughput| **49.8** | The number of tokens the model can generate per second.|
+|Open source | **No** | Specifies if the model's code is available for public use.|
+
+## Additional documentation
+
+For additional information, see the [official AWS Claude 3 Sonnet documentation][aws-claude].
+
+[aws-claude]: https://aws.amazon.com/bedrock/claude/
+[overrides]: g://box-ai/ai-agents/overrides-tutorial
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+---
+rank: 17
+related_guides:
+- box-ai/ask-questions
+- box-ai/generate-text
+- box-ai/extract-metadata
+- box-ai/extract-metadata-structured
+- box-ai/ai-agents/get-agent-default-config
+---
+# AWS Titan Text Lite
+
+The **AWS Titan Text Lite** model is designed for advanced language processing, capable of handling extensive contexts, making it suitable for complex tasks,
+although the model itself is lightweight.
+
+## Model details
+
+| Item | Value | Description |
+|-----------|----------|----------|
+|Model name|**AWS Titan Text Lite**| The name of the model. |
+|API model name|`aws__titan_text_lite`| The name of the model that is used in the [Box AI API for model overrides][overrides]. The user must provide this exact name for the API to work. |
+|Hosting layer| **Amazon Web Services (AWS)** | The trusted organization that securely hosts the LLM. |
+|Model provider|**Amazon**| The organization that provides this model. |
+|Release date| **September 2024** | The release date for the model.|
+|Knowledge cutoff date| **Not provided**| The date after which the model does not get any information updates. |
+|Input context window |**128k tokens**| The number of tokens supported by the input context window.|
+|Maximum output tokens | **4k tokens** |The number of tokens that can be generated by the model in a single request.|
+|Empirical throughput| **Not specified** | The number of tokens the model can generate per second.|
+|Open source | **No** | Specifies if the model's code is available for public use.|
+
+## Additional documentation
+
+For additional information, see the [official AWS Titan documentation][aws-titan].
+
+[aws-titan]: https://aws.amazon.com/bedrock/titan/
+[overrides]: g://box-ai/ai-agents/overrides-tutorial
