
Commit 43cb677

Merge branch 'main' into update-snippets-tester
2 parents 1772407 + 51a0243 commit 43cb677

3 files changed: 14 additions & 1 deletion

cohere-openapi.yaml

Lines changed: 9 additions & 0 deletions
@@ -22943,6 +22943,15 @@ components:
       Dictates the approach taken to generating citations as part of the RAG flow by allowing the user to specify whether they want `"accurate"` results, `"fast"` results or no results.
 
       **Note**: `command-r7b-12-2024` only supports `"fast"` and `"off"` modes. Its default is `"fast"`.
+    TruncationStrategy:
+      description: Describes the truncation strategy for when the prompt exceeds the
+        context length. Defaults to 'none'
+      oneOf: []
+      discriminator:
+        propertyName: type
+        mapping:
+          auto: "#/components/schemas/TruncationStrategyAutoPreserveOrder"
+          none: "#/components/schemas/TruncationStrategyNone"
     ResponseFormatTypeV2:
       x-fern-audiences:
         - public
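
The added TruncationStrategy schema is a discriminated union: the "type" property selects between TruncationStrategyAutoPreserveOrder and TruncationStrategyNone, with 'none' as the stated default. The sketch below shows how a client might resolve that discriminator into a concrete variant; the dataclasses, their fields, and the parse_truncation_strategy helper are illustrative assumptions for this note, not part of the spec or of any Cohere SDK.

from dataclasses import dataclass
from typing import Dict, Type, Union


@dataclass
class TruncationStrategyAutoPreserveOrder:
    type: str = "auto"


@dataclass
class TruncationStrategyNone:
    type: str = "none"


# Union of the two concrete variants named in the schema's discriminator mapping.
TruncationStrategy = Union[TruncationStrategyAutoPreserveOrder, TruncationStrategyNone]

# Mirrors the discriminator: the value of "type" picks the concrete schema.
_VARIANTS: Dict[str, Type] = {
    "auto": TruncationStrategyAutoPreserveOrder,
    "none": TruncationStrategyNone,
}


def parse_truncation_strategy(payload: dict) -> TruncationStrategy:
    # The schema description says the default is 'none', so a missing
    # "type" field falls back to the no-truncation variant.
    kind = payload.get("type", "none")
    try:
        variant = _VARIANTS[kind]
    except KeyError:
        raise ValueError(f"unknown truncation strategy type: {kind!r}")
    return variant(**payload)


print(parse_truncation_strategy({"type": "auto"}))  # TruncationStrategyAutoPreserveOrder(type='auto')
print(parse_truncation_strategy({}))                # TruncationStrategyNone(type='none')

This is the usual OpenAPI discriminator pattern: the mapping keys ("auto", "none") are possible values of the propertyName field, and each key points at the schema describing that variant.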

fern/assets/input.css

Lines changed: 4 additions & 0 deletions
@@ -38,6 +38,10 @@ h2 {
   font-weight: 400 !important;
 }
 
+.fern-announcement .fern-mdx-link, .fern-announcement svg.external-link-icon {
+  color: #fff !important;
+}
+
 .font-semibold {
   font-weight: 500 !important;
 }

fern/pages/changelog/2025-03-04-aya-vision-is-here.mdx

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ description: >-
   Release announcement for the new multimodal Aya Vision model
 ---
 
-Today, Cohere For AI, Cohere’s research arm, is proud to announce [Aya Vision](link to blog post), a state-of-the-art multimodal large language model excelling across multiple languages and modalities. Aya Vision outperforms the leading open-weight models in critical benchmarks for language, text, and image capabilities.
+Today, Cohere For AI, Cohere’s research arm, is proud to announce [Aya Vision](https://cohere.com/blog/aya-vision), a state-of-the-art multimodal large language model excelling across multiple languages and modalities. Aya Vision outperforms the leading open-weight models in critical benchmarks for language, text, and image capabilities.
 
 built as a foundation for multilingual and multimodal communication, this groundbreaking AI model supports tasks such as image captioning, visual question answering, text generation, and translations from both texts and images into coherent text.
