2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.11.0"
".": "0.11.1"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 60
- openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-02200a58ed631064b6419711da99fefd6e97bdbbeb577a80a1a6e0c8dbcb18f5.yml
+ openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b5b0e2c794b012919701c3fd43286af10fa25d33ceb8a881bec2636028f446e0.yml
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

+ ## 0.11.1 (2025-01-09)
+
+ Full Changelog: [v0.11.0...v0.11.1](https://github.com/openai/openai-java/compare/v0.11.0...v0.11.1)
+
+ ### Chores
+
+ * **internal:** add some missing newlines between methods ([#100](https://github.com/openai/openai-java/issues/100)) ([afc2998](https://github.com/openai/openai-java/commit/afc2998ac124a26fe3ec92207f5ff4c9614ff673))
+ * **internal:** spec update ([#97](https://github.com/openai/openai-java/issues/97)) ([0cff792](https://github.com/openai/openai-java/commit/0cff79271c63be46f5502a138ce1ad67a146724f))
+
+
+ ### Documentation
+
+ * update some builder method javadocs ([#99](https://github.com/openai/openai-java/issues/99)) ([192965a](https://github.com/openai/openai-java/commit/192965abf73b9868d808c407bfc9fb73a507def7))
+
## 0.11.0 (2025-01-08)

Full Changelog: [v0.10.0...v0.11.0](https://github.com/openai/openai-java/compare/v0.10.0...v0.11.0)
6 changes: 3 additions & 3 deletions README.md
@@ -9,7 +9,7 @@

<!-- x-release-please-start-version -->

- [![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/0.11.0)
+ [![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/0.11.1)

<!-- x-release-please-end -->

@@ -32,7 +32,7 @@ The REST API documentation can be found on [platform.openai.com](https://platfo
<!-- x-release-please-start-version -->

```kotlin
implementation("com.openai:openai-java:0.11.0")
implementation("com.openai:openai-java:0.11.1")
```

#### Maven
@@ -41,7 +41,7 @@ implementation("com.openai:openai-java:0.11.0")
<dependency>
<groupId>com.openai</groupId>
<artifactId>openai-java</artifactId>
- <version>0.11.0</version>
+ <version>0.11.1</version>
</dependency>
```

2 changes: 1 addition & 1 deletion build.gradle.kts
@@ -4,7 +4,7 @@ plugins {

allprojects {
group = "com.openai"
version = "0.11.0" // x-release-please-version
version = "0.11.1" // x-release-please-version
}


@@ -41,6 +41,7 @@ private constructor(
*/
fun fileCitationAnnotation(): Optional<FileCitationAnnotation> =
Optional.ofNullable(fileCitationAnnotation)
+
/**
* A URL for the file that's generated when the assistant used the `code_interpreter` tool to
* generate a file.
@@ -58,6 +59,7 @@ private constructor(
*/
fun asFileCitationAnnotation(): FileCitationAnnotation =
fileCitationAnnotation.getOrThrow("fileCitationAnnotation")
+
/**
* A URL for the file that's generated when the assistant used the `code_interpreter` tool to
* generate a file.
@@ -41,6 +41,7 @@ private constructor(
*/
fun fileCitationDeltaAnnotation(): Optional<FileCitationDeltaAnnotation> =
Optional.ofNullable(fileCitationDeltaAnnotation)
+
/**
* A URL for the file that's generated when the assistant used the `code_interpreter` tool to
* generate a file.
@@ -59,6 +60,7 @@ private constructor(
*/
fun asFileCitationDeltaAnnotation(): FileCitationDeltaAnnotation =
fileCitationDeltaAnnotation.getOrThrow("fileCitationDeltaAnnotation")
+
/**
* A URL for the file that's generated when the assistant used the `code_interpreter` tool to
* generate a file.
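The accessor pairs touched by these two hunks follow the SDK's union-type pattern: `fileCitationDeltaAnnotation()` returns an `Optional` that is empty when the union holds a different variant, while `asFileCitationDeltaAnnotation()` unwraps it via `getOrThrow` and fails if the variant does not match. A minimal sketch of how calling code might use the two styles, assuming `AnnotationDelta` as the name of the enclosing union class (the file names are cut off in this view):

```kotlin
// Illustrative only: the accessor signatures come from the diff above;
// `AnnotationDelta` is an assumed name for the enclosing union class.
fun printCitation(annotation: AnnotationDelta) {
    // Safe style: the Optional is empty when the union holds another variant.
    annotation.fileCitationDeltaAnnotation().ifPresent { citation ->
        println("file citation delta: $citation")
    }

    // Assertive style: throws if this variant is not present, so reserve it
    // for code paths where the variant has already been checked.
    // val citation = annotation.asFileCitationDeltaAnnotation()
}
```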
60 changes: 60 additions & 0 deletions openai-java-core/src/main/kotlin/com/openai/models/Assistant.kt
@@ -500,14 +500,74 @@ private constructor(
fun responseFormat(behavior: AssistantResponseFormatOption.Behavior) =
responseFormat(AssistantResponseFormatOption.ofBehavior(behavior))

+ /**
+ * Specifies the format that the model must output. Compatible with
+ * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o), [GPT-4
+ * Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5
+ * Turbo models since `gpt-3.5-turbo-1106`.
+ *
+ * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs
+ * which ensures the model will match your supplied JSON schema. Learn more in the
+ * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ *
+ * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the
+ * model generates is valid JSON.
+ *
+ * **Important:** when using JSON mode, you **must** also instruct the model to produce JSON
+ * yourself via a system or user message. Without this, the model may generate an unending
+ * stream of whitespace until the generation reaches the token limit, resulting in a
+ * long-running and seemingly "stuck" request. Also note that the message content may be
+ * partially cut off if `finish_reason="length"`, which indicates the generation exceeded
+ * `max_tokens` or the conversation exceeded the max context length.
+ */
fun responseFormat(responseFormatText: ResponseFormatText) =
responseFormat(AssistantResponseFormatOption.ofResponseFormatText(responseFormatText))

+ /**
+ * Specifies the format that the model must output. Compatible with
+ * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o), [GPT-4
+ * Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5
+ * Turbo models since `gpt-3.5-turbo-1106`.
+ *
+ * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs
+ * which ensures the model will match your supplied JSON schema. Learn more in the
+ * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ *
+ * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the
+ * model generates is valid JSON.
+ *
+ * **Important:** when using JSON mode, you **must** also instruct the model to produce JSON
+ * yourself via a system or user message. Without this, the model may generate an unending
+ * stream of whitespace until the generation reaches the token limit, resulting in a
+ * long-running and seemingly "stuck" request. Also note that the message content may be
+ * partially cut off if `finish_reason="length"`, which indicates the generation exceeded
+ * `max_tokens` or the conversation exceeded the max context length.
+ */
fun responseFormat(responseFormatJsonObject: ResponseFormatJsonObject) =
responseFormat(
AssistantResponseFormatOption.ofResponseFormatJsonObject(responseFormatJsonObject)
)

+ /**
+ * Specifies the format that the model must output. Compatible with
+ * [GPT-4o](https://platform.openai.com/docs/models#gpt-4o), [GPT-4
+ * Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5
+ * Turbo models since `gpt-3.5-turbo-1106`.
+ *
+ * Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs
+ * which ensures the model will match your supplied JSON schema. Learn more in the
+ * [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+ *
+ * Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the
+ * model generates is valid JSON.
+ *
+ * **Important:** when using JSON mode, you **must** also instruct the model to produce JSON
+ * yourself via a system or user message. Without this, the model may generate an unending
+ * stream of whitespace until the generation reaches the token limit, resulting in a
+ * long-running and seemingly "stuck" request. Also note that the message content may be
+ * partially cut off if `finish_reason="length"`, which indicates the generation exceeded
+ * `max_tokens` or the conversation exceeded the max context length.
+ */
fun responseFormat(responseFormatJsonSchema: ResponseFormatJsonSchema) =
responseFormat(
AssistantResponseFormatOption.ofResponseFormatJsonSchema(responseFormatJsonSchema)
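Together these overloads let callers pass a concrete response-format object directly instead of wrapping it in `AssistantResponseFormatOption` first. A minimal sketch of selecting JSON mode through the new `ResponseFormatJsonObject` overload, assuming the SDK's usual builder pattern for the surrounding object (only the `responseFormat(...)` overloads come from this diff; the other builder calls are illustrative):

```kotlin
// Illustrative only: the responseFormat overloads appear in the diff above;
// Assistant.builder() and ResponseFormatJsonObject.builder() construction
// details are assumptions about the surrounding SDK.
val assistant = Assistant.builder()
    // ... required fields elided ...
    // JSON mode: per the javadoc, also instruct the model to produce JSON
    // via a system or user message, or generation may emit whitespace until
    // it hits the token limit.
    .responseFormat(ResponseFormatJsonObject.builder().build())
    .build()
```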