Commit 9a61c41

feat: openapi spec sync (#458)

* updated client events
* updated server event
* updated rate limit
* updated session configuration
* transcription session configuration
* updates to realtime types
* updated Item
* updated realtime types
* update examples/realtime with GA API
* checkpoint: responses types updates
* checkpoint for updated types
* checkpoint for updates to responses types
* updates for CreateResponse
* add responses apis
* list input items
* add get_input_token_counts for responses
* implement ItemResource
* types/responses dir
* response streaming events
* fix compilation
* compiling example/responses
* fix types
* fix examples/responses-function-call
* fix examples/responses-stream
* update it to RealtimeResponse to distinguish from Response
* avoid name conflicts
* update realtime types
* update realtime example
* update names
* updated realtime spec
* RealtimeConversationItem
* updates for the spec
* update types to match spec
* types updated
* update realtime types
* match realtime client event to spec
* update examples/realtime
* match realtime server event type names with spec
* match responses stream event names with spec
* reusable type
* updated readme
1 parent: 6becdfc

File tree: 27 files changed (+5129 / -2922 lines)

async-openai/README.md

Lines changed: 1 addition & 4 deletions

```diff
@@ -35,7 +35,7 @@
 - [x] Models
 - [x] Moderations
 - [x] Organizations | Administration (partially implemented)
-- [x] Realtime (Beta) (partially implemented)
+- [x] Realtime GA (partially implemented)
 - [x] Responses (partially implemented)
 - [x] Uploads
 - [x] Videos
@@ -65,7 +65,6 @@ $Env:OPENAI_API_KEY='sk-...'
 ## Realtime API
 
 Only types for Realtime API are implemented, and can be enabled with feature flag `realtime`.
-These types were written before OpenAI released official specs.
 
 ## Image Generation Example
 
@@ -179,8 +178,6 @@ To maintain quality of the project, a minimum of the following is a must for cod
 This project adheres to [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct)
 
 ## Complimentary Crates
-
-- [openai-func-enums](https://github.com/frankfralick/openai-func-enums) provides procedural macros that make it easier to use this library with OpenAI API's tool calling feature. It also provides derive macros you can add to existing [clap](https://github.com/clap-rs/clap) application subcommands for natural language use of command line tools. It also supports openai's [parallel tool calls](https://platform.openai.com/docs/guides/function-calling/parallel-function-calling) and allows you to choose between running multiple tool calls concurrently or own their own OS threads.
 - [async-openai-wasm](https://github.com/ifsheldon/async-openai-wasm) provides WASM support.
 
 ## License
```

async-openai/src/responses.rs

Lines changed: 71 additions & 5 deletions

```diff
@@ -1,13 +1,15 @@
+use serde::Serialize;
+
 use crate::{
     config::Config,
     error::OpenAIError,
-    types::responses::{CreateResponse, Response, ResponseStream},
+    types::responses::{
+        CreateResponse, DeleteResponse, Response, ResponseItemList, ResponseStream,
+        TokenCountsBody, TokenCountsResource,
+    },
     Client,
 };
 
-/// Given text input or a list of context items, the model will generate a response.
-///
-/// Related guide: [Responses](https://platform.openai.com/docs/api-reference/responses)
 pub struct Responses<'c, C: Config> {
     client: &'c Client<C>,
 }
@@ -18,7 +20,15 @@ impl<'c, C: Config> Responses<'c, C> {
         Self { client }
     }
 
-    /// Creates a model response for the given input.
+    /// Creates a model response. Provide [text](https://platform.openai.com/docs/guides/text) or
+    /// [image](https://platform.openai.com/docs/guides/images) inputs to generate
+    /// [text](https://platform.openai.com/docs/guides/text) or
+    /// [JSON](https://platform.openai.com/docs/guides/structured-outputs) outputs. Have the model call
+    /// your own [custom code](https://platform.openai.com/docs/guides/function-calling) or use
+    /// built-in [tools](https://platform.openai.com/docs/guides/tools) like
+    /// [web search](https://platform.openai.com/docs/guides/tools-web-search)
+    /// or [file search](https://platform.openai.com/docs/guides/tools-file-search) to use your own data
+    /// as input for the model's response.
     #[crate::byot(
         T0 = serde::Serialize,
         R = serde::de::DeserializeOwned
@@ -52,4 +62,60 @@ impl<'c, C: Config> Responses<'c, C> {
         }
         Ok(self.client.post_stream("/responses", request).await)
     }
+
+    /// Retrieves a model response with the given ID.
+    #[crate::byot(T0 = std::fmt::Display, T1 = serde::Serialize, R = serde::de::DeserializeOwned)]
+    pub async fn retrieve<Q>(&self, response_id: &str, query: &Q) -> Result<Response, OpenAIError>
+    where
+        Q: Serialize + ?Sized,
+    {
+        self.client
+            .get_with_query(&format!("/responses/{}", response_id), &query)
+            .await
+    }
+
+    /// Deletes a model response with the given ID.
+    #[crate::byot(T0 = std::fmt::Display, R = serde::de::DeserializeOwned)]
+    pub async fn delete(&self, response_id: &str) -> Result<DeleteResponse, OpenAIError> {
+        self.client
+            .delete(&format!("/responses/{}", response_id))
+            .await
+    }
+
+    /// Cancels a model response with the given ID. Only responses created with the
+    /// `background` parameter set to `true` can be cancelled.
+    /// [Learn more](https://platform.openai.com/docs/guides/background).
+    #[crate::byot(T0 = std::fmt::Display, R = serde::de::DeserializeOwned)]
+    pub async fn cancel(&self, response_id: &str) -> Result<Response, OpenAIError> {
+        self.client
+            .post(
+                &format!("/responses/{}/cancel", response_id),
+                serde_json::json!({}),
+            )
+            .await
+    }
+
+    /// Returns a list of input items for a given response.
+    #[crate::byot(T0 = std::fmt::Display, T1 = serde::Serialize, R = serde::de::DeserializeOwned)]
+    pub async fn list_input_items<Q>(
+        &self,
+        response_id: &str,
+        query: &Q,
+    ) -> Result<ResponseItemList, OpenAIError>
+    where
+        Q: Serialize + ?Sized,
+    {
+        self.client
+            .get_with_query(&format!("/responses/{}/input_items", response_id), &query)
+            .await
+    }
+
+    /// Get input token counts
+    #[crate::byot(T0 = serde::Serialize, R = serde::de::DeserializeOwned)]
+    pub async fn get_input_token_counts(
+        &self,
+        request: TokenCountsBody,
+    ) -> Result<TokenCountsResource, OpenAIError> {
+        self.client.post("/responses/input_tokens", request).await
+    }
 }
```
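For context, a minimal sketch (not part of this commit) of how the endpoints added above might be called. It assumes the crate's usual `Client::new()` setup, the `client.responses()` accessor, a placeholder response ID, and that the returned types derive `Debug` as elsewhere in the crate:

```rust
use async_openai::{error::OpenAIError, Client};

#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
    // Reads OPENAI_API_KEY from the environment by default.
    let client = Client::new();
    let responses = client.responses();

    // "resp_123" is a placeholder; a real ID comes from a prior create() call.
    // An empty parameter list satisfies the generic `query` argument.
    let query: [(&str, &str); 0] = [];
    let response = responses.retrieve("resp_123", &query).await?;
    println!("{response:#?}");

    // List the input items that produced the response, then delete it.
    let items = responses.list_input_items("resp_123", &[("limit", "10")]).await?;
    println!("{items:#?}");

    let deleted = responses.delete("resp_123").await?;
    println!("{deleted:#?}");

    Ok(())
}
```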

async-openai/src/types/chat.rs

Lines changed: 6 additions & 1 deletion

```diff
@@ -504,9 +504,14 @@ pub struct ResponseFormatJsonSchema {
     /// The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
     pub name: String,
     /// The schema for the response format, described as a JSON Schema object.
+    /// Learn how to build JSON schemas [here](https://json-schema.org/).
     #[serde(skip_serializing_if = "Option::is_none")]
     pub schema: Option<serde_json::Value>,
-    /// Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`. To learn more, read the [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
+    /// Whether to enable strict schema adherence when generating the output.
+    /// If set to true, the model will always follow the exact schema defined
+    /// in the `schema` field. Only a subset of JSON Schema is supported when
+    /// `strict` is `true`. To learn more, read the [Structured Outputs
+    /// guide](https://platform.openai.com/docs/guides/structured-outputs).
     #[serde(skip_serializing_if = "Option::is_none")]
     pub strict: Option<bool>,
 }
```

async-openai/src/types/impls.rs

Lines changed: 6 additions & 30 deletions

```diff
@@ -14,7 +14,7 @@ use crate::{
 use bytes::Bytes;
 
 use super::{
-    responses::{CodeInterpreterContainer, Input, InputContent, Role as ResponsesRole},
+    responses::{EasyInputContent, Role as ResponsesRole},
     AddUploadPartRequest, AudioInput, AudioResponseFormat, ChatCompletionFunctionCall,
     ChatCompletionFunctions, ChatCompletionNamedToolChoice, ChatCompletionRequestAssistantMessage,
     ChatCompletionRequestAssistantMessageContent, ChatCompletionRequestDeveloperMessage,
@@ -1047,50 +1047,26 @@ impl AsyncTryFrom<CreateVideoRequest> for reqwest::multipart::Form {
 
 // end: types to multipart form
 
-impl Default for Input {
+impl Default for EasyInputContent {
     fn default() -> Self {
         Self::Text("".to_string())
     }
 }
 
-impl Default for InputContent {
-    fn default() -> Self {
-        Self::TextInput("".to_string())
-    }
-}
-
-impl From<String> for Input {
-    fn from(value: String) -> Self {
-        Input::Text(value)
-    }
-}
-
-impl From<&str> for Input {
-    fn from(value: &str) -> Self {
-        Input::Text(value.to_owned())
-    }
-}
-
 impl Default for ResponsesRole {
     fn default() -> Self {
         Self::User
     }
 }
 
-impl From<String> for InputContent {
+impl From<String> for EasyInputContent {
     fn from(value: String) -> Self {
-        Self::TextInput(value)
+        Self::Text(value)
     }
 }
 
-impl From<&str> for InputContent {
+impl From<&str> for EasyInputContent {
     fn from(value: &str) -> Self {
-        Self::TextInput(value.to_owned())
-    }
-}
-
-impl Default for CodeInterpreterContainer {
-    fn default() -> Self {
-        CodeInterpreterContainer::Id("".to_string())
+        Self::Text(value.to_owned())
     }
 }
```
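A quick sketch (not from this commit) of what the retargeted conversions above provide, assuming `EasyInputContent` is publicly exported under `types::responses` as the diff suggests:

```rust
use async_openai::types::responses::EasyInputContent;

fn main() {
    // `From<&str>` and `From<String>` both produce the `Text` variant.
    let content: EasyInputContent = "Hello, world".into();
    assert!(matches!(content, EasyInputContent::Text(_)));

    // `Default` is an empty text value.
    assert!(matches!(EasyInputContent::default(), EasyInputContent::Text(s) if s.is_empty()));
}
```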

async-openai/src/types/mcp.rs

Lines changed: 137 additions & 0 deletions (new file)

```rust
use derive_builder::Builder;
use serde::{Deserialize, Serialize};

use crate::error::OpenAIError;

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
#[serde(rename_all = "snake_case")]
pub enum McpToolConnectorId {
    ConnectorDropbox,
    ConnectorGmail,
    ConnectorGooglecalendar,
    ConnectorGoogledrive,
    ConnectorMicrosoftteams,
    ConnectorOutlookcalendar,
    ConnectorOutlookemail,
    ConnectorSharepoint,
}

#[derive(Debug, Serialize, Deserialize, Clone, Builder, PartialEq, Default)]
#[builder(
    name = "MCPToolArgs",
    pattern = "mutable",
    setter(into, strip_option),
    default
)]
#[builder(build_fn(error = "OpenAIError"))]
pub struct MCPTool {
    /// A label for this MCP server, used to identify it in tool calls.
    pub server_label: String,

    /// List of allowed tool names or a filter object.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub allowed_tools: Option<MCPToolAllowedTools>,

    /// An OAuth access token that can be used with a remote MCP server, either with a custom MCP
    /// server URL or a service connector. Your application must handle the OAuth authorization
    /// flow and provide the token here.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub authorization: Option<String>,

    /// Identifier for service connectors, like those available in ChatGPT. One of `server_url` or
    /// `connector_id` must be provided. Learn more about service connectors [here](https://platform.openai.com/docs/guides/tools-remote-mcp#connectors).
    ///
    /// Currently supported `connector_id` values are:
    /// - Dropbox: `connector_dropbox`
    /// - Gmail: `connector_gmail`
    /// - Google Calendar: `connector_googlecalendar`
    /// - Google Drive: `connector_googledrive`
    /// - Microsoft Teams: `connector_microsoftteams`
    /// - Outlook Calendar: `connector_outlookcalendar`
    /// - Outlook Email: `connector_outlookemail`
    /// - SharePoint: `connector_sharepoint`
    #[serde(skip_serializing_if = "Option::is_none")]
    pub connector_id: Option<McpToolConnectorId>,

    /// Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub headers: Option<serde_json::Value>,

    /// Specify which of the MCP server's tools require approval.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub require_approval: Option<MCPToolRequireApproval>,

    /// Optional description of the MCP server, used to provide more context.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub server_description: Option<String>,

    /// The URL for the MCP server. One of `server_url` or `connector_id` must be provided.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub server_url: Option<String>,
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(untagged)]
pub enum MCPToolAllowedTools {
    /// A string array of allowed tool names
    List(Vec<String>),
    /// A filter object to specify which tools are allowed.
    Filter(MCPToolFilter),
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct MCPToolFilter {
    /// Indicates whether or not a tool modifies data or is read-only.
    /// If an MCP server is annotated with [readOnlyHint](https://modelcontextprotocol.io/specification/2025-06-18/schema#toolannotations-readonlyhint),
    /// it will match this filter.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub read_only: Option<bool>,
    /// List of allowed tool names.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub tool_names: Option<Vec<String>>,
}

/// Approval policy or filter for MCP tools.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(untagged)]
pub enum MCPToolRequireApproval {
    /// Specify which of the MCP server's tools require approval. Can be
    /// `always`, `never`, or a filter object associated with tools
    /// that require approval.
    Filter(MCPToolApprovalFilter),
    /// Specify a single approval policy for all tools. One of `always` or
    /// `never`. When set to `always`, all tools will require approval. When
    /// set to `never`, all tools will not require approval.
    ApprovalSetting(MCPToolApprovalSetting),
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
#[serde(rename_all = "lowercase")]
pub enum MCPToolApprovalSetting {
    Always,
    Never,
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct MCPToolApprovalFilter {
    /// A list of tools that always require approval.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub always: Option<MCPToolFilter>,
    /// A list of tools that never require approval.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub never: Option<MCPToolFilter>,
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq)]
pub struct MCPListToolsTool {
    /// The JSON schema describing the tool's input.
    pub input_schema: serde_json::Value,
    /// The name of the tool.
    pub name: String,
    /// Additional annotations about the tool.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub annotations: Option<serde_json::Value>,
    /// The description of the tool.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<String>,
}
```
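A short sketch (not part of this commit) of building an `MCPTool` via the generated `MCPToolArgs` builder, following the crate's usual `*Args::default()...build()?` pattern; the server label, URL, and tool name below are illustrative placeholders:

```rust
use async_openai::types::{
    MCPTool, MCPToolAllowedTools, MCPToolApprovalSetting, MCPToolArgs, MCPToolRequireApproval,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder label, URL, and tool name; any remote MCP server would do.
    let tool: MCPTool = MCPToolArgs::default()
        .server_label("my_docs_server")
        .server_url("https://example.com/mcp")
        .allowed_tools(MCPToolAllowedTools::List(vec!["search_docs".to_string()]))
        .require_approval(MCPToolRequireApproval::ApprovalSetting(
            MCPToolApprovalSetting::Never,
        ))
        .build()?;

    // The type serializes to the JSON shape expected by the Responses API.
    println!("{}", serde_json::to_string_pretty(&tool)?);
    Ok(())
}
```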

async-openai/src/types/mod.rs

Lines changed: 2 additions & 0 deletions

```diff
@@ -14,6 +14,7 @@ mod file;
 mod fine_tuning;
 mod image;
 mod invites;
+mod mcp;
 mod message;
 mod model;
 mod moderation;
@@ -46,6 +47,7 @@ pub use file::*;
 pub use fine_tuning::*;
 pub use image::*;
 pub use invites::*;
+pub use mcp::*;
 pub use message::*;
 pub use model::*;
 pub use moderation::*;
```
