Merged
7 changes: 7 additions & 0 deletions .changeset/tame-deers-shake.md
@@ -0,0 +1,7 @@
+---
+"@openai/agents-core": patch
+"@openai/agents-openai": patch
+"@openai/agents-realtime": patch
+---
+
+Fix typos across repo
4 changes: 2 additions & 2 deletions docs/src/content/docs/extensions/twilio.mdx
@@ -12,7 +12,7 @@ raw audio from a phone call to a WebSocket server. This set up can be used to co
[voice agents](/openai-agents-js/guides/voice-agents) to Twilio. You can use the default Realtime Session transport
in `websocket` mode to connect the events coming from Twilio to your Realtime Session. However,
this requires you to set the right audio format and adjust your own interruption timing as phone
-calls will naturally introduce more latency than a web-based converstaion.
+calls will naturally introduce more latency than a web-based conversation.

To improve the set up experience, we've created a dedicated transport layer that handles the
connection to Twilio for you, including handling interruptions and audio forwarding for you.
@@ -72,7 +72,7 @@ for more information on how to use the `RealtimeSession` with voice agents.

In order to receive all the necessary events and audio from Twilio, you should create your
`TwilioRealtimeTransportLayer` instance as soon as you have a reference to the WebSocket
-connetion and immediately call `session.connect()` afterwards.
+connection and immediately call `session.connect()` afterwards.

2. **Access the raw Twilio events.**

2 changes: 1 addition & 1 deletion docs/src/content/docs/guides/mcp.mdx
@@ -59,7 +59,7 @@ To stream incremental MCP results, pass `stream: true` when you run the `Agent`:

For sensitive operations you can require human approval of individual tool calls. Pass either `requireApproval: 'always'` or a fine‑grained object mapping tool names to `'never'`/`'always'`.

-If you can programatically determine whether a tool call is safe, you can use the [`onApproval` callback](https://github.com/openai/openai-agents-js/blob/main/examples/mcp/hosted-mcp-on-approval.ts) to approve or reject the tool call. If you require human approval, you can use the same [human-in-the-loop (HITL) approach](/openai-agents-js/guides/human-in-the-loop/) using `interruptions` as for local function tools.
+If you can programmatically determine whether a tool call is safe, you can use the [`onApproval` callback](https://github.com/openai/openai-agents-js/blob/main/examples/mcp/hosted-mcp-on-approval.ts) to approve or reject the tool call. If you require human approval, you can use the same [human-in-the-loop (HITL) approach](/openai-agents-js/guides/human-in-the-loop/) using `interruptions` as for local function tools.

<Code
lang="typescript"
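The approval gate described in this doc page boils down to a predicate over tool calls. A minimal, self-contained sketch of the kind of decision an `onApproval` callback might compute; the allowlist and function names here are illustrative, not part of the SDK:

```typescript
// Hypothetical allowlist-based approval decision, mirroring the kind of
// check an `onApproval` callback could perform. Names are illustrative only.
const SAFE_TOOLS = new Set(['read_docs', 'search_issues']);

function decideApproval(toolName: string): { approve: boolean; reason: string } {
  // Approve automatically when the tool is known to be side-effect free;
  // everything else is rejected (or escalated to a human in a real app).
  if (SAFE_TOOLS.has(toolName)) {
    return { approve: true, reason: 'tool is on the allowlist' };
  }
  return { approve: false, reason: 'tool requires human review' };
}

console.log(decideApproval('read_docs').approve); // true
console.log(decideApproval('delete_repo').approve); // false
```

In a real integration this predicate would run inside the callback, with rejected calls routed through the same `interruptions` flow as local function tools.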
2 changes: 1 addition & 1 deletion docs/src/content/docs/guides/results.mdx
@@ -46,7 +46,7 @@ There are two ways you can access the inputs for your next turn:

## Last agent

-The `lastAgent` property contains the last agent that ran. Depending on your application, this is often useful for the next time the user inputs something. For example, if you have a frontline triage agent that hands off to a language-specific agent, you can store the last agent, and re-use it the next time the user messages the agent.
+The `lastAgent` property contains the last agent that ran. Depending on your application, this is often useful for the next time the user inputs something. For example, if you have a frontline triage agent that hands off to a language-specific agent, you can store the last agent, and reuse it the next time the user messages the agent.

In streaming mode it can also be useful to access the `currentAgent` property that's mapping to the current agent that is running.

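The store-and-reuse pattern this doc page describes for `lastAgent` can be sketched without the SDK as a per-conversation map with a triage fallback; all names here are illustrative:

```typescript
// Minimal per-conversation routing memory: remember which agent answered
// last, and fall back to the triage agent for brand-new conversations.
// This mirrors the `lastAgent` reuse pattern; names are illustrative only.
const lastAgentByConversation = new Map<string, string>();

function pickAgent(conversationId: string, triageAgent = 'triage'): string {
  return lastAgentByConversation.get(conversationId) ?? triageAgent;
}

function recordTurn(conversationId: string, lastAgent: string): void {
  lastAgentByConversation.set(conversationId, lastAgent);
}

console.log(pickAgent('conv-1')); // 'triage' (first message goes to triage)
recordTurn('conv-1', 'spanish-agent'); // triage handed off to a language agent
console.log(pickAgent('conv-1')); // 'spanish-agent' (reuse the last agent)
```

In a real application the stored value would be `result.lastAgent` (or its name) rather than a plain string, persisted wherever conversation state lives.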
3 changes: 1 addition & 2 deletions docs/src/content/docs/guides/voice-agents/quickstart.mdx
@@ -72,7 +72,7 @@ import thinClientExample from '../../../../../../examples/docs/voice-agents/thin

4. **Create a session**

-Unlike a regular agent, a Voice Agent is continously running and listening inside a `RealtimeSession` that handles the conversation and connection to the model over time. This session will also handle the audio processing, interruptions, and a lot of the other lifecycle functionality we will cover later on.
+Unlike a regular agent, a Voice Agent is continuously running and listening inside a `RealtimeSession` that handles the conversation and connection to the model over time. This session will also handle the audio processing, interruptions, and a lot of the other lifecycle functionality we will cover later on.

```typescript
import { RealtimeSession } from '@openai/agents-realtime';
@@ -113,7 +113,6 @@ import thinClientExample from '../../../../../../examples/docs/voice-agents/thin
From here you can start designing and building your own voice agent. Voice agents include a lot of the same features as regular agents, but have some of their own unique features.

- Learn how to give your voice agent:

- [Tools](/openai-agents-js/guides/voice-agents/build#tools)
- [Handoffs](/openai-agents-js/guides/voice-agents/build#handoffs)
- [Guardrails](/openai-agents-js/guides/voice-agents/build#guardrails)
2 changes: 1 addition & 1 deletion examples/docs/results/historyLoop.ts
@@ -7,7 +7,7 @@ const agent = new Agent({
});

let history: AgentInputItem[] = [
-// intial message
+// initial message
user('Are we there yet?'),
];

2 changes: 1 addition & 1 deletion examples/handoffs/index.ts
@@ -71,7 +71,7 @@ async function main() {
result = await run(secondAgent, [
...result.history,
{
-content: 'I live in New York City. Whats the population of the city?',
+content: "I live in New York City. What's the population of the city?",
role: 'user',
},
]);
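The multi-turn pattern in this example (carry `result.history` forward and append the next user message) can be sketched as plain array handling, with shapes simplified for illustration:

```typescript
// Carrying a conversation forward: the next run's input is the full prior
// history plus one new user message. Shapes are simplified for illustration;
// the SDK's history items carry more fields than this.
type HistoryItem = { role: 'user' | 'assistant'; content: string };

function nextTurnInput(history: HistoryItem[], userMessage: string): HistoryItem[] {
  // Never mutate the previous history; build a fresh input array.
  return [...history, { role: 'user', content: userMessage }];
}

const history: HistoryItem[] = [
  { role: 'user', content: 'What is the largest country in South America?' },
  { role: 'assistant', content: 'Brazil.' },
];
const input = nextTurnInput(
  history,
  "I live in New York City. What's the population of the city?",
);
console.log(input.length); // 3
console.log(history.length); // 2 (the original history is untouched)
```

The spread keeps the prior turns immutable, which matches how the example passes `...result.history` into the next `run` call.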
2 changes: 1 addition & 1 deletion examples/tools/web-search.ts
@@ -22,7 +22,7 @@ async function main() {
const messages = result.history;
messages.push({
role: 'user',
-content: 'search the web for more details of the highlighed player.',
+content: 'search the web for more details of the highlighted player.',
});

const result2 = await run(agent, messages);
16 changes: 8 additions & 8 deletions packages/agents-core/src/agent.ts
@@ -34,30 +34,30 @@ export type ToolUseBehaviorFlags = 'run_llm_again' | 'stop_on_first_tool';
export type ToolsToFinalOutputResult =
| {
/**
-* Wether this is the final output. If `false`, the LLM will run again and receive the tool call output
+* Whether this is the final output. If `false`, the LLM will run again and receive the tool call output
*/
isFinalOutput: false;
/**
-* Wether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
+* Whether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
*/
isInterrupted: undefined;
}
| {
isFinalOutput: false;
/**
-* Wether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
+* Whether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
*/
isInterrupted: true;
interruptions: RunToolApprovalItem[];
}
| {
/**
-* Wether this is the final output. If `false`, the LLM will run again and receive the tool call output
+* Whether this is the final output. If `false`, the LLM will run again and receive the tool call output
*/
isFinalOutput: true;

/**
-* Wether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
+* Whether the agent was interrupted by a tool approval. If `true`, the LLM will run again and receive the tool call output
*/
isInterrupted: undefined;

@@ -212,7 +212,7 @@ export interface AgentConfiguration<
* This lets you configure how tool use is handled.
* - run_llm_again: The default behavior. Tools are run, and then the LLM receives the results
* and gets to respond.
-* - stop_on_first_tool: The output of the frist tool call is used as the final output. This means
+* - stop_on_first_tool: The output of the first tool call is used as the final output. This means
* that the LLM does not process the result of the tool call.
* - A list of tool names: The agent will stop running if any of the tools in the list are called.
* The final output will be the output of the first matching tool call. The LLM does not process
@@ -227,7 +227,7 @@
toolUseBehavior: ToolUseBehavior;

/**
-* Wether to reset the tool choice to the default value after a tool has been called. Defaults
+* Whether to reset the tool choice to the default value after a tool has been called. Defaults
* to `true`. This ensures that the agent doesn't enter an infinite loop of tool usage.
*/
resetToolChoice: boolean;
@@ -383,7 +383,7 @@ export class Agent<
}

/**
-* Ouput schema name
+* Output schema name.
*/
get outputSchemaName(): string {
if (this.outputType === 'text') {
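The `toolUseBehavior` options documented in the `agent.ts` hunks above can be sketched as a pure function from tool results to a `ToolsToFinalOutputResult`-shaped value; types are simplified and illustrative, not the SDK's actual implementation:

```typescript
// Simplified decision logic for tool-use behavior. With 'stop_on_first_tool',
// the first tool call's output becomes the final output and the LLM does not
// process the tool result; with 'run_llm_again', the loop continues and the
// LLM receives the tool outputs. Types are simplified for illustration.
type ToolResult = { toolName: string; output: string };
type FinalOutputDecision =
  | { isFinalOutput: true; finalOutput: string }
  | { isFinalOutput: false };

function decideFinalOutput(
  behavior: 'run_llm_again' | 'stop_on_first_tool',
  results: ToolResult[],
): FinalOutputDecision {
  if (behavior === 'stop_on_first_tool' && results.length > 0) {
    return { isFinalOutput: true, finalOutput: results[0].output };
  }
  // Default: hand the tool outputs back to the LLM for another turn.
  return { isFinalOutput: false };
}

const decision = decideFinalOutput('stop_on_first_tool', [
  { toolName: 'get_weather', output: 'Sunny, 22C' },
]);
console.log(decision); // { isFinalOutput: true, finalOutput: 'Sunny, 22C' }
```

The discriminated union mirrors the shape of `ToolsToFinalOutputResult`, which is why the doc comments fixed in this hunk repeat for each branch.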
2 changes: 1 addition & 1 deletion packages/agents-core/src/extensions/handoffFilters.ts
@@ -17,7 +17,7 @@ const TOOL_TYPES = new Set([
]);

/**
-* Filters out all tool items: file search, web serach and function calls+output
+* Filters out all tool items: file search, web search and function calls+output
* @param handoffInputData
* @returns
*/
2 changes: 1 addition & 1 deletion packages/agents-core/src/handoff.ts
@@ -119,7 +119,7 @@ export class Handoff<
/**
* The function that invokes the handoff. The parameters passed are:
* 1. The handoff run context
-* 2. The arugments from the LLM, as a JSON string. Empty string if inputJsonSchema is empty.
+* 2. The arguments from the LLM, as a JSON string. Empty string if inputJsonSchema is empty.
*
* Must return an agent
*/
2 changes: 1 addition & 1 deletion packages/agents-core/src/run.ts
@@ -767,7 +767,7 @@ export class Runner extends RunHooks<any, AgentOutputType<unknown>> {

if (!finalResponse) {
throw new ModelBehaviorError(
-'Model did not procude a final response!',
+'Model did not produce a final response!',
result.state,
);
}
4 changes: 2 additions & 2 deletions packages/agents-core/src/runImplementation.ts
@@ -144,7 +144,7 @@ export function processModelResponse<TContext>(
});
if (!mcpServerTool.providerData.on_approval) {
// When onApproval function exists, it confirms the approval right after this.
-// Thus, this approval item must be appended only for the next turn interrpution patterns.
+// Thus, this approval item must be appended only for the next turn interruption patterns.
items.push(approvalItem);
}
}
@@ -956,7 +956,7 @@ export async function executeHandoffCalls<

if (runHandoffs.length > 1) {
// multiple handoffs. Ignoring all but the first one by adding reject responses for those
-const outputMessage = 'Multiple handoffs detected, ignorning this one.';
+const outputMessage = 'Multiple handoffs detected, ignoring this one.';
for (let i = 1; i < runHandoffs.length; i++) {
newStepItems.push(
new RunToolCallOutputItem(
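The multiple-handoff guard in the `executeHandoffCalls` hunk above (honor the first handoff, reject the rest with the corrected message) can be sketched as a pure function; the shapes here are simplified and illustrative:

```typescript
// When the model emits several handoff calls in one turn, only the first is
// honored; the rest get a reject message instead of being executed.
// Simplified, illustrative shapes, not the SDK's internal types.
type HandoffCall = { targetAgent: string };
type HandoffOutcome =
  | { kind: 'run'; targetAgent: string }
  | { kind: 'rejected'; targetAgent: string; message: string };

function resolveHandoffs(calls: HandoffCall[]): HandoffOutcome[] {
  return calls.map((call, i) =>
    i === 0
      ? { kind: 'run', targetAgent: call.targetAgent }
      : {
          kind: 'rejected',
          targetAgent: call.targetAgent,
          message: 'Multiple handoffs detected, ignoring this one.',
        },
  );
}

const outcomes = resolveHandoffs([
  { targetAgent: 'billing' },
  { targetAgent: 'refunds' },
]);
console.log(outcomes[0].kind); // 'run'
console.log(outcomes[1].kind); // 'rejected'
```

Emitting reject responses (rather than silently dropping the extra calls) keeps the model informed about why its later handoffs did not run.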
4 changes: 2 additions & 2 deletions packages/agents-core/src/tool.ts
@@ -238,7 +238,7 @@ export type FunctionToolResult<
}
| {
/**
-* Indiciates that the tool requires approval before it can be called.
+* Indicates that the tool requires approval before it can be called.
*/
type: 'function_approval';
/**
@@ -252,7 +252,7 @@
}
| {
/**
-* Indiciates that the tool requires approval before it can be called.
+* Indicates that the tool requires approval before it can be called.
*/
type: 'hosted_mcp_tool_approval';
/**
2 changes: 1 addition & 1 deletion packages/agents-core/src/tracing/processor.ts
@@ -15,7 +15,7 @@ type Span = TSpan<any>;
*/
export interface TracingProcessor {
/**
-* Called when the trace processor should start procesing traces.
+* Called when the trace processor should start processing traces.
* Only available if the processor is performing tasks like exporting traces in a loop to start
* the loop
*/
2 changes: 1 addition & 1 deletion packages/agents-core/src/types/protocol.ts
@@ -5,7 +5,7 @@ import { z } from '@openai/zod/v3';
// ----------------------------

/**
-* Every item in the protocol provides a `providerData` field to accomodate custom functionality
+* Every item in the protocol provides a `providerData` field to accommodate custom functionality
* or new fields
*/
export const SharedBase = z.object({
2 changes: 1 addition & 1 deletion packages/agents-openai/CHANGELOG.md
@@ -69,7 +69,7 @@

### Patch Changes

-- adeb218: Ignore empty tool list when callling LLM
+- adeb218: Ignore empty tool list when calling LLM
- cbd4deb: feat: handle unknown hosted tools in responses model
- Updated dependencies [544ed4b]
- @openai/[email protected]
2 changes: 1 addition & 1 deletion packages/agents-realtime/src/clientMessages.ts
@@ -57,7 +57,7 @@ export type RealtimeTurnDetectionConfigAsIs = {
threshold?: number;
};

-// The Realtime API accepts snake_cased keys, so when using this, this SDK coverts the keys to snake_case ones before passing it to the API
+// The Realtime API accepts snake_cased keys, so when using this, this SDK converts the keys to snake_case ones before passing it to the API.
export type RealtimeTurnDetectionConfigCamelCase = {
type?: 'semantic_vad' | 'server_vad';
createResponse?: boolean;
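The camelCase-to-snake_case conversion this comment describes can be sketched as a small recursive key transform; this is a simplified stand-in, not the SDK's actual implementation:

```typescript
// Convert camelCase object keys to snake_case recursively, e.g. turning
// { createResponse: true } into { create_response: true } before sending the
// config to an API that expects snake_cased keys. Simplified, illustrative code.
function toSnakeCase(key: string): string {
  return key.replace(/([A-Z])/g, '_$1').toLowerCase();
}

function snakeCaseKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(snakeCaseKeys);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        toSnakeCase(k),
        snakeCaseKeys(v),
      ]),
    );
  }
  return value;
}

console.log(snakeCaseKeys({ createResponse: true, interruptResponse: false }));
// { create_response: true, interrupt_response: false }
```

Recursing into nested objects and arrays matters because turn-detection config can contain nested settings, all of which the API expects in snake_case.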
2 changes: 1 addition & 1 deletion packages/agents-realtime/src/openaiRealtimeBase.ts
@@ -50,7 +50,7 @@ export const DEFAULT_OPENAI_REALTIME_MODEL: OpenAIRealtimeModels =
'gpt-4o-realtime-preview';

/**
-* The default session config that gets send over during session connection unless overriden
+* The default session config that gets send over during session connection unless overridden
* by the user.
*/
export const DEFAULT_OPENAI_REALTIME_SESSION_CONFIG: Partial<RealtimeSessionConfig> =