32 changes: 32 additions & 0 deletions docs/ja/usage.md
@@ -89,11 +89,43 @@ class MyHooks(RunHooks):
print(f"{agent.name} → {u.requests} requests, {u.total_tokens} total tokens")
```

## Modifying chat history in hooks

`RunContextWrapper` also includes `message_history`, so hooks can read and write the conversation directly.

- `get_messages()` returns the full history, including the original input, model outputs, and pending injections, as a list of `ResponseInputItem`s.
- `add_message(agent=..., message=...)` queues an arbitrary message (a string, a dict, or a list of `ResponseInputItem`s). Queued messages are appended to the LLM input immediately and are exposed as `InjectedInputItem`s in the run result and stream events.
- `override_next_turn(messages)` replaces the entire history sent to the next LLM call. Use it when you need to rewrite the history after a guardrail or external review.

```python
class BroadcastHooks(RunHooks):
    def __init__(self, reviewer_name: str):
        self.reviewer_name = reviewer_name

    async def on_llm_start(
        self,
        context: RunContextWrapper,
        agent: Agent,
        _instructions: str | None,
        _input_items: list[TResponseInputItem],
    ) -> None:
        context.message_history.add_message(
            agent=agent,
            message={
                "role": "user",
                "content": f"{self.reviewer_name}: Please cite the appendix data first.",
            },
        )
```

> **Note:** When running with `conversation_id` or `previous_response_id`, the history is managed by the server-side thread, so `message_history.override_next_turn()` cannot be used for that run.

## API Reference

For detailed API documentation, see:

- [`Usage`][agents.usage.Usage] - Usage tracking data structure
- [`RequestUsage`][agents.usage.RequestUsage] - Per-request usage details
- [`RunContextWrapper`][agents.run.RunContextWrapper] - Access usage from the run context
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
Copilot AI commented on Nov 24, 2025:

Incorrect module reference in documentation link. The link references [agents.run_context.MessageHistory] but based on the code structure, MessageHistory is defined in src/agents/message_history.py, so the reference should be [agents.message_history.MessageHistory].

Suggested change (replace the first line with the second):
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
- [`MessageHistory`][agents.message_history.MessageHistory] - Inspect or edit the conversation from hooks
- [`RunHooks`][agents.run.RunHooks] - Hook into the usage tracking lifecycle
32 changes: 32 additions & 0 deletions docs/ko/usage.md
@@ -89,11 +89,43 @@ class MyHooks(RunHooks):
print(f"{agent.name} → {u.requests} requests, {u.total_tokens} total tokens")
```

## Modifying chat history in hooks

`RunContextWrapper` also includes `message_history`, so hooks can read or modify the current conversation directly.

- `get_messages()` returns the full history, including the original input, model outputs, and pending injections, as a list of `ResponseInputItem`s.
- `add_message(agent=..., message=...)` queues a custom message (a string, a dict, or a list of `ResponseInputItem`s). Queued messages are appended to the LLM input immediately and are exposed as `InjectedInputItem`s in the run result and stream events.
- `override_next_turn(messages)` replaces the entire input of the next LLM call. This is useful when the history needs to be rewritten based on a guardrail or external review.

```python
class BroadcastHooks(RunHooks):
    def __init__(self, reviewer_name: str):
        self.reviewer_name = reviewer_name

    async def on_llm_start(
        self,
        context: RunContextWrapper,
        agent: Agent,
        _instructions: str | None,
        _input_items: list[TResponseInputItem],
    ) -> None:
        context.message_history.add_message(
            agent=agent,
            message={
                "role": "user",
                "content": f"{self.reviewer_name}: Cite the appendix data before answering.",
            },
        )
```

> **Note:** When running with `conversation_id` or `previous_response_id`, the server-side conversation thread manages the input, so `message_history.override_next_turn()` cannot be used for that run.

## API Reference

For detailed API documentation, see:

- [`Usage`][agents.usage.Usage] - Usage tracking data structure
- [`RequestUsage`][agents.usage.RequestUsage] - Per-request usage details
- [`RunContextWrapper`][agents.run.RunContextWrapper] - Access usage from the run context
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
Copilot AI commented on Nov 24, 2025:

Incorrect module reference in documentation link. The link references [agents.run_context.MessageHistory] but based on the code structure, MessageHistory is defined in src/agents/message_history.py, so the reference should be [agents.message_history.MessageHistory].

Suggested change (replace the first line with the second):
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
- [`MessageHistory`][agents.message_history.MessageHistory] - Inspect or edit the conversation from hooks
- [`RunHooks`][agents.run.RunHooks] - Hook into the usage tracking lifecycle
39 changes: 39 additions & 0 deletions docs/usage.md
@@ -85,11 +85,50 @@ class MyHooks(RunHooks):
print(f"{agent.name} → {u.requests} requests, {u.total_tokens} total tokens")
```

## Modifying chat history in hooks

`RunContextWrapper` also exposes `message_history`, giving hooks a mutable view of the
conversation:

- `get_messages()` returns the full transcript (original input, model outputs, and pending
injections) as a list of `ResponseInputItem` dictionaries.
- `add_message(agent=..., message=...)` queues custom messages (string, dict, or list of
`ResponseInputItem`s). Pending messages are appended to the current LLM input immediately and are
emitted as `InjectedInputItem`s in the run result or stream events.
- `override_next_turn(messages)` replaces the entire input for the upcoming LLM call. Use this to
rewrite history after a guardrail or external reviewer intervenes.

```python
class BroadcastHooks(RunHooks):
    def __init__(self, reviewer_name: str):
        self.reviewer_name = reviewer_name

    async def on_llm_start(
        self,
        context: RunContextWrapper,
        agent: Agent,
        _instructions: str | None,
        _input_items: list[TResponseInputItem],
    ) -> None:
        context.message_history.add_message(
            agent=agent,
            message={
                "role": "user",
                "content": f"{self.reviewer_name}: Please cite the appendix before answering.",
            },
        )
```
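
The hook above queues one extra user message. `get_messages()` and `override_next_turn()` can be combined to rewrite the transcript instead; below is a minimal sketch, assuming the returned items behave like plain dicts with a `content` field (`RedactingHooks` and `blocked_term` are illustrative names, not part of the SDK):

```python
from agents import Agent, RunContextWrapper, RunHooks, TResponseInputItem


class RedactingHooks(RunHooks):
    def __init__(self, blocked_term: str):
        self.blocked_term = blocked_term

    async def on_llm_start(
        self,
        context: RunContextWrapper,
        agent: Agent,
        _instructions: str | None,
        _input_items: list[TResponseInputItem],
    ) -> None:
        # Full transcript: original input, model outputs, and pending injections.
        history = context.message_history.get_messages()

        # Drop items whose content mentions the blocked term. The string check is a
        # simplification; content payloads may be nested structures rather than strings.
        filtered = [
            item
            for item in history
            if self.blocked_term not in str(item.get("content", ""))
        ]

        # Send the filtered transcript as the entire input of the upcoming LLM call.
        if len(filtered) != len(history):
            context.message_history.override_next_turn(filtered)
```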

> **Note:** When running with `conversation_id` or `previous_response_id`, the conversation is
> managed by the server-side thread, so `message_history.override_next_turn()` is disabled for
> that run.
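
To wire these hooks into a run, pass them when starting the runner; messages queued with `add_message()` then surface as `InjectedInputItem`s among the run items. A hedged usage sketch (the agent name, instructions, and input prompt are placeholders):

```python
import asyncio

from agents import Agent, InjectedInputItem, Runner


async def main() -> None:
    agent = Agent(name="Analyst", instructions="Answer using the provided appendix.")
    hooks = BroadcastHooks(reviewer_name="Reviewer")

    result = await Runner.run(agent, "Summarize the quarterly report.", hooks=hooks)

    # Messages queued via add_message() appear as InjectedInputItem run items.
    injected = [item for item in result.new_items if isinstance(item, InjectedInputItem)]
    print(f"{len(injected)} injected message(s)")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```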

## API Reference

For detailed API documentation, see:

- [`Usage`][agents.usage.Usage] - Usage tracking data structure
- [`RequestUsage`][agents.usage.RequestUsage] - Per-request usage details
- [`RunContextWrapper`][agents.run.RunContextWrapper] - Access usage from run context
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
Copilot AI commented on Nov 24, 2025:

Incorrect module reference in documentation link. The link references [agents.run_context.MessageHistory] but based on the code structure, MessageHistory is defined in src/agents/message_history.py, so the reference should be [agents.message_history.MessageHistory].

Suggested change (replace the first line with the second):
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
- [`MessageHistory`][agents.message_history.MessageHistory] - Inspect or edit the conversation from hooks
- [`RunHooks`][agents.run.RunHooks] - Hook into usage tracking lifecycle
32 changes: 32 additions & 0 deletions docs/zh/usage.md
@@ -89,11 +89,43 @@ class MyHooks(RunHooks):
print(f"{agent.name} → {u.requests} requests, {u.total_tokens} total tokens")
```

## Modifying chat history in hooks

`RunContextWrapper` also exposes `message_history`, allowing hooks to read or modify the current conversation directly:

- `get_messages()` returns the full conversation (the original input, model outputs, and all pending injections) as a list of `ResponseInputItem`s.
- `add_message(agent=..., message=...)` queues a custom message (a string, a dict, or a list of `ResponseInputItem`s). The message is appended to the input of the current LLM call immediately and appears as an `InjectedInputItem` in the run result or stream events.
- `override_next_turn(messages)` completely replaces the input of the next LLM call with custom content, which is useful for rewriting the context after a guardrail or human review.

```python
class BroadcastHooks(RunHooks):
    def __init__(self, reviewer_name: str):
        self.reviewer_name = reviewer_name

    async def on_llm_start(
        self,
        context: RunContextWrapper,
        agent: Agent,
        _instructions: str | None,
        _input_items: list[TResponseInputItem],
    ) -> None:
        context.message_history.add_message(
            agent=agent,
            message={
                "role": "user",
                "content": f"{self.reviewer_name}: Please cite the data in the appendix before answering.",
            },
        )
```

> **Note:** When `conversation_id` or `previous_response_id` is specified for a run, the conversation is maintained by the server-side thread and `message_history.override_next_turn()` cannot be called.

## API Reference

For detailed API documentation, see:

- [`Usage`][agents.usage.Usage] - Usage tracking data structure
- [`RequestUsage`][agents.usage.RequestUsage] - Per-request usage details
- [`RunContextWrapper`][agents.run.RunContextWrapper] - Access usage from the run context
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
Copilot AI commented on Nov 24, 2025:

Incorrect module reference in documentation link. The link references [agents.run_context.MessageHistory] but based on the code structure, MessageHistory is defined in src/agents/message_history.py, so the reference should be [agents.message_history.MessageHistory].

Suggested change (replace the first line with the second):
- [`MessageHistory`][agents.run_context.MessageHistory] - Inspect or edit the conversation from hooks
- [`MessageHistory`][agents.message_history.MessageHistory] - Inspect or edit the conversation from hooks
- [`RunHooks`][agents.run.RunHooks] - Hook into the usage tracking lifecycle
4 changes: 4 additions & 0 deletions src/agents/__init__.py
@@ -50,6 +50,7 @@
from .items import (
HandoffCallItem,
HandoffOutputItem,
InjectedInputItem,
ItemHelpers,
MessageOutputItem,
ModelResponse,
@@ -66,6 +67,7 @@
SessionABC,
SQLiteSession,
)
from .message_history import MessageHistory
from .model_settings import ModelSettings
from .models.interface import Model, ModelProvider, ModelTracing
from .models.multi_provider import MultiProvider
@@ -276,6 +278,7 @@ def enable_verbose_stdout_logging():
"RunItem",
"HandoffCallItem",
"HandoffOutputItem",
"InjectedInputItem",
"ToolCallItem",
"ToolCallOutputItem",
"ReasoningItem",
@@ -287,6 +290,7 @@ def enable_verbose_stdout_logging():
"SQLiteSession",
"OpenAIConversationsSession",
"RunContextWrapper",
"MessageHistory",
"TContext",
"RunErrorDetails",
"RunResult",