10 changes: 10 additions & 0 deletions src/oss/langchain/middleware/built-in.mdx
@@ -96,6 +96,16 @@ const agent = createAgent({
<Accordion title="Configuration options">

:::python
<Tip>
The `fraction` conditions for `trigger` and `keep` (shown below) rely on a chat model's [profile data](/oss/langchain/models#model-profiles) when using `langchain>=1.1`. If profile data are not available, use another condition or specify the values manually:
```python
custom_profile = {
"max_input_tokens": 100_000,
# ...
}
model = init_chat_model("...", profile=custom_profile)
```
</Tip>
<ParamField body="model" type="string | BaseChatModel" required>
Model for generating summaries. Can be a model identifier string (e.g., `'openai:gpt-4o-mini'`) or a `BaseChatModel` instance. See @[`init_chat_model`][init_chat_model(model)] for more information.
</ParamField>
82 changes: 82 additions & 0 deletions src/oss/langchain/models.mdx
@@ -1190,6 +1190,88 @@ LangChain supports all major model providers, including OpenAI, Anthropic, Googl

## Advanced topics

### Model profiles

<Warning> This is a beta feature. The format of model profiles is subject to change. </Warning>

<Info> Model profiles require `langchain>=1.1`. </Info>

:::python
LangChain chat models expose supported features and capabilities through a `.profile` attribute:
```python
model.profile
# {
# "max_input_tokens": 400000,
# "image_inputs": True,
# "reasoning_output": True,
# "tool_calling": True,
# ...
# }
```
Refer to the full set of fields in the [API reference](https://reference.langchain.com/python/langchain_core/language_models/).

Much of the model profile data is powered by the [models.dev](https://github.com/sst/models.dev) project, an open source initiative that provides model capability data. These data are augmented with additional fields for use with LangChain, and the augmentations are kept aligned with the upstream project as it evolves.

Model profile data allow applications to adapt to model capabilities dynamically. For example:
1. [Summarization middleware](/oss/langchain/middleware/built-in#summarization) can trigger summarization based on a model's context window size.
2. [Structured output](/oss/langchain/structured-output) strategies in `create_agent` can be inferred automatically (e.g., by checking support for native structured output features).
3. Model inputs can be gated based on supported [modalities](#multimodal) and maximum input tokens.
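As a sketch of the third use case, a plain-dict capability check (the helper is illustrative, not a LangChain API; field names follow the example above):

```python
def check_input_fits(profile: dict, token_count: int) -> bool:
    """Return True if an input of token_count tokens fits the model's context window.

    Falls back to permissive behavior when profile data is missing.
    """
    max_input = profile.get("max_input_tokens")
    if max_input is None:
        return True  # no data available; don't block the request
    return token_count <= max_input

profile = {"max_input_tokens": 400_000, "image_inputs": True}
check_input_fits(profile, 10_000)   # True
check_input_fits(profile, 500_000)  # False
```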

#### Updating or overwriting profile data
Model profile data can be updated or overridden if they are missing, stale, or incorrect.

**Option 1 (quick fix)**

You can instantiate a chat model with any valid profile:
```python
custom_profile = {
"max_input_tokens": 100_000,
"tool_calling": True,
"structured_output": True,
# ...
}
model = init_chat_model("...", profile=custom_profile)
```

The `profile` is also a regular `dict` and can be updated in place. If the model instance is shared, consider using
```python
new_profile = model.profile | {"key": "value"}
model.model_copy(update={"profile": new_profile})
```
to avoid mutating shared state.
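The `|` merge produces a new `dict` rather than mutating the original, which plain dicts illustrate:

```python
base = {"max_input_tokens": 400_000, "tool_calling": True}
merged = base | {"max_input_tokens": 100_000}

base["max_input_tokens"]    # still 400_000 — the original is untouched
merged["max_input_tokens"]  # 100_000
merged["tool_calling"]      # True — unmerged keys carry over
```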

**Option 2 (fix data upstream)**

The primary source for the data is the [models.dev](https://models.dev/) project. These data are merged with additional fields and overrides in LangChain [integration packages](/oss/python/integrations/providers/overview) and are shipped with those packages.

Model profile data can be updated through the following process:
1. (If needed) update the source data at [models.dev](https://models.dev/) through a pull request to its [repository on GitHub](https://github.com/sst/models.dev).
2. (If needed) update additional fields and overrides in `langchain_<package>/data/profile_augmentations.toml` through a pull request to the LangChain [integration package](/oss/python/integrations/providers/overview).
3. Use the [langchain-model-profiles](https://pypi.org/project/langchain-model-profiles/) CLI tool to pull the latest data from [models.dev](https://models.dev/), merge in the augmentations, and update the profile data:

```bash
pip install langchain-model-profiles
```
```bash
langchain-profiles refresh --provider <provider> --data-dir <data_dir>
```
That command will:
- Download the latest data for `<provider>` from models.dev
- Merge in augmentations from `profile_augmentations.toml` in `<data_dir>`
- Write the merged profiles to `profiles.py` in `<data_dir>`
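Conceptually, the merge step behaves like the following sketch (an illustration of the override semantics only, not the CLI's actual implementation; the model name is made up):

```python
def merge_profiles(upstream: dict, augmentations: dict) -> dict:
    """Illustrative only: per-model augmentation values override upstream fields."""
    merged = {model: dict(fields) for model, fields in upstream.items()}
    for model, overrides in augmentations.items():
        merged.setdefault(model, {}).update(overrides)
    return merged

upstream = {"claude-sonnet-4": {"max_input_tokens": 200_000}}
augmentations = {"claude-sonnet-4": {"structured_output": True}}
merge_profiles(upstream, augmentations)
# {"claude-sonnet-4": {"max_input_tokens": 200_000, "structured_output": True}}
```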

For example, from [libs/partners/anthropic](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) in the LangChain monorepo:
```bash
uv run --with langchain-model-profiles langchain-profiles refresh --provider anthropic --data-dir langchain_anthropic/data
```

:::

:::js
LangChain chat models expose supported features and capabilities through a `.profile` accessor.

:::

### Multimodal

Certain models can process and return non-textual data such as images, audio, and video. You can pass non-textual data to a model by providing [content blocks](/oss/langchain/messages#message-content).
16 changes: 14 additions & 2 deletions src/oss/langchain/structured-output.mdx
@@ -29,8 +29,20 @@ Controls how the agent returns structured data:
- **`None`**: No structured output

When a schema type is provided directly, LangChain automatically chooses:
- `ProviderStrategy` for models supporting native structured output (e.g. [OpenAI](/oss/integrations/providers/openai), [Anthropic](/oss/integrations/providers/anthropic), or [Grok](/oss/integrations/providers/xai)).
- `ToolStrategy` for all other models.

<Tip>
Support for native structured output features is read dynamically from the model's [profile data](/oss/langchain/models#model-profiles) when using `langchain>=1.1`. If profile data are not available, specify a strategy explicitly or set the profile manually:
```python
custom_profile = {
"structured_output": True,
# ...
}
model = init_chat_model("...", profile=custom_profile)
```
If tools are specified, the model must support simultaneous use of tools and structured output.
</Tip>
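The automatic choice described above can be sketched as follows (a hypothetical helper, not the actual `create_agent` internals):

```python
def choose_strategy(profile: dict) -> str:
    """Hypothetical sketch of the automatic choice: prefer the provider's
    native structured output when the profile reports support."""
    return "ProviderStrategy" if profile.get("structured_output") else "ToolStrategy"

choose_strategy({"structured_output": True})  # "ProviderStrategy"
choose_strategy({})                           # "ToolStrategy"
```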

The structured response is returned in the `structured_response` key of the agent's final state.
:::