From caa67674aefc47aa84c0cfe9fe5b8f7b35ee28ca Mon Sep 17 00:00:00 2001
From: Chester Curme
Date: Fri, 21 Nov 2025 10:29:17 -0500
Subject: [PATCH 1/7] start on profiles section

---
 src/oss/langchain/models.mdx | 83 ++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx
index fabb163826..1e249a5a3c 100644
--- a/src/oss/langchain/models.mdx
+++ b/src/oss/langchain/models.mdx
@@ -1190,6 +1190,89 @@ LangChain supports all major model providers, including OpenAI, Anthropic, Googl
 
 ## Advanced topics
 
+### Model profiles
+
+
+  Model profiles require `langchain>=1.1`.
+
+
+:::python
+LangChain chat models expose supported features and capabilities through a `.profile` attribute:
+```python
+model.profile
+# {
+#   "max_input_tokens": 400000,
+#   "image_inputs": True,
+#   "reasoning_output": True,
+#   "tool_calling": True,
+#   ...
+# }
+```
+Refer to the full set of fields in the [API reference](https://reference.langchain.com/python/langchain_core/language_models/).
+
+Much of the model profile data is powered by the [models.dev](https://github.com/sst/models.dev) project, an open source initiative that provides model capability data. These data are augmented with additional fields for use with LangChain. These augmentations are kept aligned with the upstream project as it evolves.
+
+Model profile data allow applications to adapt to model capabilities dynamically. For example:
+1. [Summarization middleware](/oss/langchain/middleware/built-in#summarization) can trigger summarization based on a model's context window size.
+2. [Structured output](/oss/langchain/structured-output) strategies in `create_agent` can be inferred automatically (e.g., by checking support for native structured output features).
+3. Model inputs can be gated based on supported [modalities](#multimodal) and maximum input tokens.
+
+
+Model profile data can be changed if it is missing, stale, or incorrect.
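The capability gating in the list above can be pictured as a plain check against the profile dictionary. The helper below is an illustrative sketch, not a LangChain API; the function name and parameters are hypothetical:

```python
def fits_model_limits(profile, token_count, needs_images=False):
    """Gate a request using fields from a model profile dict.

    `profile` mirrors the shape of `model.profile`; this helper is a
    hypothetical sketch, not part of LangChain.
    """
    if not profile:
        return True  # no profile data: defer to the provider
    max_tokens = profile.get("max_input_tokens")
    if max_tokens is not None and token_count > max_tokens:
        return False  # input exceeds the advertised context window
    if needs_images and not profile.get("image_inputs", False):
        return False  # model does not advertise image inputs
    return True


print(fits_model_limits({"max_input_tokens": 400_000, "image_inputs": True}, 1_000))  # prints True
```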
+
+**Option 1 (quick fix)**
+
+You can instantiate a chat model with any valid profile:
+```python
+custom_profile = {
+    "max_input_tokens": 100_000,
+    "tool_calling": True,
+    "structured_output": True,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+
+The `profile` is also a regular `dict` and can be updated in place. If the model instance is shared, consider using
+```python
+new_profile = model.profile | {"key": "value"}
+model.model_copy(update={"profile": new_profile})
+```
+to avoid mutating shared state.
+
+**Option 2 (fix data upstream)**
+
+The primary source for the data is the [models.dev](https://models.dev/) project. These data are merged with additional fields and overrides in LangChain [integration packages](/oss/python/integrations/providers/overview) and are shipped with those packages.
+
+Model profile data can be updated through the following process:
+1. (If needed) update the source data at [models.dev](https://models.dev/) through a pull request to its [repository on GitHub](https://github.com/sst/models.dev).
+2. (If needed) update additional fields and overrides in `langchain_<provider>/data/profile_augmentations.toml` through a pull request to the LangChain [integration package](/oss/python/integrations/providers/overview).
+3. Use the [langchain-model-profiles](https://pypi.org/project/langchain-model-profiles/) CLI tool to pull the latest data from [models.dev](https://models.dev/), merge in the augmentations, and update the profile data:
+
+```bash
+pip install langchain-model-profiles
+```
+```bash
+langchain-profiles refresh --provider <provider> --data-dir <data-dir>
+```
+That command will:
+- Download the latest data for `<provider>` from models.dev
+- Merge in augmentations from `profile_augmentations.toml` in `<data-dir>`
+- Write the merged profiles to `profiles.py` in `<data-dir>`
+
+For example, from [libs/partners/anthropic](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) in the LangChain monorepo:
+```bash
+uv run --with langchain-model-profiles langchain-profiles refresh --provider anthropic --data-dir langchain_anthropic/data
+```
+
+
+:::
+
+:::js
+LangChain chat models expose supported features and capabilities through a `.profile` accessor:
+
+:::
+
 ### Multimodal
 
 Certain models can process and return non-textual data such as images, audio, and video. You can pass non-textual data to a model by providing [content blocks](/oss/langchain/messages#message-content).

From 8d65164085a5050c986b08fdd6428b1ec4cbf272 Mon Sep 17 00:00:00 2001
From: Chester Curme
Date: Fri, 21 Nov 2025 10:29:29 -0500
Subject: [PATCH 2/7] add callout to summarization middleware

---
 src/oss/langchain/middleware/built-in.mdx | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/oss/langchain/middleware/built-in.mdx b/src/oss/langchain/middleware/built-in.mdx
index 16b68a1f5c..ce1eb1e193 100644
--- a/src/oss/langchain/middleware/built-in.mdx
+++ b/src/oss/langchain/middleware/built-in.mdx
@@ -96,6 +96,16 @@ const agent = createAgent({
 
 :::python
 
+
+The `fraction` conditions below rely on a chat model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1`. If data are not available, use another condition or specify manually:
+```python
+custom_profile = {
+    "max_input_tokens": 100_000,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+
 
   Model for generating summaries. Can be a model identifier string (e.g., `'openai:gpt-4o-mini'`) or a `BaseChatModel` instance. See @[`init_chat_model`][init_chat_model(model)] for more information.

From f500ca9f09ebcd0dbcd19da701c80fdb924d59d4 Mon Sep 17 00:00:00 2001
From: Chester Curme
Date: Fri, 21 Nov 2025 10:37:37 -0500
Subject: [PATCH 3/7] add callout to response_format

---
 src/oss/langchain/structured-output.mdx | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/src/oss/langchain/structured-output.mdx b/src/oss/langchain/structured-output.mdx
index 71eb3c34a6..9589dabd5f 100644
--- a/src/oss/langchain/structured-output.mdx
+++ b/src/oss/langchain/structured-output.mdx
@@ -29,8 +29,20 @@ Controls how the agent returns structured data:
 - **`None`**: No structured output
 
 When a schema type is provided directly, LangChain automatically chooses:
-- `ProviderStrategy` for models supporting native structured output (e.g. [OpenAI](/oss/integrations/providers/openai), [Grok](/oss/integrations/providers/xai))
-- `ToolStrategy` for all other models
+- `ProviderStrategy` for models supporting native structured output (e.g. [OpenAI](/oss/integrations/providers/openai), [Anthropic](/oss/integrations/providers/anthropic), or [Grok](/oss/integrations/providers/xai)).
+- `ToolStrategy` for all other models.
+
+
+Support for native structured output features is read dynamically from the model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1`. If data are not available, specify a strategy explicitly or provide a profile manually:
+```python
+custom_profile = {
+    "structured_output": True,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+If tools are specified, the model must support simultaneous use of tools and structured output.
+
 
 The structured response is returned in the `structured_response` key of the agent's final state.
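The automatic choice described above amounts to a simple lookup against the profile. A minimal sketch with illustrative names, not the actual `create_agent` internals:

```python
def choose_structured_output_strategy(profile):
    """Pick a structured-output strategy from a model profile dict.

    Hypothetical sketch of the selection rule described above; not the
    actual LangChain implementation.
    """
    if profile and profile.get("structured_output"):
        return "provider"  # native structured output support
    return "tool"  # fall back to an extra tool call


print(choose_structured_output_strategy({"structured_output": True}))  # prints provider
```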
::: From 14811713d66dd80b6f9a6c248824dd5d93cee48c Mon Sep 17 00:00:00 2001 From: Chester Curme Date: Fri, 21 Nov 2025 10:41:12 -0500 Subject: [PATCH 4/7] nit --- src/oss/langchain/models.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx index 1e249a5a3c..7244f1fec6 100644 --- a/src/oss/langchain/models.mdx +++ b/src/oss/langchain/models.mdx @@ -1269,7 +1269,7 @@ uv run --with langchain-model-profiles --provider anthropic --data-dir langchain ::: :::js -LangChain chat models expose supported features and capabilities through a `.profile` accessor: +LangChain chat models expose supported features and capabilities through a `.profile` accessor. ::: From ed46afadf4a245fed4cd332592320c256f904ef4 Mon Sep 17 00:00:00 2001 From: Chester Curme Date: Fri, 21 Nov 2025 14:46:19 -0500 Subject: [PATCH 5/7] add beta callout --- src/oss/langchain/models.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx index 7244f1fec6..f38faf08b5 100644 --- a/src/oss/langchain/models.mdx +++ b/src/oss/langchain/models.mdx @@ -1192,9 +1192,9 @@ LangChain supports all major model providers, including OpenAI, Anthropic, Googl ### Model profiles - - Model profiles require `langchain>=1.1`. - + This is a beta feature. The format of model profiles is subject to change. + + Model profiles require `langchain>=1.1`. 
:::python LangChain chat models expose supported features and capabilities through a `.profile` attribute: From 8d74bdba3b3cd9f295bcd08a57121c3fb6fddefd Mon Sep 17 00:00:00 2001 From: Chester Curme Date: Fri, 21 Nov 2025 16:25:55 -0500 Subject: [PATCH 6/7] cr --- src/oss/langchain/middleware/built-in.mdx | 2 +- src/oss/langchain/models.mdx | 3 +-- 2 files changed, 2 insertions(+), 3 deletions(-) diff --git a/src/oss/langchain/middleware/built-in.mdx b/src/oss/langchain/middleware/built-in.mdx index ce1eb1e193..16167469cf 100644 --- a/src/oss/langchain/middleware/built-in.mdx +++ b/src/oss/langchain/middleware/built-in.mdx @@ -97,7 +97,7 @@ const agent = createAgent({ :::python -The `fraction` conditions below rely on a chat model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1`. If data are not available, use another condition or specify manually: +The `fraction` conditions for `trigger` and `keep` (shown below) rely on a chat model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1`. If data are not available, use another condition or specify manually: ```python custom_profile = { "max_input_tokens": 100_000, diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx index f38faf08b5..b38032260e 100644 --- a/src/oss/langchain/models.mdx +++ b/src/oss/langchain/models.mdx @@ -1217,7 +1217,7 @@ Model profile data allow applications to work around model capabilities dynamica 2. [Structured output](/oss/langchain/structured-output) strategies in `create_agent` can be inferred automatically (e.g., by checking support for native structured output features). 3. Model inputs can be gated based on supported [modalities](#multimodal) and maximum input tokens. - +#### Updating or overwriting profile data Model profile data can be changed if it is missing, stale, or incorrect. 
 **Option 1 (quick fix)**
@@ -1264,7 +1264,6 @@ For example, from [libs/partners/anthropic](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) in the LangChain monorepo:
 ```bash
 uv run --with langchain-model-profiles langchain-profiles refresh --provider anthropic --data-dir langchain_anthropic/data
 ```
-
 
 :::

From fc59e886f30283310c7502ca1a93fd22d5c347c8 Mon Sep 17 00:00:00 2001
From: Hunter Lovell
Date: Fri, 21 Nov 2025 17:34:53 -0700
Subject: [PATCH 7/7] add js model profile docs

---
 src/oss/langchain/middleware/built-in.mdx | 12 +++++++
 src/oss/langchain/models.mdx              | 44 ++++++++++++++++++++++-
 src/oss/langchain/structured-output.mdx   | 37 ++++++++++++------
 3 files changed, 80 insertions(+), 13 deletions(-)

diff --git a/src/oss/langchain/middleware/built-in.mdx b/src/oss/langchain/middleware/built-in.mdx
index 16167469cf..50e4495416 100644
--- a/src/oss/langchain/middleware/built-in.mdx
+++ b/src/oss/langchain/middleware/built-in.mdx
@@ -158,6 +158,18 @@ model = init_chat_model("...", profile=custom_profile)
 :::
 
 :::js
+
+The `fraction` conditions for `trigger` and `keep` (shown below) rely on a chat model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1.0`. If data are not available, use another condition or specify manually:
+```typescript
+const customProfile: ModelProfile = {
+  maxInputTokens: 100_000,
+  // ...
+};
+const model = await initChatModel("...", {
+  profile: customProfile,
+});
+```
+
 
   Model for generating summaries. Can be a model identifier string (e.g., `'openai:gpt-4o-mini'`) or a `BaseChatModel` instance.

diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx
index b38032260e..5d9a0d128d 100644
--- a/src/oss/langchain/models.mdx
+++ b/src/oss/langchain/models.mdx
@@ -1268,7 +1268,49 @@ uv run --with langchain-model-profiles --provider anthropic --data-dir langchain
 :::
 
 :::js
-LangChain chat models expose supported features and capabilities through a `.profile` accessor.
+LangChain chat models expose supported features and capabilities through a `.profile` property:
+```typescript
+model.profile;
+// {
+//   maxInputTokens: 400000,
+//   imageInputs: true,
+//   reasoningOutput: true,
+//   toolCalling: true,
+//   ...
+// }
+```
+Refer to the full set of fields in the [API reference](https://reference.langchain.com/javascript/interfaces/_langchain_core.language_models_profile.ModelProfile.html).
+
+Much of the model profile data is powered by the [models.dev](https://github.com/sst/models.dev) project, an open source initiative that provides model capability data. These data are augmented with additional fields for use with LangChain. These augmentations are kept aligned with the upstream project as it evolves.
+
+Model profile data allow applications to adapt to model capabilities dynamically. For example:
+1. [Summarization middleware](/oss/langchain/middleware/built-in#summarization) can trigger summarization based on a model's context window size.
+2. [Structured output](/oss/langchain/structured-output) strategies in `createAgent` can be inferred automatically (e.g., by checking support for native structured output features).
+3. Model inputs can be gated based on supported [modalities](#multimodal) and maximum input tokens.
+
+#### Updating or overwriting profile data
+Model profile data can be changed if it is missing, stale, or incorrect.
+
+**Option 1 (quick fix)**
+
+You can instantiate a chat model with any valid profile:
+```typescript
+const customProfile = {
+  maxInputTokens: 100_000,
+  toolCalling: true,
+  structuredOutput: true,
+  // ...
+};
+const model = await initChatModel("...", { profile: customProfile });
+```
+
+**Option 2 (fix data upstream)**
+
+The primary source for the data is the [models.dev](https://models.dev/) project. These data are merged with additional fields and overrides in LangChain [integration packages](/oss/javascript/integrations/providers/overview) and are shipped with those packages.
+
+Model profile data can be updated through the following process:
+1. (If needed) update the source data at [models.dev](https://models.dev/) through a pull request to its [repository on GitHub](https://github.com/sst/models.dev).
+2. (If needed) update additional fields and overrides in `langchain-<provider>/profiles.toml` through a pull request to the LangChain [integration package](/oss/javascript/integrations/providers/overview).
 
 :::
 
diff --git a/src/oss/langchain/structured-output.mdx b/src/oss/langchain/structured-output.mdx
index 9589dabd5f..a372f33221 100644
--- a/src/oss/langchain/structured-output.mdx
+++ b/src/oss/langchain/structured-output.mdx
@@ -64,22 +64,35 @@ const agent = createAgent({
 ```
 
 ## Response Format
-  Controls how the agent returns structured data. You can provide either a Zod object or JSON schema. By default, the agent uses a tool calling strategy, in which the output is created by an additional tool call. Certain models support native structured output, in which case the agent will use that strategy instead.
-  You can control the behavior by wrapping `ResponseFormat` in a `toolStrategy` or `providerStrategy` function call:
+Controls how the agent returns structured data. You can provide either a Zod object or JSON schema. By default, the agent uses a tool calling strategy, in which the output is created by an additional tool call. Certain models support native structured output, in which case the agent will use that strategy instead.
+
+You can control the behavior by wrapping `ResponseFormat` in a `toolStrategy` or `providerStrategy` function call:
 
-  ```ts
-  import { toolStrategy, providerStrategy } from "langchain";
-
-  const agent = createAgent({
-    // use a provider strategy if supported by the model
-    responseFormat: providerStrategy(z.object({ ... }))
-    // or enforce a tool strategy
-    responseFormat: toolStrategy(z.object({ ... }))
-  })
-  ```
+```ts
+import { toolStrategy, providerStrategy } from "langchain";
+
+const agent = createAgent({
+  // use a provider strategy if supported by the model
+  responseFormat: providerStrategy(z.object({ ... }))
+  // or enforce a tool strategy
+  responseFormat: toolStrategy(z.object({ ... }))
+})
+```
 
-  The structured response is returned in the `structuredResponse` key of the agent's final state.
+The structured response is returned in the `structuredResponse` key of the agent's final state.
+
+
+Support for native structured output features is read dynamically from the model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1.0`. If data are not available, specify a strategy explicitly or provide a profile manually:
+```typescript
+const customProfile: ModelProfile = {
+  structuredOutput: true,
+  // ...
+};
+const model = await initChatModel("...", { profile: customProfile });
+```
+If tools are specified, the model must support simultaneous use of tools and structured output.
+
 :::
 
 ## Provider strategy