diff --git a/src/oss/langchain/middleware/built-in.mdx b/src/oss/langchain/middleware/built-in.mdx
index 16b68a1f5c..16167469cf 100644
--- a/src/oss/langchain/middleware/built-in.mdx
+++ b/src/oss/langchain/middleware/built-in.mdx
@@ -96,6 +96,16 @@ const agent = createAgent({
 :::python
+
+The `fraction` conditions for `trigger` and `keep` (shown below) rely on a chat model's [profile data](/oss/langchain/models#model-profiles), available with `langchain>=1.1`. If profile data are not available, use another condition or specify a profile manually:
+```python
+custom_profile = {
+    "max_input_tokens": 100_000,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+
 Model for generating summaries. Can be a model identifier string (e.g., `'openai:gpt-4o-mini'`) or a `BaseChatModel` instance. See @[`init_chat_model`][init_chat_model(model)] for more information.
diff --git a/src/oss/langchain/models.mdx b/src/oss/langchain/models.mdx
index fabb163826..b38032260e 100644
--- a/src/oss/langchain/models.mdx
+++ b/src/oss/langchain/models.mdx
@@ -1190,6 +1190,88 @@ LangChain supports all major model providers, including OpenAI, Anthropic, Googl
 ## Advanced topics
 
+### Model profiles
+
+This is a beta feature. The format of model profiles is subject to change.
+
+Model profiles require `langchain>=1.1`.
+
+:::python
+LangChain chat models expose supported features and capabilities through a `.profile` attribute:
+```python
+model.profile
+# {
+#     "max_input_tokens": 400000,
+#     "image_inputs": True,
+#     "reasoning_output": True,
+#     "tool_calling": True,
+#     ...
+# }
+```
+Refer to the full set of fields in the [API reference](https://reference.langchain.com/python/langchain_core/language_models/).
+
+Much of the model profile data is powered by the [models.dev](https://github.com/sst/models.dev) project, an open source initiative that provides model capability data. These data are augmented with additional fields for use with LangChain. These augmentations are kept aligned with the upstream project as it evolves.
+
+Model profile data let applications adapt to model capabilities dynamically. For example:
+1. [Summarization middleware](/oss/langchain/middleware/built-in#summarization) can trigger summarization based on a model's context window size.
+2. [Structured output](/oss/langchain/structured-output) strategies in `create_agent` can be inferred automatically (e.g., by checking support for native structured output features).
+3. Model inputs can be gated based on supported [modalities](#multimodal) and maximum input tokens.
+
+#### Updating or overwriting profile data
+Model profile data can be changed if they are missing, stale, or incorrect.
+
+**Option 1 (quick fix)**
+
+You can instantiate a chat model with any valid profile:
+```python
+custom_profile = {
+    "max_input_tokens": 100_000,
+    "tool_calling": True,
+    "structured_output": True,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+
+The `profile` is also a regular `dict` and can be updated in place. If the model instance is shared, consider using
+```python
+new_profile = model.profile | {"key": "value"}
+model.model_copy(update={"profile": new_profile})
+```
+to avoid mutating shared state.
+
+**Option 2 (fix data upstream)**
+
+The primary source for the data is the [models.dev](https://models.dev/) project. These data are merged with additional fields and overrides in LangChain [integration packages](/oss/python/integrations/providers/overview) and are shipped with those packages.
+
+Model profile data can be updated through the following process:
+1. (If needed) update the source data at [models.dev](https://models.dev/) through a pull request to its [repository on GitHub](https://github.com/sst/models.dev).
+2. (If needed) update additional fields and overrides in `langchain_<provider>/data/profile_augmentations.toml` through a pull request to the LangChain [integration package](/oss/python/integrations/providers/overview).
+3. Use the [langchain-model-profiles](https://pypi.org/project/langchain-model-profiles/) CLI tool to pull the latest data from [models.dev](https://models.dev/), merge in the augmentations, and update the profile data:
+
+```bash
+pip install langchain-model-profiles
+```
+```bash
+langchain-profiles refresh --provider <provider> --data-dir <data-dir>
+```
+That command will:
+- Download the latest data for `<provider>` from models.dev
+- Merge in augmentations from `profile_augmentations.toml` in `<data-dir>`
+- Write the merged profiles to `profiles.py` in `<data-dir>`
+
+For example, from [libs/partners/anthropic](https://github.com/langchain-ai/langchain/tree/master/libs/partners/anthropic) in the LangChain monorepo:
+```bash
+uv run --with langchain-model-profiles langchain-profiles refresh --provider anthropic --data-dir langchain_anthropic/data
+```
+
+:::
+
+:::js
+LangChain chat models expose supported features and capabilities through a `.profile` accessor.
+
+:::
+
 ### Multimodal
 
 Certain models can process and return non-textual data such as images, audio, and video. You can pass non-textual data to a model by providing [content blocks](/oss/langchain/messages#message-content).
diff --git a/src/oss/langchain/structured-output.mdx b/src/oss/langchain/structured-output.mdx
index 71eb3c34a6..9589dabd5f 100644
--- a/src/oss/langchain/structured-output.mdx
+++ b/src/oss/langchain/structured-output.mdx
@@ -29,8 +29,20 @@ Controls how the agent returns structured data:
 - **`None`**: No structured output
 
 When a schema type is provided directly, LangChain automatically chooses:
-- `ProviderStrategy` for models supporting native structured output (e.g. [OpenAI](/oss/integrations/providers/openai), [Grok](/oss/integrations/providers/xai))
-- `ToolStrategy` for all other models
+- `ProviderStrategy` for models supporting native structured output (e.g. [OpenAI](/oss/integrations/providers/openai), [Anthropic](/oss/integrations/providers/anthropic), or [Grok](/oss/integrations/providers/xai)).
+- `ToolStrategy` for all other models.
+
+Support for native structured output features is read dynamically from the model's [profile data](/oss/langchain/models#model-profiles) if using `langchain>=1.1`. If profile data are not available, specify a profile manually:
+```python
+custom_profile = {
+    "structured_output": True,
+    # ...
+}
+model = init_chat_model("...", profile=custom_profile)
+```
+If tools are specified, the model must support simultaneous use of tools and structured output.
+
 The structured response is returned in the `structured_response` key of the agent's final state.
 :::
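Taken together, the profile-driven behavior these changes document can be sketched in plain Python. The helpers `choose_strategy` and `fits_context` below are hypothetical, not LangChain APIs; the profile fields are the ones shown for `model.profile`, and the selection logic mirrors the documented automatic choice between `ProviderStrategy` and `ToolStrategy`:

```python
# Illustrative sketch only: `choose_strategy` and `fits_context` are
# hypothetical helpers, not LangChain APIs. The dict stands in for a
# model's `.profile` (langchain>=1.1) so the sketch runs on its own.
custom_profile = {
    "max_input_tokens": 100_000,
    "tool_calling": True,
    "structured_output": True,
}

def choose_strategy(profile: dict) -> str:
    """Mirror the documented automatic structured-output choice."""
    if profile.get("structured_output"):
        return "ProviderStrategy"  # native structured output supported
    return "ToolStrategy"          # fall back to tool calling

def fits_context(profile: dict, prompt_tokens: int) -> bool:
    """Gate inputs on the context window when the field is known."""
    limit = profile.get("max_input_tokens")
    return limit is None or prompt_tokens <= limit
```

In a real application these fields would come from `model.profile` after `init_chat_model(...)`; using a plain dict here keeps the sketch runnable without provider credentials.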