+ "details": "## Summary\n\nA serialization injection vulnerability exists in LangChain's `dumps()` and `dumpd()` functions. The functions do not escape dictionaries with `'lc'` keys when serializing free-form dictionaries. The `'lc'` key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.\n\n### Attack surface\n\nThe core vulnerability was in `dumps()` and `dumpd()`: these functions failed to escape user-controlled dictionaries containing `'lc'` keys. When this unescaped data was later deserialized via `load()` or `loads()`, the injected structures were treated as legitimate LangChain objects rather than plain user data.\n\nThis escaping bug enabled several attack vectors:\n\n1. **Injection via user data**: Malicious LangChain object structures could be injected through user-controlled fields like `metadata`, `additional_kwargs`, or `response_metadata`\n2. **Class instantiation within trusted namespaces**: Injected manifests could instantiate any `Serializable` subclass, but only within the pre-approved trusted namespaces (`langchain_core`, `langchain`, `langchain_community`). This includes classes with side effects in `__init__` (network calls, file operations, etc.). Note that namespace validation was already enforced before this patch, so arbitrary classes outside these trusted namespaces could not be instantiated.\n\n### Security hardening\n\nThis patch fixes the escaping bug in `dumps()` and `dumpd()` and introduces new restrictive defaults in `load()` and `loads()`: allowlist enforcement via `allowed_objects=\"core\"` (restricted to [serialization mappings](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/load/mapping.py)), `secrets_from_env` changed from `True` to `False`, and default Jinja2 template blocking via `init_validator`. These are breaking changes for some use cases.\n\n## Who is affected?\n\nApplications are vulnerable if they:\n\n1. **Use `astream_events(version=\"v1\")`** — The v1 implementation internally uses vulnerable serialization. Note: `astream_events(version=\"v2\")` is not vulnerable.\n2. **Use `Runnable.astream_log()`** — This method internally uses vulnerable serialization for streaming outputs.\n3. **Call `dumps()` or `dumpd()` on untrusted data, then deserialize with `load()` or `loads()`** — Trusting your own serialization output makes you vulnerable if user-controlled data (e.g., from LLM responses, metadata fields, or user inputs) contains `'lc'` key structures.\n4. **Deserialize untrusted data with `load()` or `loads()`** — Directly deserializing untrusted data that may contain injected `'lc'` structures.\n5. **Use `RunnableWithMessageHistory`** — Internal serialization in message history handling.\n6. **Use `InMemoryVectorStore.load()`** to deserialize untrusted documents.\n7. Load untrusted generations from cache using **`langchain-community` caches**.\n8. Load untrusted manifests from the LangChain Hub via **`hub.pull`**.\n9. Use **`StringRunEvaluatorChain`** on untrusted runs.\n10. Use **`create_lc_store`** or **`create_kv_docstore`** with untrusted documents.\n11. Use **`MultiVectorRetriever`** with byte stores containing untrusted documents.\n12. 
## Impact

Attackers who control serialized data can extract environment variable secrets by injecting `{"lc": 1, "type": "secret", "id": ["ENV_VAR"]}`, which loads the named environment variable during deserialization (when `secrets_from_env=True`, the old default). They can also inject constructor structures to instantiate any class within the trusted namespaces with attacker-controlled parameters, potentially triggering side effects such as network calls or file operations.

Key severity factors:

- Affects the serialization path: applications trusting their own serialization output are vulnerable
- Enables secret extraction when combined with `secrets_from_env=True` (the old default)
- LLM responses in `additional_kwargs` can be controlled via prompt injection
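For illustration, an injected constructor structure has roughly the following shape (a sketch based on LangChain's `lc`/`type`/`id`/`kwargs` serialization layout; the class path and constructor arguments shown here are only examples):

```python
# Hypothetical attacker-controlled value smuggled into a free-form field such as metadata.
# On vulnerable versions, dumps()/dumpd() leave it unescaped, so a later load()/loads()
# treats it as a serialized LangChain object and calls the constructor with these kwargs.
injected_constructor = {
    "lc": 1,
    "type": "constructor",
    # any Serializable subclass within the trusted namespaces
    "id": ["langchain_core", "prompts", "prompt", "PromptTemplate"],
    "kwargs": {
        "template": "{{ payload }}",
        "input_variables": ["payload"],
        "template_format": "jinja2",  # e.g. forcing a Jinja2 template, which can execute code
    },
}
```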
## Exploit example

```python
from langchain_core.load import dumps, loads
import os

# Attacker injects a secret structure into user-controlled data
attacker_dict = {
    "user_data": {
        "lc": 1,
        "type": "secret",
        "id": ["OPENAI_API_KEY"]
    }
}

serialized = dumps(attacker_dict)  # Bug: does NOT escape the 'lc' key

os.environ["OPENAI_API_KEY"] = "sk-secret-key-12345"
deserialized = loads(serialized, secrets_from_env=True)

print(deserialized["user_data"])  # "sk-secret-key-12345" - SECRET LEAKED!
```

## Security hardening changes (breaking changes)

This patch introduces three breaking changes to `load()` and `loads()`:

1. **New `allowed_objects` parameter** (defaults to `'core'`): Enforces an allowlist of classes that can be deserialized. The `'all'` option corresponds to the list of objects [specified in `mapping.py`](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/load/mapping.py), while the `'core'` option limits deserialization to objects within `langchain_core`. We recommend that users explicitly specify which objects they want to allow for serialization/deserialization.
2. **`secrets_from_env` default changed from `True` to `False`**: Disables automatic loading of secrets from environment variables.
3. **New `init_validator` parameter** (defaults to `default_init_validator`): Blocks Jinja2 templates by default.

## Migration guide

### No changes needed for most users

If you're deserializing standard LangChain types (messages, documents, prompts, trusted partner integrations like `ChatOpenAI`, `ChatAnthropic`, etc.), your code will work without changes:

```python
from langchain_core.load import load

# Uses the default allowlist from the serialization mappings
obj = load(serialized_data)
```

### For custom classes

If you're deserializing custom classes not in the serialization mappings, add them to the allowlist:

```python
from langchain_core.load import load
from my_package import MyCustomClass

# Specify the classes you need
obj = load(serialized_data, allowed_objects=[MyCustomClass])
```

### For Jinja2 templates

Jinja2 templates are now blocked by default because they can execute arbitrary code. If you need Jinja2 templates, pass `init_validator=None`:

```python
from langchain_core.load import load
from langchain_core.prompts import PromptTemplate

obj = load(
    serialized_data,
    allowed_objects=[PromptTemplate],
    init_validator=None,
)
```

> [!WARNING]
> Only disable `init_validator` if you trust the serialized data. Jinja2 templates can execute arbitrary Python code.

### For secrets from environment

`secrets_from_env` now defaults to `False`. If you need to load secrets from environment variables:

```python
from langchain_core.load import load

obj = load(serialized_data, secrets_from_env=True)
```

## Credits

* The `dumps()` escaping bug was reported by @yardenporat
* The security hardening changes were made in response to findings from @0xn3va and @VladimirEliTokarev