-
@codekiln Hello! I'm here to assist you with your issue. To correctly identify or test configurable IDs, you can compare a candidate ID against the `id` attributes of the `ConfigurableField` definitions you declared. Here's an example of how you can define and test configurable IDs:

```python
from langchain_core.runnables.utils import (
    ConfigurableField,
    ConfigurableFieldMultiOption,
    ConfigurableFieldSingleOption,
)

# Define some configurable fields
field1 = ConfigurableField(id="field1", name="Field 1", description="This is field 1")
field2 = ConfigurableFieldSingleOption(id="field2", options={"opt1": 1, "opt2": 2}, default="opt1", name="Field 2")
field3 = ConfigurableFieldMultiOption(id="field3", options={"optA": "A", "optB": "B"}, default=["optA"], name="Field 3")

# Function to test if a given ID is in the list of configurable fields
def is_configurable_id(configurable_fields, test_id):
    return any(field.id == test_id for field in configurable_fields)

# List of configurable fields
configurable_fields = [field1, field2, field3]

# Test the function
test_id = "field2"
print(is_configurable_id(configurable_fields, test_id))  # Output: True

test_id = "field4"
print(is_configurable_id(configurable_fields, test_id))  # Output: False
```

In addition, you can refer to the unit tests provided in LangChain's test suite to see how configurable fields are tested in practice. For example, the `test_configurable_fields` test:

```python
# Imports added here so the excerpt runs standalone; in the LangChain repo this
# test lives in the core unit-test suite.
from langchain_core.language_models.fake import FakeListLLM
from langchain_core.prompt_values import StringPromptValue
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField


def test_configurable_fields() -> None:
    fake_llm = FakeListLLM(responses=["a"])  # str -> List[List[str]]
    assert fake_llm.invoke("...") == "a"

    fake_llm_configurable = fake_llm.configurable_fields(
        responses=ConfigurableField(
            id="llm_responses",
            name="LLM Responses",
            description="A list of fake responses for this LLM",
        )
    )
    assert fake_llm_configurable.invoke("...") == "a"

    assert fake_llm_configurable.config_schema().schema() == {
        "title": "RunnableConfigurableFieldsConfig",
        "type": "object",
        "properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
        "definitions": {
            "Configurable": {
                "title": "Configurable",
                "type": "object",
                "properties": {
                    "llm_responses": {
                        "title": "LLM Responses",
                        "description": "A list of fake responses for this LLM",
                        "default": ["a"],
                        "type": "array",
                        "items": {"type": "string"},
                    }
                },
            }
        },
    }

    fake_llm_configured = fake_llm_configurable.with_config(
        configurable={"llm_responses": ["b"]}
    )
    assert fake_llm_configured.invoke("...") == "b"

    prompt = PromptTemplate.from_template("Hello, {name}!")
    assert prompt.invoke({"name": "John"}) == StringPromptValue(text="Hello, John!")

    prompt_configurable = prompt.configurable_fields(
        template=ConfigurableField(
            id="prompt_template",
            name="Prompt Template",
            description="The prompt template for this chain",
        )
    )
    assert prompt_configurable.invoke({"name": "John"}) == StringPromptValue(
        text="Hello, John!"
    )
```

This example shows how to create configurable fields for a fake LLM and a prompt template, and how to test their configurations. This approach ensures that you can correctly identify and test configurable IDs [1][2].

Additionally, the following example shows how to dynamically configure a chat model at invocation time:

```python
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI
model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)

# This creates a RunnableConfigurableFields for a chat model.
# When invoking the created RunnableSequence, you can pass in a
# value for your ConfigurableField's id, which in this case
# will change the temperature.
prompt = PromptTemplate.from_template("Pick a random number above {x}")
chain = prompt | model
chain.invoke({"x": 0})
chain.invoke({"x": 0}, config={"configurable": {"temperature": 0.9}}) This example demonstrates how to dynamically configure a chat model's temperature using The |
-
I haven't had time to create a detailed bug report yet, and I don't quite have time for that now, but I need to write everything out before I forget; it's been about two weeks. I stumbled on a behavior that was, for me, astonishing if not also a genuine bug: `RunnableLambda.deps` only looks for non-locals, so any module-level chains that are referenced for routing in the function passed to the `RunnableLambda` are not detected as dependencies (and therefore their configurables don't surface in the outer chain's `config_specs`).

My team only detected this difference because a configurable chain was working with a particular configuration in our chain integration tests, but wouldn't work with the same configuration in our langserve endpoint integration tests. The reason ended up being that the `APIHandler.config_keys` parameter ends up getting used in […]. The workaround we adopted was to use […].
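To make the shape of this concrete, here's a minimal sketch of the kind of setup where we saw it (the fake model and all names are just illustrative, and the exact behavior may depend on your langchain-core version):

```python
# Illustrative sketch only: a module-level configurable chain routed to from a
# RunnableLambda. The point of interest is what router.config_specs reports
# versus what invoke() actually does with the configurable.
from langchain_core.language_models.fake import FakeListLLM
from langchain_core.runnables import ConfigurableField, RunnableLambda

# Module-level ("global") chain with a ConfigurableField, as in our codebase.
module_level_chain = FakeListLLM(responses=["a"]).configurable_fields(
    responses=ConfigurableField(id="llm_responses", name="LLM Responses")
)


def route(_input):
    # References the module-level chain; RunnableLambda.deps inspects this
    # function for referenced runnables, which is where the surprise was.
    return module_level_chain


router = RunnableLambda(route)

# Our unit tests asserted on ids like this; through the RunnableLambda the
# routed chain's "llm_responses" id was missing from the list for us.
print([spec.id for spec in router.config_specs])

# Invoking with the configurable still worked, because the config is passed
# through to the returned runnable at runtime regardless of config_specs.
print(router.invoke("hi", config={"configurable": {"llm_responses": ["b"]}}))
```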
-
UPDATE - please see the detailed pre-bug report below
Example Code
Description
I have a `RunnableLambda` which routes between different chains, each of which may be an instance of a chain with a `ConfigurableField` declared on a call to `.configurable_fields(...)`, e.g. a `RunnableConfigurableFields`.

I've been using `<chain>.config_specs` as a way to test in my unit tests whether a chain has the expected configurables. Now that I've introduced routing with a `RunnableLambda`, I've discovered that it breaks these tests, because I can no longer detect from the chain reference which fields are configurable inside of the routed chains.
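For reference, the kind of `config_specs` unit-test check I mean looks roughly like this (a sketch with made-up names and a fake model, not our real test):

```python
# Sketch: assert that a chain exposes the expected configurable ids.
from langchain_core.language_models.fake import FakeListLLM
from langchain_core.runnables import ConfigurableField

my_configurable_chain = FakeListLLM(responses=["a"]).configurable_fields(
    responses=ConfigurableField(id="llm_responses", name="LLM Responses")
)


def test_chain_has_expected_configurables() -> None:
    configurable_ids = {spec.id for spec in my_configurable_chain.config_specs}
    assert "llm_responses" in configurable_ids
```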
`.invoke()` does return the expected configured output, even though the chain itself does not contain the key for the configurable.

So as a result, I have these questions:
System Info
```shell
(langchainy) ~/dev/langchain-playground git:[main]
pip freeze | grep langchain
langchain==0.2.6
langchain-aws==0.1.11
langchain-community==0.2.6
langchain-core==0.2.22
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
-e git+ssh://git@github.com/langchain-ai/langgraph.git@5232ea260606b0a5cdba2cf31bacdd4ab15d5a2c#egg=langgraph
```