src/oss/2-add-tools.mdx (4 additions & 4 deletions)
@@ -4,7 +4,7 @@ title: Add tools
To handle queries that your chatbot can't answer "from memory", integrate a web search tool. The chatbot can use this tool to find relevant information and provide better responses.

<Note>
-This tutorial builds on [Build a basic chatbot](./1-build-basic-chatbot).
+This tutorial builds on [Build a basic chatbot](/oss/1-build-basic-chatbot).
</Note>

## Prerequisites
@@ -152,11 +152,11 @@ The results are page summaries our chat bot can use to answer questions:
## 4. Define the graph

:::python
-For the `StateGraph` you created in the [first tutorial](./1-build-basic-chatbot), add `bind_tools` on the LLM. This lets the LLM know the correct JSON format to use if it wants to use the search engine.
+For the `StateGraph` you created in the [first tutorial](/oss/1-build-basic-chatbot), add `bind_tools` on the LLM. This lets the LLM know the correct JSON format to use if it wants to use the search engine.
:::

:::js
-For the `StateGraph` you created in the [first tutorial](./1-build-basic-chatbot), add `bindTools` on the LLM. This lets the LLM know the correct JSON format to use if it wants to use the search engine.
+For the `StateGraph` you created in the [first tutorial](/oss/1-build-basic-chatbot), add `bindTools` on the LLM. This lets the LLM know the correct JSON format to use if it wants to use the search engine.
:::

Let's first select our LLM:
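A hedged sketch of that flow in Python: select a chat model and bind a tool to it so the model knows the JSON schema to emit for tool calls. The `web_search` tool and model string below are illustrative placeholders, not the tutorial's exact code.

```python
# Minimal sketch: bind a tool to the chat model so it can emit structured tool calls.
from langchain.chat_models import init_chat_model
from langchain_core.tools import tool

@tool
def web_search(query: str) -> str:
    """Search the web for the query."""
    return "placeholder search results"  # a real implementation would call a search API

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")  # any chat model works here
llm_with_tools = llm.bind_tools([web_search])

# The bound model can now respond with tool calls instead of (or alongside) plain text.
response = llm_with_tools.invoke("What is LangGraph?")
print(response.tool_calls)
```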
@@ -698,4 +698,4 @@ To inspect all the steps your agent just took, check out this [LangSmith trace](
src/oss/3-add-memory.mdx (3 additions & 3 deletions)
@@ -1,14 +1,14 @@
---
title: Add memory
---
-The chatbot can now [use tools](./2-add-tools) to answer user questions, but it does not remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.
+The chatbot can now [use tools](/oss/2-add-tools) to answer user questions, but it does not remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.

LangGraph solves this problem through **persistent checkpointing**. If you provide a `checkpointer` when compiling the graph and a `thread_id` when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same `thread_id`, the graph loads its saved state, allowing the chatbot to pick up where it left off.

We will see later that **checkpointing** is _much_ more powerful than simple chat memory - it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But first, let's add checkpointing to enable multi-turn conversations.

<Note>
-This tutorial builds on [Add tools](./2-add-tools).
+This tutorial builds on [Add tools](/oss/2-add-tools).
</Note>

## 1. Create a `MemorySaver` checkpointer
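A minimal sketch of that pattern, assuming the `graph_builder` from the earlier tutorials; the thread ID and messages are illustrative.

```python
# Minimal sketch: compile with a checkpointer and reuse the same thread_id so the
# chatbot picks up prior context on the next turn.
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)  # graph_builder from the earlier tutorials

config = {"configurable": {"thread_id": "1"}}

# State is saved after each step and reloaded on the next call with the same thread_id.
graph.invoke({"messages": [{"role": "user", "content": "Hi, my name is Will."}]}, config)
graph.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, config)
```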
@@ -414,4 +414,4 @@ const graph = new StateGraph(State)
## Next steps

-In the next tutorial, you will [add human-in-the-loop to the chatbot](./4-human-in-the-loop) to handle situations where it may need guidance or verification before proceeding.
+In the next tutorial, you will [add human-in-the-loop to the chatbot](/oss/4-human-in-the-loop) to handle situations where it may need guidance or verification before proceeding.
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running to ensure that everything is running as intended.

-LangGraph's [persistence](../../concepts/persistence) layer supports **human-in-the-loop** workflows, allowing execution to pause and resume based on user feedback. The primary interface to this functionality is the [`interrupt`](../../how-tos/human_in_the_loop/add-human-in-the-loop) function. Calling `interrupt` inside a node will pause execution. Execution can be resumed, together with new input from a human, by passing in a [Command](../../concepts/low_level#command).
+LangGraph's [persistence](.././persistence) layer supports **human-in-the-loop** workflows, allowing execution to pause and resume based on user feedback. The primary interface to this functionality is the [`interrupt`](.././human_in_the_loop/add-human-in-the-loop) function. Calling `interrupt` inside a node will pause execution. Execution can be resumed, together with new input from a human, by passing in a [Command](.././low_level#command).

:::python
-`interrupt` is ergonomically similar to Python's built-in `input()`, [with some caveats](../../how-tos/human_in_the_loop/add-human-in-the-loop).
+`interrupt` is ergonomically similar to Python's built-in `input()`, [with some caveats](.././human_in_the_loop/add-human-in-the-loop).
:::

:::js
-`interrupt` is ergonomically similar to Node.js's built-in `readline.question()` function, [with some caveats](../../how-tos/human_in_the_loop/add-human-in-the-loop).
-`interrupt` is ergonomically similar to Node.js's built-in `readline.question()` function, [with some caveats](../../how-tos/human_in_the_loop/add-human-in-the-loop).
+`interrupt` is ergonomically similar to Node.js's built-in `readline.question()` function, [with some caveats](.././human_in_the_loop/add-human-in-the-loop).
+`interrupt` is ergonomically similar to Node.js's built-in `readline.question()` function, [with some caveats](.././human_in_the_loop/add-human-in-the-loop).
:::

<Note>
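To make the pause-and-resume flow concrete, here is a hedged Python sketch. It assumes a compiled `graph` with a checkpointer, as in this tutorial; the messages and resume payload are illustrative.

```python
# Minimal sketch: pause inside a tool with interrupt() and resume later with a Command.
from langchain_core.tools import tool
from langgraph.types import Command, interrupt

@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})  # execution pauses here
    return human_response["data"]

config = {"configurable": {"thread_id": "1"}}

# The first call runs until interrupt() and then stops, persisting progress.
graph.invoke({"messages": [{"role": "user", "content": "I need expert guidance."}]}, config)

# A later call with Command(resume=...) continues execution inside the tool.
graph.invoke(Command(resume={"data": "We suggest checking the LangGraph docs."}), config)
```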
@@ -150,7 +150,7 @@ async function chatbot(state: z.infer<typeof MessagesZodState>) {
:::

<Tip>
-For more information and examples of human-in-the-loop workflows, see [Human-in-the-loop](../../concepts/human_in_the_loop).
+For more information and examples of human-in-the-loop workflows, see [Human-in-the-loop](.././human_in_the_loop).
</Tip>

## 2. Compile the graph
@@ -353,7 +353,7 @@ snapshot.next;
    return human_response["data"]
```

-Similar to Python's built-in `input()` function, calling `interrupt` inside the tool will pause execution. Progress is persisted based on the [checkpointer](../../concepts/persistence#checkpointer-libraries); so if it is persisting with Postgres, it can resume at any time as long as the database is alive. In this example, it is persisting with the in-memory checkpointer and can resume any time if the Python kernel is running.
+Similar to Python's built-in `input()` function, calling `interrupt` inside the tool will pause execution. Progress is persisted based on the [checkpointer](.././persistence#checkpointer-libraries); so if it is persisting with Postgres, it can resume at any time as long as the database is alive. In this example, it is persisting with the in-memory checkpointer and can resume any time if the Python kernel is running.

:::

:::js
@@ -394,12 +394,12 @@ snapshot.next;
```

-Calling `interrupt` inside the tool will pause execution. Progress is persisted based on the [checkpointer](../../concepts/persistence#checkpointer-libraries); so if it is persisting with Postgres, it can resume at any time as long as the database is alive. In this example, it is persisting with the in-memory checkpointer and can resume any time if the JavaScript runtime is running.
+Calling `interrupt` inside the tool will pause execution. Progress is persisted based on the [checkpointer](.././persistence#checkpointer-libraries); so if it is persisting with Postgres, it can resume at any time as long as the database is alive. In this example, it is persisting with the in-memory checkpointer and can resume any time if the JavaScript runtime is running.

:::

## 5. Resume execution

-To resume execution, pass a [`Command`](../../concepts/low_level#command) object containing data expected by the tool. The format of this data can be customized based on needs.
+To resume execution, pass a [`Command`](.././low_level#command) object containing data expected by the tool. The format of this data can be customized based on needs.
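Before resuming, the paused thread can also be inspected; a minimal Python sketch, assuming the `graph` and `config` used in the steps above:

```python
# Minimal sketch: the snapshot shows which node is still pending while the
# interrupt is waiting for human input.
snapshot = graph.get_state(config)
print(snapshot.next)  # e.g. ('tools',) while the human_assistance interrupt is pending
```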
src/oss/5-customize-state.mdx (5 additions & 5 deletions)
@@ -4,7 +4,7 @@ title: Customize state
In this tutorial, you will add additional fields to the state to define complex behavior without relying on the message list. The chatbot will use its search tool to find specific information and forward them to a human for review.

<Note>
-This tutorial builds on [Add human-in-the-loop controls](./4-human-in-the-loop).
+This tutorial builds on [Add human-in-the-loop controls](/oss/4-human-in-the-loop).
</Note>

## 1. Add keys to the state
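A hedged sketch of what adding keys to the state looks like in Python; the `name` and `birthday` fields follow this tutorial's running example, but treat the exact shape as illustrative.

```python
# Minimal sketch: extend the messages-based state with extra keys so other
# graph nodes and tools can read and write them directly.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str
```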
@@ -49,7 +49,7 @@ Adding this information to the state makes it easily accessible by other graph n
## 2. Update the state inside the tool

:::python
-Now, populate the state keys inside of the `human_assistance` tool. This allows a human to review the information before it is stored in the state. Use [`Command`](../../concepts/low_level#using-inside-tools) to issue a state update from inside the tool.
+Now, populate the state keys inside of the `human_assistance` tool. This allows a human to review the information before it is stored in the state. Use [`Command`](.././low_level#using-inside-tools) to issue a state update from inside the tool.

```python
from langchain_core.messages import ToolMessage
@@ -97,7 +97,7 @@ def human_assistance(
:::

:::js
-Now, populate the state keys inside of the `humanAssistance` tool. This allows a human to review the information before it is stored in the state. Use [`Command`](../../concepts/low_level#using-inside-tools) to issue a state update from inside the tool.
+Now, populate the state keys inside of the `humanAssistance` tool. This allows a human to review the information before it is stored in the state. Use [`Command`](.././low_level#using-inside-tools) to issue a state update from inside the tool.

-Manual state updates will [generate a trace](https://smith.langchain.com/public/7ebb7827-378d-49fe-9f6c-5df0e90086c8/r) in LangSmith. If desired, they can also be used to [control human-in-the-loop workflows](../../how-tos/human_in_the_loop/add-human-in-the-loop). Use of the `interrupt` function is generally recommended instead, as it allows data to be transmitted in a human-in-the-loop interaction independently of state updates.
+Manual state updates will [generate a trace](https://smith.langchain.com/public/7ebb7827-378d-49fe-9f6c-5df0e90086c8/r) in LangSmith. If desired, they can also be used to [control human-in-the-loop workflows](.././human_in_the_loop/add-human-in-the-loop). Use of the `interrupt` function is generally recommended instead, as it allows data to be transmitted in a human-in-the-loop interaction independently of state updates.

**Congratulations!** You've added custom keys to the state to facilitate a more complex workflow, and learned how to generate state updates from inside tools.
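A hedged Python sketch of the pattern described above: the tool gathers human review via `interrupt` and returns a `Command` whose `update` writes the reviewed values, plus a `ToolMessage`, into the state. Field names follow the `name`/`birthday` example; the review logic is illustrative, not the tutorial's full code.

```python
# Minimal sketch: issue a state update from inside a tool by returning a Command.
from typing import Annotated
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command, interrupt

@tool
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> Command:
    """Request human review of the looked-up information."""
    human_response = interrupt({"question": "Is this correct?", "name": name, "birthday": birthday})
    # Write whatever the human approved into the graph state, and record a
    # ToolMessage so the model sees that the tool call completed.
    return Command(update={
        "name": human_response.get("name", name),
        "birthday": human_response.get("birthday", birthday),
        "messages": [ToolMessage("Reviewed by a human.", tool_call_id=tool_call_id)],
    })
```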
@@ -651,4 +651,4 @@ const graph = new StateGraph(State)
## Next steps

-There's one more concept to review before finishing the LangGraph basics tutorials: connecting `checkpointing` and `state updates` to [time travel](./6-time-travel).
+There's one more concept to review before finishing the LangGraph basics tutorials: connecting `checkpointing` and `state updates` to [time travel](/oss/6-time-travel).
src/oss/6-time-travel.mdx (4 additions & 4 deletions)
@@ -1,14 +1,14 @@
---
title: Time travel
---
-In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. [Memory](./3-add-memory) and a [human-in-the-loop](./4-human-in-the-loop) enable checkpoints in the graph state and control future responses.
+In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. [Memory](/oss/3-add-memory) and a [human-in-the-loop](/oss/4-human-in-the-loop) enable checkpoints in the graph state and control future responses.

What if you want a user to be able to start from a previous response and explore a different outcome? Or what if you want users to be able to rewind your chatbot's work to fix mistakes or try a different strategy, something that is common in applications like autonomous software engineers?

You can create these types of experiences using LangGraph's built-in **time travel** functionality.

<Note>
-This tutorial builds on [Customize state](./5-customize-state).
+This tutorial builds on [Customize state](/oss/5-customize-state).
</Note>

## 1. Rewind your graph
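For orientation, a hedged Python sketch of rewinding, assuming the graph and thread from the previous tutorials; the checkpoint-selection condition is illustrative.

```python
# Minimal sketch: walk the checkpoint history, pick an earlier snapshot, and
# resume from it by streaming with that snapshot's config.
config = {"configurable": {"thread_id": "1"}}

to_replay = None
for state in graph.get_state_history(config):
    print(len(state.values["messages"]), "messages; next:", state.next)
    if len(state.values["messages"]) == 6:  # pick whichever checkpoint you want to rewind to
        to_replay = state

# to_replay.config carries the checkpoint_id, so execution replays from that
# point forward rather than from the latest state.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```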
@@ -602,6 +602,6 @@ The graph resumed execution from the `tools` node. You can tell this is the case
Take your LangGraph journey further by exploring deployment and advanced features:

-* **[LangGraph Server quickstart](../../tutorials/langgraph-platform/local-server)**: Launch a LangGraph server locally and interact with it using the REST API and LangGraph Studio Web UI.
+* **[LangGraph Server quickstart](../../langgraph-platform/local-server)**: Launch a LangGraph server locally and interact with it using the REST API and LangGraph Studio Web UI.
* **[LangGraph Platform quickstart](../../cloud/quick_start)**: Deploy your LangGraph app using LangGraph Platform.
-* **[LangGraph Platform concepts](../../concepts/langgraph_platform)**: Understand the foundational concepts of the LangGraph Platform.
+* **[LangGraph Platform concepts](../../langgraph-platform/concepts)**: Understand the foundational concepts of the LangGraph Platform.
A LangGraph [`StateGraph`](https://langchain-ai.github.io/langgraph/reference/graphs/#langgraph.graph.state.StateGraph) received concurrent updates to its state from multiple nodes to a state property that doesn't
support it.

-One way this can occur is if you are using a [fanout](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/)
+One way this can occur is if you are using a [fanout](/oss/graph-api#map-reduce-and-the-send-api)
or other parallel execution in your graph and you have defined a graph like this:
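As a hedged illustration (the node and key names below are invented, not the page's own example): two nodes fan out from `START` and both write to the same key in one step. With a plain key the concurrent writes conflict; annotating the key with a reducer such as `operator.add` lets them be merged.

```python
# Minimal sketch: a static fanout where two parallel nodes update the same key.
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # results: list                          # plain key -> concurrent updates conflict
    results: Annotated[list, operator.add]   # reducer merges updates from parallel nodes

def node_a(state: State):
    return {"results": ["a"]}

def node_b(state: State):
    return {"results": ["b"]}

builder = StateGraph(State)
builder.add_node("a", node_a)
builder.add_node("b", node_b)
builder.add_edge(START, "a")
builder.add_edge(START, "b")  # both nodes run in the same superstep
builder.add_edge("a", END)
builder.add_edge("b", END)
graph = builder.compile()
print(graph.invoke({"results": []}))  # {'results': ['a', 'b']}
```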
src/oss/MULTIPLE_SUBGRAPHS.mdx (1 addition & 1 deletion)
@@ -17,4 +17,4 @@ The following may help resolve this error:
* If you don't need to interrupt/resume from a subgraph, pass `checkpointer: false` when compiling it like this: `.compile({ checkpointer: false })`
:::

-* Don't imperatively call graphs multiple times in the same node, and instead use the [`Send`](https://langchain-ai.github.io/langgraph/concepts/low_level/#send) API.
+* Don't imperatively call graphs multiple times in the same node, and instead use the [`Send`](/oss/low-level#send) API.
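A hedged Python sketch of the `Send`-based alternative (names invented for illustration): instead of invoking a subgraph in a loop inside one node, return one `Send` per item from a conditional edge so each payload runs as its own task.

```python
# Minimal sketch: dynamic fanout with Send instead of imperative repeated calls.
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class State(TypedDict):
    items: list
    results: Annotated[list, operator.add]

def worker(payload: dict):
    # Each Send delivers its own payload dict to this node.
    return {"results": [payload["item"].upper()]}

def fan_out(state: State):
    return [Send("worker", {"item": item}) for item in state["items"]]

builder = StateGraph(State)
builder.add_node("worker", worker)
builder.add_conditional_edges(START, fan_out, ["worker"])
builder.add_edge("worker", END)
graph = builder.compile()
print(graph.invoke({"items": ["a", "b"], "results": []}))  # results: ['A', 'B']
```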