Commit 748140e

code highlighting (#186)
1 parent 1048e7a commit 748140e

Large commits have some content hidden by default; not every changed file is shown below.

45 files changed: +464 −1137 lines changed
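The change in every file below is mechanical: `hl_lines="…"` fence attributes and `# highlight-next-line` / `// highlight-next-line` comments are replaced by a single `{highlight={…}}` fence attribute. A rough sketch of the fence-attribute half of that conversion is shown here; the function name and regex are illustrative, not the tooling actually used for this commit.

```python
import re

# Convert a fence such as ```python hl_lines="8-9 20-30" title="x.py"
# into ```python {highlight={8-9,20-30}} title="x.py".
# Hypothetical helper -- not the script used in this commit.
def convert_fence(line: str) -> str:
    m = re.match(r'^(```\w+)\s+hl_lines="([^"]+)"(.*)$', line)
    if not m:
        return line  # not an hl_lines fence; leave untouched
    fence, spec, rest = m.groups()
    ranges = ",".join(spec.split())  # "8-9 20-30" -> "8-9,20-30"
    return f'{fence} {{highlight={{{ranges}}}}}{rest}'
```

Run against the old fence line of the first hunk below, this yields exactly the `+` line that replaces it.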

src/langgraph-platform/add-auth-server.mdx

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Now you'll upgrade your authentication to validate real JWT tokens from Supabase
 
 Update `src/security/auth.py` to implement this:
 
-```python hl_lines="8-9 20-30" title="src/security/auth.py"
+```python {highlight={8-9,20-30}} title="src/security/auth.py"
 import os
 import httpx
 from langgraph_sdk import Auth

src/langgraph-platform/add-human-in-the-loop.mdx

Lines changed: 15 additions & 32 deletions
@@ -8,9 +8,8 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 
 <Tabs>
 <Tab title="Python">
-```python
+```python {highlight={2,34}}
 from langgraph_sdk import get_client
-# highlight-next-line
 from langgraph_sdk.schema import Command
 client = get_client(url=<DEPLOYMENT_URL>)
 
@@ -43,18 +42,17 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 print(await client.runs.wait(
     thread_id,
     assistant_id,
-    # highlight-next-line
     command=Command(resume="Edited text") # (3)!
 ))
 # > {'some_text': 'Edited text'}
-```
+```
 
 1. The graph is invoked with some initial state.
 2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
 3. The graph is resumed with a `Command(resume=...)`, injecting the human's input and continuing execution.
 </Tab>
 <Tab title="JavaScript">
-```js
+```javascript {highlight={32}}
 import { Client } from "@langchain/langgraph-sdk";
 const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
 
@@ -86,11 +84,10 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 console.log(await client.runs.wait(
     threadID,
     assistantID,
-    # highlight-next-line
     { command: { resume: "Edited text" }} # (3)!
 ));
 # > {'some_text': 'Edited text'}
-```
+```
 
 1. The graph is invoked with some initial state.
 2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
@@ -138,21 +135,19 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 This is an example graph you can run in the LangGraph API server.
 See [LangGraph Platform quickstart](/langgraph-platform/deployment-quickstart) for more details.
 
-```python
+```python {highlight={7,13}}
 from typing import TypedDict
 import uuid
 
 from langgraph.checkpoint.memory import InMemorySaver
 from langgraph.constants import START
 from langgraph.graph import StateGraph
-# highlight-next-line
 from langgraph.types import interrupt, Command
 
 class State(TypedDict):
     some_text: str
 
 def human_node(state: State):
-    # highlight-next-line
     value = interrupt( # (1)!
         {
             "text_to_revise": state["some_text"] # (2)!
@@ -169,7 +164,7 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 graph_builder.add_edge(START, "human_node")
 
 graph = graph_builder.compile()
-```
+```
 
 1. `interrupt(...)` pauses execution at `human_node`, surfacing the given payload to a human.
 2. Any JSON serializable value can be passed to the `interrupt` function. Here, a dict containing the text to revise.
@@ -180,9 +175,8 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 
 <Tabs>
 <Tab title="Python">
-```python
+```python {highlight={2,34}}
 from langgraph_sdk import get_client
-# highlight-next-line
 from langgraph_sdk.schema import Command
 client = get_client(url=<DEPLOYMENT_URL>)
 
@@ -215,18 +209,17 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 print(await client.runs.wait(
     thread_id,
     assistant_id,
-    # highlight-next-line
     command=Command(resume="Edited text") # (3)!
 ))
 # > {'some_text': 'Edited text'}
-```
+```
 
 1. The graph is invoked with some initial state.
 2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
 3. The graph is resumed with a `Command(resume=...)`, injecting the human's input and continuing execution.
 </Tab>
 <Tab title="JavaScript">
-```js
+```javascript {highlight={32}}
 import { Client } from "@langchain/langgraph-sdk";
 const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
 
@@ -258,11 +251,10 @@ To review, edit, and approve tool calls in an agent or workflow, use LangGraph's
 console.log(await client.runs.wait(
     threadID,
     assistantID,
-    # highlight-next-line
     { command: { resume: "Edited text" }} # (3)!
 ));
 # > {'some_text': 'Edited text'}
-```
+```
 
 1. The graph is invoked with some initial state.
 2. When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
@@ -317,12 +309,9 @@ Static interrupts (also known as static breakpoints) are triggered either before
 
 You can set static interrupts by specifying `interrupt_before` and `interrupt_after` at compile time:
 
-```python
-# highlight-next-line
+```python {highlight={1,2,3}}
 graph = graph_builder.compile( # (1)!
-    # highlight-next-line
     interrupt_before=["node_a"], # (2)!
-    # highlight-next-line
     interrupt_after=["node_b", "node_c"], # (3)!
 )
 ```
@@ -335,38 +324,32 @@ Alternatively, you can set static interrupts at run time:
 
 <Tabs>
 <Tab title="Python">
-```python
-# highlight-next-line
+```python {highlight={1,5,6}}
 await client.runs.wait( # (1)!
     thread_id,
     assistant_id,
     inputs=inputs,
-    # highlight-next-line
     interrupt_before=["node_a"], # (2)!
-    # highlight-next-line
     interrupt_after=["node_b", "node_c"] # (3)!
 )
-```
+```
 
 1. `client.runs.wait` is called with the `interrupt_before` and `interrupt_after` parameters. This is a run-time configuration and can be changed for every invocation.
 2. `interrupt_before` specifies the nodes where execution should pause before the node is executed.
 3. `interrupt_after` specifies the nodes where execution should pause after the node is executed.
 </Tab>
 <Tab title="JavaScript">
-```js
-// highlight-next-line
+```javascript {highlight={1,6,7}}
 await client.runs.wait( // (1)!
     threadID,
     assistantID,
     {
         input: input,
-        # highlight-next-line
         interruptBefore: ["node_a"], // (2)!
-        # highlight-next-line
         interruptAfter: ["node_b", "node_c"] // (3)!
     }
 )
-```
+```
 
 1. `client.runs.wait` is called with the `interruptBefore` and `interruptAfter` parameters. This is a run-time configuration and can be changed for every invocation.
 2. `interruptBefore` specifies the nodes where execution should pause before the node is executed.

src/langgraph-platform/autogen-integration.mdx

Lines changed: 2 additions & 5 deletions
@@ -126,9 +126,8 @@ display(Image(graph.get_graph().draw_mermaid_png()))
 
 Before deploying to LangGraph Platform, you can test the graph locally:
 
-```python
+```python {highlight={2,13}}
 # pass the thread ID to persist agent outputs for future interactions
-# highlight-next-line
 config = {"configurable": {"thread_id": "1"}}
 
 for chunk in graph.stream(
@@ -140,7 +139,6 @@ for chunk in graph.stream(
             }
         ]
     },
-    # highlight-next-line
     config,
 ):
     print(chunk)
@@ -167,7 +165,7 @@ To find numbers between 10 and 30 in the Fibonacci sequence, we can generate the
 
 Since we're leveraging LangGraph's [persistence](/oss/persistence) features we can now continue the conversation using the same thread ID -- LangGraph will automatically pass previous history to the AutoGen agent:
 
-```python
+```python {highlight={10}}
 for chunk in graph.stream(
     {
         "messages": [
@@ -177,7 +175,6 @@ for chunk in graph.stream(
             }
         ]
     },
-    # highlight-next-line
     config,
 ):
     print(chunk)

src/langgraph-platform/custom-auth.mdx

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ To leverage custom authentication and access user-level metadata in your deploym
 1. This handler receives the request (headers, etc.), validates the user, and returns a dictionary with at least an identity field.
 2. You can add any custom fields you want (e.g., OAuth tokens, roles, org IDs, etc.).
 2. In your `langgraph.json`, add the path to your auth file:
-```json hl_lines="7-9"
+```json {highlight={7-9}}
 {
     "dependencies": ["."],
     "graphs": {

src/langgraph-platform/custom-lifespan.mdx

Lines changed: 1 addition & 2 deletions
@@ -23,7 +23,7 @@ langgraph new --template=new-langgraph-project-python my_new_project
 
 Once you have a LangGraph project, add the following app code:
 
-```python
+```python {highlight={19}}
 # ./src/agent/webapp.py
 from contextlib import asynccontextmanager
 from fastapi import FastAPI
@@ -42,7 +42,6 @@ async def lifespan(app: FastAPI):
     # Clean up connections
     await engine.dispose()
 
-# highlight-next-line
 app = FastAPI(lifespan=lifespan)
 
 # ... can add custom routes if needed.

src/langgraph-platform/custom-middleware.mdx

Lines changed: 1 addition & 2 deletions
@@ -23,12 +23,11 @@ langgraph new --template=new-langgraph-project-python my_new_project
 
 Once you have a LangGraph project, add the following app code:
 
-```python
+```python {highlight={5}}
 # ./src/agent/webapp.py
 from fastapi import FastAPI, Request
 from starlette.middleware.base import BaseHTTPMiddleware
 
-# highlight-next-line
 app = FastAPI()
 
 class CustomHeaderMiddleware(BaseHTTPMiddleware):

src/langgraph-platform/custom-routes.mdx

Lines changed: 1 addition & 2 deletions
@@ -20,11 +20,10 @@ langgraph new --template=new-langgraph-project-python my_new_project
 
 Once you have a LangGraph project, add the following app code:
 
-```python
+```python {highlight={4}}
 # ./src/agent/webapp.py
 from fastapi import FastAPI
 
-# highlight-next-line
 app = FastAPI()
 
 
src/langgraph-platform/human-in-the-loop-time-travel.mdx

Lines changed: 6 additions & 11 deletions
@@ -159,15 +159,14 @@ To time travel using the LangGraph Server API (via the LangGraph SDK):
 
 <Tabs>
 <Tab title="Python">
-```python
+```python {highlight={4}}
 new_config = await client.threads.update_state(
     thread_id,
     {"topic": "chickens"},
-    # highlight-next-line
     checkpoint_id=selected_state["checkpoint_id"]
 )
 print(new_config)
-```
+```
 </Tab>
 <Tab title="JavaScript">
 ```js
@@ -199,30 +198,26 @@ To time travel using the LangGraph Server API (via the LangGraph SDK):
 
 <Tabs>
 <Tab title="Python">
-```python
+```python {highlight={4,5}}
 await client.runs.wait(
     thread_id,
     assistant_id,
-    # highlight-next-line
     input=None,
-    # highlight-next-line
     checkpoint_id=new_config["checkpoint_id"]
 )
-```
+```
 </Tab>
 <Tab title="JavaScript">
-```js
+```javascript {highlight={5,6}}
 await client.runs.wait(
     threadID,
     assistantID,
     {
-        // highlight-next-line
         input: null,
-        // highlight-next-line
         checkpointId: newConfig["checkpoint_id"]
     }
 );
-```
+```
 </Tab>
 <Tab title="cURL">
 ```bash

src/langgraph-platform/langgraph-basics/2-add-tools.mdx

Lines changed: 2 additions & 3 deletions
@@ -89,7 +89,7 @@ llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
 
 We can now incorporate it into a `StateGraph`:
 
-```python hl_lines="15"
+```python {highlight={14}}
 from typing import Annotated
 
 from typing_extensions import TypedDict
@@ -103,7 +103,6 @@ class State(TypedDict):
 graph_builder = StateGraph(State)
 
 # Modification: tell the LLM which tools it can call
-# highlight-next-line
 llm_with_tools = llm.bind_tools(tools)
 
 def chatbot(state: State):
@@ -293,7 +292,7 @@ For ease of use, adjust your code to replace the following with LangGraph prebui
 
 <ChatModelTabs/>
 
-```python hl_lines="25 30"
+```python {highlight={25,30}}
 from typing import Annotated
 
 from langchain_tavily import TavilySearch

src/langgraph-platform/langgraph-basics/3-add-memory.mdx

Lines changed: 2 additions & 3 deletions
@@ -114,11 +114,10 @@ Of course, I remember your name, Will. I always try to pay attention to importan
 
 Don't believe me? Try this using a different config.
 
-```python
+```python {highlight={4}}
 # The only difference is we change the `thread_id` here to "2" instead of "1"
 events = graph.stream(
     {"messages": [{"role": "user", "content": user_input}]},
-    # highlight-next-line
     {"configurable": {"thread_id": "2"}},
     stream_mode="values",
 )
@@ -168,7 +167,7 @@ from langchain.chat_models import init_chat_model
 llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
 ``` */}
 
-```python hl_lines="36 37"
+```python {highlight={36,37}}
 from typing import Annotated
 
 from langchain.chat_models import init_chat_model
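The other half of the migration removes `# highlight-next-line` / `// highlight-next-line` comments from fence bodies and folds the positions of the lines they marked into the `{highlight={…}}` list. A sketch of that step follows; the helper is hypothetical, not the commit's actual tooling.

```python
# Hypothetical helper -- not the tooling used in this commit.
def collect_highlights(code_lines):
    """Drop highlight-next-line comments and return the cleaned lines
    plus the 1-based positions (within the fence body) they marked."""
    cleaned, highlights = [], []
    pending = False
    for line in code_lines:
        if line.strip() in ("# highlight-next-line", "// highlight-next-line"):
            pending = True  # the next kept line is the highlighted one
            continue
        cleaned.append(line)
        if pending:
            highlights.append(len(cleaned))
            pending = False
    return cleaned, highlights
```

Applied to the first Python tab of `add-human-in-the-loop.mdx`, the `from langgraph_sdk.schema import Command` line lands at position 2 after the comment is dropped, which is consistent with the new `{highlight={2,34}}` fence in that hunk.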
