Iterative improvements to OSS <> LangSmith (#1547)
This PR adds some clarification to the OSS docs for deployment and observability in LangSmith. There is likely more work to be done here, but this PR does the following:
- Updates the nav to reflect the workflow of going from dev to production with LangSmith.
- Updates some of the intros to that content to contextualize the LangChain or LangGraph --> LangSmith path.
- More links!
`src/oss/langchain/deploy.mdx` (5 additions & 4 deletions)
@@ -1,17 +1,18 @@
  ---
- title: Deploy
+ title: LangSmith Deployment
+ sidebarTitle: Deployment
  ---

  import deploy from '/snippets/oss/deploy.mdx';

- LangSmith is the fastest way to turn agents into production systems. Traditional hosting platforms are built for stateless, short-lived web apps, while LangGraph is **purpose-built for stateful, long-running agents**, so you can go from repo to reliable cloud deployment in minutes.
+ When you're ready to deploy your LangChain agent to production, LangSmith provides a managed hosting platform designed for agent workloads. Traditional hosting platforms are built for stateless, short-lived web applications, while LangGraph is **purpose-built for stateful, long-running agents** that require persistent state and background execution. LangSmith handles the infrastructure, scaling, and operational concerns so you can deploy directly from your repository.

  ## Prerequisites

  Before you begin, ensure you have the following:

- * A [GitHub account](https://github.com/)
- * A [LangSmith account](https://smith.langchain.com/) (free to sign up)
+ - A [GitHub account](https://github.com/)
+ - A [LangSmith account](https://smith.langchain.com/) (free to sign up)
- Observability is crucial for understanding how your agents behave in production. With LangChain's @[`create_agent`], you get built-in observability through [LangSmith](https://smith.langchain.com/) - a powerful platform for tracing, debugging, evaluating, and monitoring your LLM applications.
+ As you build and run agents with LangChain, you need visibility into how they behave: which [tools](/oss/langchain/tools) they call, what prompts they generate, and how they make decisions. LangChain agents built with @[`create_agent`] automatically support tracing through [LangSmith](/langsmith/home), a platform for capturing, debugging, evaluating, and monitoring LLM application behavior.

- Traces capture every step your agent takes, from the initial user input to the final response, including all tool calls, model interactions, and decision points. This enables you to debug your agents, evaluate performance, and monitor usage.
+ [_Traces_](/langsmith/observability-concepts#traces) record every step of your agent's execution, from the initial user input to the final response, including all tool calls, model interactions, and decision points. This execution data helps you debug issues, evaluate performance across different inputs, and monitor usage patterns in production.
+
+ This guide shows you how to enable tracing for your LangChain agents and use LangSmith to analyze their execution.

  ## Prerequisites

  Before you begin, ensure you have the following:

- * A [LangSmith account](https://smith.langchain.com/) (free to sign up)
+ - **A LangSmith account**: Sign up (for free) or log in at [smith.langchain.com](https://smith.langchain.com).
+ - **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide.

  ## Enable tracing

@@ -23,11 +27,7 @@ export LANGSMITH_TRACING=true
  export LANGSMITH_API_KEY=<your-api-key>
  ```

- <Info>
- You can get your API key from your [LangSmith settings](https://smith.langchain.com/settings).
- </Info>
-
- ## Quick start
+ ## Quickstart

  No extra code is needed to log a trace to LangSmith. Just run your agent code as you normally would:
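As a sketch of what "no extra code" means here (the commented-out agent call is illustrative, not taken from this PR), tracing is driven entirely by environment variables:

```python
import os

# Tracing is configured through environment variables alone; the values
# below are placeholders -- substitute your real LangSmith API key.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-api-key>"

# With these set, any agent built with create_agent logs a trace to
# LangSmith automatically when invoked -- no tracing code is required:
#
#   from langchain.agents import create_agent
#   agent = create_agent(model="...", tools=[...])
#   agent.invoke({"messages": [{"role": "user", "content": "Hello"}]})
```

In practice you would set these in your shell or `.env` file rather than in code; the point is that the agent itself stays unchanged.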
`src/oss/langgraph/application-structure.mdx` (5 additions & 5 deletions)
@@ -2,13 +2,13 @@
  title: Application structure
  ---

-
-
- ## Overview
-
  A LangGraph application consists of one or more graphs, a configuration file (`langgraph.json`), a file that specifies dependencies, and an optional `.env` file that specifies environment variables.

- This guide shows a typical structure of an application and shows how the required information to deploy an application using the LangSmith is specified.
+ This guide shows a typical structure of an application and shows you how to provide the required configuration to deploy an application with [LangSmith Deployment](/langsmith/deployments).
+
+ <Info>
+ LangSmith Deployment is a managed hosting platform for deploying and scaling LangGraph agents. It handles the infrastructure, scaling, and operational concerns so you can deploy your stateful, long-running agents directly from your repository. Learn more in the [Deployment documentation](/langsmith/deployments).
`src/oss/langgraph/deploy.mdx` (2 additions & 2 deletions)
@@ -1,10 +1,10 @@
  ---
- title: Deploy
+ title: LangSmith Deployment
  ---

  import deploy from '/snippets/oss/deploy.mdx';

- LangSmith is the fastest way to turn agents into production systems. Traditional hosting platforms are built for stateless, short-lived web apps, while LangGraph is **purpose-built for stateful, long-running agents**, so you can go from repo to reliable cloud deployment in minutes.
+ When you're ready to deploy your agent to production, LangSmith provides a managed hosting platform designed for agent workloads. Traditional hosting platforms are built for stateless, short-lived web applications, while LangGraph is **purpose-built for stateful, long-running agents** that require persistent state and background execution. LangSmith handles the infrastructure, scaling, and operational concerns so you can deploy directly from your repository.
`src/oss/langgraph/overview.mdx` (13 additions & 3 deletions)
@@ -117,9 +117,19 @@ LangGraph provides low-level supporting infrastructure for *any* long-running, s
  While LangGraph can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools for building agents. To improve your LLM application development, pair LangGraph with:

- * [LangSmith](http://www.langchain.com/langsmith) — Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- * [LangGraph](/oss/langgraph/overview) — Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [Studio](/langsmith/studio).
- * [LangChain](/oss/langchain/overview) - Provides integrations and composable components to streamline LLM application development. Contains agent abstractions built on top of LangGraph.
+ Trace requests, evaluate outputs, and monitor deployments in one place. Prototype locally with LangGraph, then move to production with integrated observability and evaluation to build more reliable agent systems.
+ Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in Studio.
`src/snippets/oss/studio.mdx` (42 additions & 30 deletions)
@@ -1,27 +1,23 @@
- This guide will walk you through how to use **Studio** to visualize, interact, and debug your agent locally.
+ When building agents with LangChain locally, it's helpful to visualize what's happening inside your agent, interact with it in real-time, and debug issues as they occur. **LangSmith Studio** is a free visual interface for developing and testing your LangChain agents from your local machine.

- Studio is our free-to-use, powerful agent IDE that integrates with [LangSmith](/langsmith/home) to enable tracing, evaluation, and prompt engineering. See exactly how your agent thinks, trace every decision, and ship smarter, more reliable agents.
+ Studio connects to your locally running agent to show you each step your agent takes: the prompts sent to the model, tool calls and their results, and the final output. You can test different inputs, inspect intermediate states, and iterate on your agent's behavior without additional code or deployment.
+ This page describes how to set up Studio with your local LangChain agent.

  ## Prerequisites

  Before you begin, ensure you have the following:

- * An API key for [LangSmith](https://smith.langchain.com/settings) (free to sign up)
-
- ## Setup local Agent server
+ - **A LangSmith account**: Sign up (for free) or log in at [smith.langchain.com](https://smith.langchain.com).
+ - **A LangSmith API key**: Follow the [Create an API key](/langsmith/create-account-api-key#create-an-api-key) guide.
+ - If you don't want data [traced](/langsmith/observability-concepts#traces) to LangSmith, set `LANGSMITH_TRACING=false` in your application's `.env` file. With tracing disabled, no data leaves your local server.
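For context, the tracing opt-out described in that last bullet is a single line in the same `.env` file the server already reads (a sketch; the API key value is a placeholder):

```bash
# .env -- keep all data on your local server by disabling tracing
LANGSMITH_TRACING=false
LANGSMITH_API_KEY=lsv2...
```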
+
+ ## Set up local Agent server

  ### 1. Install the LangGraph CLI

+ The [LangGraph CLI](/langsmith/cli) provides a local development server (also called [Agent Server](/langsmith/agent-server)) that connects your agent to Studio.
+
  :::python
  ```shell
  # Python >= 3.11 is required.
@@ -37,7 +33,7 @@ npx @langchain/langgraph-cli

  ### 2. Prepare your agent

- We'll use the following simple agent as an example:
+ If you already have a LangChain agent, you can use it directly. This example uses a simple email agent:

  ```python title="agent.py"
  from langchain.agents import create_agent
@@ -62,10 +58,10 @@ agent = create_agent(

  ### 3. Environment variables

- Create a `.env` file in the root of your project and fill in the necessary API keys. We'll need to set the `LANGSMITH_API_KEY` environment variable to the API key you get from [LangSmith](https://smith.langchain.com/settings).
+ Studio requires a LangSmith API key to connect your local agent. Create a `.env` file in the root of your project and add your API key from [LangSmith](https://smith.langchain.com/settings).

  <Warning>
- Be sure not to commit your `.env` to version control systems such as Git!
+ Ensure your `.env` file is not committed to version control, such as Git.
  </Warning>

  ```bash .env
@@ -74,7 +70,7 @@ LANGSMITH_API_KEY=lsv2...

  ### 4. Create a LangGraph config file

- Inside your app's directory, create a configuration file `langgraph.json`:
+ The LangGraph CLI uses a configuration file to locate your agent and manage dependencies. Create a `langgraph.json` file in your app's directory:

  ```json title="langgraph.json"
  {
@@ -86,13 +82,13 @@ Inside your app's directory, create a configuration file `langgraph.json`:
  }
  ```

- @[`create_agent`] automatically returns a compiled LangGraph graph that we can pass to the `graphs` key in our configuration file.
+ The @[`create_agent`] function automatically returns a compiled LangGraph graph, which is what the `graphs` key expects in the configuration file.

  <Info>
- See the [LangGraph configuration file reference](/langsmith/cli#configuration-file) for detailed explanations of each key in the JSON object of the configuration file.
+ For detailed explanations of each key in the JSON object of the configuration file, refer to the [LangGraph configuration file reference](/langsmith/cli#configuration-file).
  </Info>
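As additional context for this diff (the body of the `langgraph.json` shown above is truncated here), a minimal configuration for an agent defined in `agent.py` might look like the following; the graph name and file path are illustrative assumptions, not taken from the PR:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:agent"
  },
  "env": ".env"
}
```

The `graphs` key maps a graph name to a `path/to/file:variable` reference, which is where the compiled graph returned by `create_agent` is picked up.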
- So far, our project structure looks like this:
+ At this point, the project structure will look like this:

  ```bash
  my-app/
@@ -105,7 +101,7 @@ my-app/
  ### 5. Install dependencies

  :::python
- In the root of your new LangGraph app, install the dependencies:
+ Install your project dependencies from the root directory:

  <CodeGroup>
  ```shell pip
@@ -125,7 +121,7 @@ yarn install

  ### 6. View your agent in Studio

- Start your Agent server:
+ Start the development server to connect your agent to Studio:

  :::python
  ```shell
@@ -143,18 +139,34 @@ npx @langchain/langgraph-cli dev
  Safari blocks `localhost` connections to Studio. To work around this, run the above command with `--tunnel` to access Studio via a secure tunnel.
  </Warning>

- Your agent will be accessible via API (`http://127.0.0.1:2024`) and the Studio UI `https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024`:
+ Once the server is running, your agent is accessible both via API at `http://127.0.0.1:2024` and through the Studio UI at `https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024`:

  <Frame>
  
  </Frame>

- Studio makes each step of your agent easily observable. Replay any input and inspect the exact prompt, tool arguments, return values, and token/latency metrics. If a tool throws an exception, Studio records it with surrounding state so you can spend less time debugging.
+ With Studio connected to your local agent, you can iterate quickly on your agent's behavior. Run a test input, inspect the full execution trace including prompts, tool arguments, return values, and token/latency metrics. When something goes wrong, Studio captures exceptions with the surrounding state to help you understand what happened.

- Keep your dev server running, edit prompts or tool signatures, and watch Studio hot-reload. Re-run the conversation thread from any step to verify behavior changes. See [Manage threads](/langsmith/use-studio#edit-thread-history) for more details.
+ The development server supports hot-reloading—make changes to prompts or tool signatures in your code, and Studio reflects them immediately. Re-run conversation threads from any step to test your changes without starting over. This workflow scales from simple single-tool agents to complex multi-node graphs.

- As your agent grows, the same view scales from a single-tool demo to multi-node graphs, keeping decisions legible and reproducible.
+ For more information on how to run Studio, refer to the following guides in the [LangSmith docs](/langsmith/home):

- <Tip>
- For an in-depth look at Studio, check out the [overview page](/langsmith/studio).