
Commit 066e3fb

docs: Consolidate experiment content for LS nav
1 parent bc0a081 · commit 066e3fb

11 files changed: +529 −136 lines

src/docs.json

Lines changed: 399 additions & 0 deletions
Large diffs are not rendered by default.

src/langsmith/analyze-single-experiment.mdx

Lines changed: 0 additions & 94 deletions
This file was deleted.

src/langsmith/bind-evaluator-to-dataset.mdx

Lines changed: 1 addition & 1 deletion
@@ -50,5 +50,5 @@ def perform_eval(run, example):
 
 ## Next steps
 
-* Analyze your experiment results in the [experiments tab](/langsmith/analyze-single-experiment)
+* Analyze your experiment results in the [experiments tab](/langsmith/work-with-experiments)
 * Compare your experiment results in the [comparison view](/langsmith/compare-experiment-results)

src/langsmith/download-experiment-results-as-csv.mdx

Lines changed: 0 additions & 13 deletions
This file was deleted.

src/langsmith/evaluate-pairwise.mdx

Lines changed: 1 addition & 1 deletion
@@ -78,7 +78,7 @@ Note that you should choose a feedback key that is distinct from standard feedba
 The following example uses [a prompt](https://smith.langchain.com/hub/langchain-ai/pairwise-evaluation-2) which asks the LLM to decide which is better between two AI assistant responses. It uses structured output to parse the AI's response: 0, 1, or 2.
 
 <Info>
-In the Python example below, we are pulling [this structured prompt](https://smith.langchain.com/hub/langchain-ai/pairwise-evaluation-2) from the [LangChain Hub](/langsmith/langchain-hub) and using it with a LangChain chat model wrapper.
+In the Python example below, we are pulling [this structured prompt](https://smith.langchain.com/hub/langchain-ai/pairwise-evaluation-2) from the [LangChain Hub](/langsmith/manage-prompts) and using it with a LangChain chat model wrapper.
 
 **Usage of LangChain is totally optional.** To illustrate this point, the TypeScript example uses the OpenAI SDK directly.
 </Info>
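
For context, a minimal sketch of the pattern the `<Info>` block describes: pull the structured pairwise prompt from the Hub and run it with a LangChain chat model wrapper. The input variable names and the model choice below are assumptions for illustration, not taken from the doc.

```python
# Sketch only: assumes `langsmith` and `langchain-openai` are installed and
# OPENAI_API_KEY / LANGSMITH_API_KEY are set in the environment.
from langsmith import Client
from langchain_openai import ChatOpenAI

client = Client()

# Pull the structured pairwise-evaluation prompt referenced in the diff.
prompt = client.pull_prompt("langchain-ai/pairwise-evaluation-2")

# Bind a chat model; the structured prompt asks the model for 0, 1, or 2.
chain = prompt | ChatOpenAI(model="gpt-4o-mini")

# Hypothetical input variable names, for illustration only.
preference = chain.invoke({
    "question": "How do I reset my password?",
    "answer_a": "Use the 'Forgot password' link on the sign-in page.",
    "answer_b": "Email support and wait for a reply.",
})
print(preference)  # structured output indicating 0 (tie), 1, or 2
```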

src/langsmith/evaluation-overview.mdx

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ Learn [how run pairwise evaluations](/langsmith/evaluate-pairwise).
 
 ## Experiment
 
-Each time we evaluate an application on a dataset, we are conducting an experiment. An experiment contains the results of running a specific version of your application on the dataset. To understand how to use the LangSmith experiment view, see [how to analyze experiment results](/langsmith/analyze-single-experiment).
+Each time we evaluate an application on a dataset, we are conducting an experiment. An experiment contains the results of running a specific version of your application on the dataset. To understand how to use the LangSmith experiment view, see [how to analyze experiment results](/langsmith/work-with-experiments).
 
 ![Experiment view](/langsmith/images/experiment-view.png)
 
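
As a reference point for the paragraph above, here is a hedged sketch of how an experiment is produced with the SDK: a target function is run over a dataset with an evaluator, and the results land in a new experiment. The dataset name, target, and evaluator are placeholders, not part of the doc.

```python
# Minimal sketch: assumes a recent `langsmith` SDK and an existing dataset
# whose examples have a "question" input and an "answer" reference output.
from langsmith import Client, evaluate

client = Client()

def target(inputs: dict) -> dict:
    # Stand-in for the application version under test.
    return {"answer": inputs["question"].strip().lower()}

def exact_match(run, example):
    # Classic (run, example) evaluator signature; returns one feedback score.
    return {
        "key": "exact_match",
        "score": int(run.outputs["answer"] == example.outputs["answer"]),
    }

results = evaluate(
    target,
    data="my-dataset",                  # placeholder dataset name
    evaluators=[exact_match],
    experiment_prefix="overview-demo",  # each call creates a new experiment
)
```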

src/langsmith/home.mdx

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ The quality and development speed of AI applications depends on high-quality eva
 
 * Get started by [creating your first evaluation](/langsmith/run-evaluation-from-prompt-playground).
 * Quickly assess the performance of your application using our [off-the-shelf evaluators](https://docs.smith.langchain.com/langsmith/prebuilt-evaluators) as a starting point.
-* [Analyze results](/langsmith/analyze-single-experiment) of evaluations in the LangSmith UI and [compare results](https://docs.smith.langchain.com/langsmith/compare-experiment-results) over time.
+* [Analyze results](/langsmith/work-with-experiments) of evaluations in the LangSmith UI and [compare results](https://docs.smith.langchain.com/langsmith/compare-experiment-results) over time.
 * Easily collect [human feedback](/langsmith/annotation-queues) on your data to improve your application.
 
 ## Prompt Engineering

src/langsmith/manage-datasets-in-application.mdx

Lines changed: 1 addition & 1 deletion
@@ -121,7 +121,7 @@ In order to create and manage splits in the app, you can select some examples in
 
 ### Edit example metadata
 
-You can add metadata to your examples by clicking on an example and then clicking "Edit" on the top righthand side of the popover. From this page, you can update/delete existing metadata, or add new metadata. You may use this to store information about your examples, such as tags or version info, which you can then [group by](/langsmith/analyze-single-experiment#group-results-by-metadata) when analyzing experiment results or [filter by](/langsmith/manage-datasets-programmatically#list-examples-by-metadata) when you call `list_examples` in the SDK.
+You can add metadata to your examples by clicking on an example and then clicking "Edit" on the top righthand side of the popover. From this page, you can update/delete existing metadata, or add new metadata. You may use this to store information about your examples, such as tags or version info, which you can then [group by](/langsmith/work-with-experiments#group-results-by-metadata) when analyzing experiment results or [filter by](/langsmith/manage-datasets-programmatically#list-examples-by-metadata) when you call `list_examples` in the SDK.
 
 ![Add Metadata](/langsmith/images/add-metadata.gif)
 
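
A short, hypothetical sketch of the SDK-side filter the updated link points at: listing examples by their metadata with `list_examples`. The dataset name and metadata keys below are made up for illustration.

```python
# Sketch only: assumes a `langsmith` SDK version whose `list_examples`
# accepts a `metadata` filter, and an existing dataset.
from langsmith import Client

client = Client()

examples = client.list_examples(
    dataset_name="my-dataset",     # placeholder dataset name
    metadata={"version": "v2"},    # placeholder metadata filter
)
for example in examples:
    print(example.id, example.metadata)
```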

src/langsmith/manage-prompts-programmatically.mdx

Lines changed: 1 addition & 1 deletion
@@ -214,7 +214,7 @@ Similar to pushing a prompt, you can also pull a prompt as a RunnableSequence of
 </Tab>
 </Tabs>
 
-When pulling a prompt, you can also specify a specific commit hash or [commit tag](/langsmith/prompt-tags) to pull a specific version of the prompt.
+When pulling a prompt, you can also specify a specific commit hash or [commit tag](/langsmith/manage-prompts) to pull a specific version of the prompt.
 
 <Tabs>
 <Tab title="Python">
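
For reference, a hedged sketch of the versioned pull described in the changed line: pinning a prompt to a commit hash or a commit tag. The prompt name, hash, and tag below are hypothetical.

```python
# Sketch only: assumes a recent `langsmith` SDK with `Client.pull_prompt`.
from langsmith import Client

client = Client()

latest = client.pull_prompt("my-prompt")            # latest commit
pinned = client.pull_prompt("my-prompt:0abc1234")   # hypothetical commit hash
tagged = client.pull_prompt("my-prompt:prod")       # hypothetical commit tag
```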

src/langsmith/renaming-experiment.mdx

Lines changed: 0 additions & 23 deletions
This file was deleted.
