
Commit c6ffac3

refactor: mdx lint (#32282)
1 parent a07d2c5 commit c6ffac3

File tree

254 files changed, +1007 -1011 lines changed


docs/docs/additional_resources/arxiv_references.mdx

Lines changed: 41 additions & 42 deletions
Large diffs are not rendered by default.

docs/docs/changes/changelog/core.mdx

Lines changed: 1 addition & 1 deletion
@@ -7,4 +7,4 @@
 - `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
 - `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
 - `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
-- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
+- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
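The deprecation entries above all point at the same migration; a minimal sketch, assuming `langchain_openai`'s `ChatOpenAI` as the model (an illustrative choice, not something this commit touches):

```python
# Hedged migration sketch for the deprecations listed above.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# Before (removed in 0.2.0): model.predict("Hello") or model("Hello")
# After: the Runnable interface
message = model.invoke("Hello")  # returns an AIMessage
print(message.content)

# Async counterparts: replace apredict / apredict_messages with ainvoke, e.g.
#   message = await model.ainvoke("Hello")
```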

docs/docs/changes/changelog/langchain.mdx

Lines changed: 1 addition & 1 deletion
@@ -90,4 +90,4 @@ Deprecated classes and methods will be removed in 0.2.0
 | OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
 | SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
 | StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
-| XMLAgent | create_xml_agent | Use LCEL builder over a class |
+| XMLAgent | create_xml_agent | Use LCEL builder over a class |
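For the "Use LCEL builder over a class" guidance in the table, a hedged sketch using `create_openai_tools_agent`; the tool, prompt, and model below are illustrative stand-ins, not part of this diff:

```python
# Hedged sketch of replacing an agent class with an LCEL builder function.
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_openai_tools_agent(llm, [word_length], prompt)
executor = AgentExecutor(agent=agent, tools=[word_length])
# executor.invoke({"input": "How many letters are in 'LangChain'?"})
```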

docs/docs/concepts/agents.mdx

Lines changed: 2 additions & 2 deletions
@@ -11,8 +11,8 @@ Please see the following resources for more information:
 
 ## Legacy agent concept: AgentExecutor
 
-LangChain previously introduced the `AgentExecutor` as a runtime for agents.
-While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
+LangChain previously introduced the `AgentExecutor` as a runtime for agents.
+While it served as an excellent starting point, its limitations became apparent when dealing with more sophisticated and customized agents.
 As a result, we're gradually phasing out `AgentExecutor` in favor of more flexible solutions in LangGraph.
 
 ### Transitioning from AgentExecutor to LangGraph
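For context on the LangGraph direction mentioned above, a minimal sketch using the prebuilt `create_react_agent` helper; the model and tool are illustrative stand-ins:

```python
# Hedged sketch of the LangGraph replacement for AgentExecutor.
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


graph = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [add])
# graph.invoke({"messages": [("user", "What is 2 + 3?")]})
```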

docs/docs/concepts/callbacks.mdx

Lines changed: 1 addition & 1 deletion
@@ -70,4 +70,4 @@ This is a common reason why you may fail to see events being emitted from custom
 runnables or tools.
 :::
 
-For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
+For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
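As a companion to the callbacks page this hunk touches, a hedged sketch of a custom handler; the handler class and the invocation pattern are illustrative:

```python
# Hedged sketch of a custom callback handler.
from langchain_core.callbacks import BaseCallbackHandler


class PrintTokenHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called for each streamed token when streaming is enabled.
        print(token, end="", flush=True)


# Typical usage: pass the handler at invocation time via the config dict, e.g.
#   model.invoke("Hi", config={"callbacks": [PrintTokenHandler()]})
```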

docs/docs/concepts/chat_history.mdx

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ A full conversation often involves a combination of two patterns of alternating
 
 Since chat models have a maximum limit on input size, it's important to manage chat history and trim it as needed to avoid exceeding the [context window](/docs/concepts/chat_models/#context-window).
 
-While processing chat history, it's essential to preserve a correct conversation structure.
+While processing chat history, it's essential to preserve a correct conversation structure.
 
 Key guidelines for managing chat history:
 
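The trimming guidance above maps onto `langchain_core`'s `trim_messages` utility; a minimal sketch, using a per-message count as a stand-in for a real token counter:

```python
# Hedged sketch of trimming chat history while preserving conversation
# structure; counting each message as one "token" is an illustrative shortcut.
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

history = [
    SystemMessage("You are a helpful assistant."),
    HumanMessage("Hi, I'm Bob."),
    AIMessage("Hello Bob! How can I help?"),
    HumanMessage("What's my name?"),
]

trimmed = trim_messages(
    history,
    max_tokens=3,         # keep at most 3 messages under this counter
    token_counter=len,    # count messages instead of real tokens
    strategy="last",      # keep the most recent messages
    include_system=True,  # always keep the system message
    start_on="human",     # ensure the kept history starts on a human turn
)
```
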
docs/docs/concepts/chat_models.mdx

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ If the input exceeds the context window, the model may not be able to process the
 The size of the input is measured in [tokens](/docs/concepts/tokens) which are the unit of processing that the model uses.
 
 ## Advanced topics
-
+
 ### Rate-limiting
 
 Many chat model providers impose a limit on the number of requests that can be made in a given time period.
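The rate-limiting topic introduced above has a client-side counterpart in `langchain_core`; a hedged sketch using `InMemoryRateLimiter`, with illustrative numbers that should be tuned to the provider's actual quota:

```python
# Hedged sketch of client-side rate limiting for a chat model.
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_openai import ChatOpenAI

rate_limiter = InMemoryRateLimiter(
    requests_per_second=0.1,    # at most one request every 10 seconds
    check_every_n_seconds=0.1,  # how often to poll for an available slot
    max_bucket_size=10,         # allow short bursts of up to 10 requests
)

model = ChatOpenAI(model="gpt-4o-mini", rate_limiter=rate_limiter)
# model.invoke("Hello")  # blocks until the limiter grants a slot
```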

docs/docs/concepts/embedding_models.mdx

Lines changed: 12 additions & 12 deletions
@@ -15,9 +15,9 @@ Embedding models can also be [multimodal](/docs/concepts/multimodality) though s
 
 Imagine being able to capture the essence of any text - a tweet, document, or book - in a single, compact representation.
 This is the power of embedding models, which lie at the heart of many retrieval systems.
-Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
+Embedding models transform human language into a format that machines can understand and compare with speed and accuracy.
 These models take text as input and produce a fixed-length array of numbers, a numerical fingerprint of the text's semantic meaning.
-Embeddings allow search system to find relevant documents not just based on keyword matches, but on semantic understanding.
+Embeddings allow search system to find relevant documents not just based on keyword matches, but on semantic understanding.
 
 ## Key concepts
 
@@ -27,16 +27,16 @@ Embeddings allow search system to find relevant documents not just based on keyw
 
 (2) **Measure similarity**: Embedding vectors can be compared using simple mathematical operations.
 
-## Embedding
+## Embedding
 
-### Historical context
+### Historical context
 
-The landscape of embedding models has evolved significantly over the years.
-A pivotal moment came in 2018 when Google introduced [BERT (Bidirectional Encoder Representations from Transformers)](https://www.nvidia.com/en-us/glossary/bert/).
+The landscape of embedding models has evolved significantly over the years.
+A pivotal moment came in 2018 when Google introduced [BERT (Bidirectional Encoder Representations from Transformers)](https://www.nvidia.com/en-us/glossary/bert/).
 BERT applied transformer models to embed text as a simple vector representation, which lead to unprecedented performance across various NLP tasks.
-However, BERT wasn't optimized for generating sentence embeddings efficiently.
+However, BERT wasn't optimized for generating sentence embeddings efficiently.
 This limitation spurred the creation of [SBERT (Sentence-BERT)](https://www.sbert.net/examples/training/sts/README.html), which adapted the BERT architecture to generate semantically rich sentence embeddings, easily comparable via similarity metrics like cosine similarity, dramatically reduced the computational overhead for tasks like finding similar sentences.
-Today, the embedding model ecosystem is diverse, with numerous providers offering their own implementations.
+Today, the embedding model ecosystem is diverse, with numerous providers offering their own implementations.
 To navigate this variety, researchers and practitioners often turn to benchmarks like the Massive Text Embedding Benchmark (MTEB) [here](https://huggingface.co/blog/mteb) for objective comparisons.
 
 :::info[Further reading]
@@ -93,9 +93,9 @@ LangChain offers many embedding model integrations which you can find [on the em
 
 ## Measure similarity
 
-Each embedding is essentially a set of coordinates, often in a high-dimensional space.
+Each embedding is essentially a set of coordinates, often in a high-dimensional space.
 In this space, the position of each point (embedding) reflects the meaning of its corresponding text.
-Just as similar words might be close to each other in a thesaurus, similar concepts end up close to each other in this embedding space.
+Just as similar words might be close to each other in a thesaurus, similar concepts end up close to each other in this embedding space.
 This allows for intuitive comparisons between different pieces of text.
 By reducing text to these numerical representations, we can use simple mathematical operations to quickly measure how alike two pieces of text are, regardless of their original length or structure.
 Some common similarity metrics include:
@@ -118,7 +118,7 @@ def cosine_similarity(vec1, vec2):
 
 similarity = cosine_similarity(query_result, document_result)
 print("Cosine Similarity:", similarity)
-```
+```
 
 :::info[Further reading]
 
@@ -127,4 +127,4 @@ print("Cosine Similarity:", similarity)
 * See Pinecone's [blog post](https://www.pinecone.io/learn/vector-similarity/) on similarity metrics.
 * See OpenAI's [FAQ](https://platform.openai.com/docs/guides/embeddings/faq) on what similarity metric to use with OpenAI embeddings.
 
-:::
+:::
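The hunks above only show the tail of the page's cosine-similarity snippet; a self-contained reconstruction, with illustrative vectors standing in for real embedding output:

```python
# Hedged reconstruction of the cosine-similarity snippet referenced above;
# query_result and document_result stand in for two embedding vectors.
import numpy as np


def cosine_similarity(vec1, vec2):
    """Cosine of the angle between two embedding vectors."""
    vec1, vec2 = np.asarray(vec1), np.asarray(vec2)
    return float(np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)))


query_result = [0.1, 0.4, 0.9]     # illustrative embedding of a query
document_result = [0.2, 0.3, 0.8]  # illustrative embedding of a document
print("Cosine Similarity:", cosine_similarity(query_result, document_result))
```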

docs/docs/concepts/evaluation.mdx

Lines changed: 0 additions & 1 deletion
@@ -14,4 +14,3 @@ This process is vital for building reliable applications.
 - It allows you to track results over time and automatically run your evaluators on a schedule or as part of CI/Code
 
 To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation).
-
docs/docs/concepts/example_selectors.mdx

Lines changed: 1 addition & 1 deletion
@@ -17,4 +17,4 @@ Sometimes these examples are hardcoded into the prompt, but for more advanced si
 
 ## Related resources
 
-* [Example selector how-to guides](/docs/how_to/#example-selectors)
+* [Example selector how-to guides](/docs/how_to/#example-selectors)

0 commit comments
