
Commit a02ad3d

docs: formatting cleanup (#32188)
* formatting cleaning
* make `init_chat_model` more prominent in list of guides
1 parent 0c4054a commit a02ad3d

File tree

8 files changed: +87 -100 lines changed


docs/docs/concepts/architecture.mdx

Lines changed: 1 addition & 2 deletions
@@ -20,8 +20,7 @@ LangChain is a framework that consists of a number of packages.
 This package contains base abstractions for different components and ways to compose them together.
 The interfaces for core components like chat models, vector stores, tools and more are defined here.
-No third-party integrations are defined here.
-The dependencies are very lightweight.
+**No third-party integrations are defined here.** The dependencies are kept purposefully very lightweight.
 
 ## langchain

docs/docs/how_to/chat_models_universal_init.ipynb

Lines changed: 22 additions & 41 deletions
@@ -25,7 +25,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"%pip install -qU langchain>=0.2.8 langchain-openai langchain-anthropic langchain-google-vertexai"
+"%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-genai"
 ]
 },
 {
@@ -38,7 +38,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 2,
+"execution_count": 5,
 "id": "79e14913-803c-4382-9009-5c6af3d75d35",
 "metadata": {
 "execution": {
@@ -49,45 +49,26 @@
 }
 },
 "outputs": [
-{
-"name": "stderr",
-"output_type": "stream",
-"text": [
-"/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_95293/571506279.py:4: LangChainBetaWarning: The function `init_chat_model` is in beta. It is actively being worked on, so the API may change.\n",
-" gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n"
-]
-},
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\n",
-"\n"
-]
-},
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"Claude Opus: My name is Claude. It's nice to meet you!\n",
-"\n"
-]
-},
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Gemini 1.5: I am a large language model, trained by Google. \n",
+"GPT-4o: I’m called ChatGPT. How can I assist you today?\n",
 "\n",
-"I don't have a name like a person does. You can call me Bard if you like! 😊 \n",
+"Claude Opus: My name is Claude. It's nice to meet you!\n",
 "\n",
+"Gemini 2.5: I do not have a name. I am a large language model, trained by Google.\n",
 "\n"
 ]
 }
 ],
 "source": [
 "from langchain.chat_models import init_chat_model\n",
 "\n",
+"# Don't forget to set your environment variables for the API keys of the respective providers!\n",
+"# For example, you can set them in your terminal or in a .env file:\n",
+"# export OPENAI_API_KEY=\"your_openai_api_key\"\n",
+"\n",
 "# Returns a langchain_openai.ChatOpenAI instance.\n",
 "gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n",
 "# Returns a langchain_anthropic.ChatAnthropic instance.\n",
@@ -96,13 +77,13 @@
 ")\n",
 "# Returns a langchain_google_vertexai.ChatVertexAI instance.\n",
 "gemini_15 = init_chat_model(\n",
-" \"gemini-1.5-pro\", model_provider=\"google_vertexai\", temperature=0\n",
+" \"gemini-2.5-pro\", model_provider=\"google_genai\", temperature=0\n",
 ")\n",
 "\n",
 "# Since all model integrations implement the ChatModel interface, you can use them in the same way.\n",
 "print(\"GPT-4o: \" + gpt_4o.invoke(\"what's your name\").content + \"\\n\")\n",
 "print(\"Claude Opus: \" + claude_opus.invoke(\"what's your name\").content + \"\\n\")\n",
-"print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
+"print(\"Gemini 2.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")"
 ]
 },
 {
@@ -117,7 +98,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 3,
+"execution_count": null,
 "id": "0378ccc6-95bc-4d50-be50-fccc193f0a71",
 "metadata": {
 "execution": {
@@ -131,7 +112,7 @@
 "source": [
 "gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n",
 "claude_opus = init_chat_model(\"claude-3-opus-20240229\", temperature=0)\n",
-"gemini_15 = init_chat_model(\"gemini-1.5-pro\", temperature=0)"
+"gemini_15 = init_chat_model(\"gemini-2.5-pro\", temperature=0)"
 ]
 },
 {
@@ -146,7 +127,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": 7,
 "id": "6c037f27-12d7-4e83-811e-4245c0e3ba58",
 "metadata": {
 "execution": {
@@ -160,10 +141,10 @@
 {
 "data": {
 "text/plain": [
-"AIMessage(content=\"I'm an AI created by OpenAI, and I don't have a personal name. How can I assist you today?\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 11, 'total_tokens': 34}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': 'fp_25624ae3a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-b41df187-4627-490d-af3c-1c96282d3eb0-0', usage_metadata={'input_tokens': 11, 'output_tokens': 23, 'total_tokens': 34})"
+"AIMessage(content='I’m called ChatGPT. How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_07871e2ad8', 'id': 'chatcmpl-BwCyyBpMqn96KED6zPhLm4k9SQMiQ', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--fada10c3-4128-406c-b83d-a850d16b365f-0', usage_metadata={'input_tokens': 11, 'output_tokens': 13, 'total_tokens': 24, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
 ]
 },
-"execution_count": 4,
+"execution_count": 7,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -178,7 +159,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": 8,
 "id": "321e3036-abd2-4e1f-bcc6-606efd036954",
 "metadata": {
 "execution": {
@@ -192,10 +173,10 @@
 {
 "data": {
 "text/plain": [
-"AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01Fx9P74A7syoFkwE73CdMMY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 11, 'output_tokens': 15}}, id='run-a0fd2bbd-3b7e-46bf-8d69-a48c7e60b03c-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26})"
+"AIMessage(content=\"My name is Claude. It's nice to meet you!\", additional_kwargs={}, response_metadata={'id': 'msg_01VDGrG9D6yefanbBG9zPJrc', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 11, 'output_tokens': 15, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-5-sonnet-20240620'}, id='run--f0156087-debf-4b4b-9aaa-f3328a81ef92-0', usage_metadata={'input_tokens': 11, 'output_tokens': 15, 'total_tokens': 26, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})"
 ]
 },
-"execution_count": 5,
+"execution_count": 8,
 "metadata": {},
 "output_type": "execute_result"
 }
@@ -394,9 +375,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "poetry-venv-2",
+"display_name": "langchain",
 "language": "python",
-"name": "poetry-venv-2"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -408,7 +389,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.9"
+"version": "3.10.16"
 }
 },
 "nbformat": 4,
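The hunks above also show `init_chat_model` being called without `model_provider` (e.g. `init_chat_model("gpt-4o", temperature=0)`), in which case the provider is inferred from the model name. As a rough illustration of the idea only, here is a minimal prefix-based inference sketch; the function name `infer_model_provider` and the prefix table are hypothetical, not LangChain's actual implementation:

```python
# Hypothetical sketch of prefix-based provider inference (NOT LangChain's
# actual logic) mapping model-name prefixes to provider identifiers.
_PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google_genai",
}

def infer_model_provider(model: str):
    """Return the likely provider for a model name, or None if unknown."""
    for prefix, provider in _PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return None

for name in ["gpt-4o", "claude-3-opus-20240229", "gemini-2.5-pro"]:
    print(name, "->", infer_model_provider(name))
```

In the real API, passing `model_provider` explicitly (as the first notebook cell does) sidesteps inference entirely.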

docs/docs/how_to/index.mdx

Lines changed: 2 additions & 2 deletions
@@ -34,6 +34,8 @@ These are the core building blocks you can use when building applications.
 [Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message.
 See [supported integrations](/docs/integrations/chat/) for details on getting started with chat models from a specific provider.
 
+- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
+- [How to: work with local models](/docs/how_to/local_llms)
 - [How to: do function/tool calling](/docs/how_to/tool_calling)
 - [How to: get models to return structured output](/docs/how_to/structured_output)
 - [How to: cache model responses](/docs/how_to/chat_model_caching)
@@ -48,8 +50,6 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
 - [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot)
 - [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific)
 - [How to: force a specific tool call](/docs/how_to/tool_choice)
-- [How to: work with local models](/docs/how_to/local_llms)
-- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
 - [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
 
 ### Messages

docs/docs/how_to/local_llms.ipynb

Lines changed: 16 additions & 10 deletions
@@ -13,24 +13,24 @@
 "\n",
 "This has at least two important benefits:\n",
 "\n",
-"1. `Privacy`: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n",
-"2. `Cost`: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n",
+"1. **Privacy**: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n",
+"2. **Cost**: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n",
 "\n",
 "## Overview\n",
 "\n",
 "Running an LLM locally requires a few things:\n",
 "\n",
-"1. `Open-source LLM`: An open-source LLM that can be freely modified and shared \n",
-"2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n",
+"1. **Open-source LLM**: An open-source LLM that can be freely modified and shared \n",
+"2. **Inference**: Ability to run this LLM on your device w/ acceptable latency\n",
 "\n",
 "### Open-source LLMs\n",
 "\n",
 "Users can now gain access to a rapidly growing set of [open-source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better). \n",
 "\n",
 "These LLMs can be assessed across at least two dimensions (see figure):\n",
 " \n",
-"1. `Base model`: What is the base-model and how was it trained?\n",
-"2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
+"1. **Base model**: What is the base-model and how was it trained?\n",
+"2. **Fine-tuning approach**: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
 "\n",
 "![Image description](../../static/img/OSS_LLM_overview.png)\n",
 "\n",
@@ -51,8 +51,8 @@
 "\n",
 "In general, these frameworks will do a few things:\n",
 "\n",
-"1. `Quantization`: Reduce the memory footprint of the raw model weights\n",
-"2. `Efficient implementation for inference`: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n",
+"1. **Quantization**: Reduce the memory footprint of the raw model weights\n",
+"2. **Efficient implementation for inference**: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n",
 "\n",
 "In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
 "\n",
@@ -679,11 +679,17 @@
 "\n",
 "In general, use cases for local LLMs can be driven by at least two factors:\n",
 "\n",
-"* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n",
-"* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
+"* **Privacy**: private data (e.g., journals, etc) that a user does not want to share \n",
+"* **Cost**: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
 "\n",
 "In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs."
 ]
+},
+{
+"cell_type": "markdown",
+"id": "14c2c170",
+"metadata": {},
+"source": []
 }
 ],
 "metadata": {
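The quantization point in the diff above ("reduce the memory footprint of the raw model weights") can be made concrete with a back-of-the-envelope estimate. This sketch covers weights only (KV cache, activations, and runtime overhead are extra), and the helper name `weight_memory_gb` is an illustration, not part of any library:

```python
# Approximate memory needed to hold model weights alone: each of n_params
# weights is stored at bits_per_weight precision; 8 bits per byte, 1e9 bytes
# per (decimal) GB. KV cache, activations, and overhead are NOT included.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(7e9, bits):.1f} GB")
# 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
```

This is why 4-bit quantization is what typically brings a 7B model within reach of a consumer laptop's RAM.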

libs/core/langchain_core/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 and more are defined here. The universal invocation protocol (Runnables) along with
 a syntax for combining components (LangChain Expression Language) are also defined here.
 
-No third-party integrations are defined here. The dependencies are kept purposefully
+**No third-party integrations are defined here.** The dependencies are kept purposefully
 very lightweight.
 """
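The docstring edited above mentions Runnables (a universal invocation protocol) and LangChain Expression Language (a syntax for combining components). The concept can be sketched in plain Python; `ToyRunnable` is a made-up illustration of the idea, not langchain_core's actual classes:

```python
# Toy illustration of the Runnable concept: every component exposes a common
# invoke() method, and `|` composes two components into a new one that pipes
# the first component's output into the second. NOT langchain_core's code.
class ToyRunnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # self | other: run self first, then feed its result to other.
        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))

chain = ToyRunnable(str.strip) | ToyRunnable(str.upper)
print(chain.invoke("  hello  "))  # prints: HELLO
```

Because every component shares the same `invoke()` surface, chains compose without caring what each step is, which is the same property the real `init_chat_model` guide relies on when swapping chat models.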
