
Commit d40e340

chore: attribute package change versions (#33854)
Needed to disambiguate versions within inherited docs.
1 parent 9a09ed0 commit d40e340

30 files changed: +228 additions, -169 deletions

libs/core/langchain_core/callbacks/usage.py

Lines changed: 2 additions & 2 deletions
@@ -43,7 +43,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
             'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}
        ```

-    !!! version-added "Added in version 0.3.49"
+    !!! version-added "Added in `langchain-core` 0.3.49"

    """

@@ -134,7 +134,7 @@ def get_usage_metadata_callback(
        }
        ```

-    !!! version-added "Added in version 0.3.49"
+    !!! version-added "Added in `langchain-core` 0.3.49"

    """
    usage_metadata_callback_var: ContextVar[UsageMetadataCallbackHandler | None] = (
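
For reference, a minimal sketch of how the helper annotated above is typically used; the model setup is an illustrative assumption, not part of this diff:

```python
from langchain.chat_models import init_chat_model  # assumed available; any chat model works
from langchain_core.callbacks import get_usage_metadata_callback

# Aggregate token usage across several calls made inside the context manager.
model = init_chat_model("gpt-4o-mini")  # hypothetical model choice
with get_usage_metadata_callback() as cb:
    model.invoke("Hello")
    model.invoke("Goodbye")

# cb.usage_metadata maps model names to aggregated usage dicts, e.g.
# {'gpt-4o-mini': {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ..., ...}}
print(cb.usage_metadata)
```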

libs/core/langchain_core/indexing/api.py

Lines changed: 6 additions & 6 deletions
@@ -298,7 +298,7 @@ def index(
    For the time being, documents are indexed using their hashes, and users
    are not able to specify the uid of the document.

-    !!! warning "Behavior changed in 0.3.25"
+    !!! warning "Behavior changed in `langchain-core` 0.3.25"
        Added `scoped_full` cleanup mode.

    !!! warning
@@ -349,7 +349,7 @@ def index(
        key_encoder: Hashing algorithm to use for hashing the document content and
            metadata. Options include "blake2b", "sha256", and "sha512".

-            !!! version-added "Added in version 0.3.66"
+            !!! version-added "Added in `langchain-core` 0.3.66"

        key_encoder: Hashing algorithm to use for hashing the document.
            If not provided, a default encoder using SHA-1 will be used.
@@ -366,7 +366,7 @@ def index(
            method of the `VectorStore` or the upsert method of the DocumentIndex.
            For example, you can use this to specify a custom vector_field:
            upsert_kwargs={"vector_field": "embedding"}
-            !!! version-added "Added in version 0.3.10"
+            !!! version-added "Added in `langchain-core` 0.3.10"

    Returns:
        Indexing result which contains information about how many documents
@@ -636,7 +636,7 @@ async def aindex(
    For the time being, documents are indexed using their hashes, and users
    are not able to specify the uid of the document.

-    !!! warning "Behavior changed in 0.3.25"
+    !!! warning "Behavior changed in `langchain-core` 0.3.25"
        Added `scoped_full` cleanup mode.

    !!! warning
@@ -687,7 +687,7 @@ async def aindex(
        key_encoder: Hashing algorithm to use for hashing the document content and
            metadata. Options include "blake2b", "sha256", and "sha512".

-            !!! version-added "Added in version 0.3.66"
+            !!! version-added "Added in `langchain-core` 0.3.66"

        key_encoder: Hashing algorithm to use for hashing the document.
            If not provided, a default encoder using SHA-1 will be used.
@@ -704,7 +704,7 @@ async def aindex(
            method of the `VectorStore` or the upsert method of the DocumentIndex.
            For example, you can use this to specify a custom vector_field:
            upsert_kwargs={"vector_field": "embedding"}
-            !!! version-added "Added in version 0.3.10"
+            !!! version-added "Added in `langchain-core` 0.3.10"

    Returns:
        Indexing result which contains information about how many documents
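
For context, a sketch of the `index()` call these docstrings describe, assuming a record manager and vector store have been constructed elsewhere; the component choices are illustrative, not part of this diff:

```python
from langchain_core.documents import Document
from langchain_core.indexing import index

docs = [Document(page_content="kitty", metadata={"source": "kitty.txt"})]

result = index(
    docs,
    record_manager,         # assumed, e.g. a SQLRecordManager instance
    vector_store,           # assumed, any VectorStore supporting add_documents
    cleanup="scoped_full",  # cleanup mode added in langchain-core 0.3.25
    source_id_key="source",
    key_encoder="sha256",   # hashing option added in langchain-core 0.3.66
)
# result reports num_added / num_updated / num_skipped / num_deleted
```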

libs/core/langchain_core/language_models/_utils.py

Lines changed: 1 addition & 1 deletion
@@ -139,7 +139,7 @@ def _normalize_messages(
          directly; this may change in the future
        - LangChain v0 standard content blocks for backward compatibility

-    !!! warning "Behavior changed in 1.0.0"
+    !!! warning "Behavior changed in `langchain-core` 1.0.0"
        In previous versions, this function returned messages in LangChain v0 format.
        Now, it returns messages in LangChain v1 format, which upgraded chat models now
        expect to receive when passing back in message history. For backward

libs/core/langchain_core/language_models/base.py

Lines changed: 32 additions & 18 deletions
@@ -195,15 +195,22 @@ def generate_prompt(
            type (e.g., pure text completion models vs chat models).

        Args:
-            prompts: List of `PromptValue` objects. A `PromptValue` is an object that
-                can be converted to match the format of any language model (string for
-                pure text generation models and `BaseMessage` objects for chat models).
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            prompts: List of `PromptValue` objects.
+
+                A `PromptValue` is an object that can be converted to match the format
+                of any language model (string for pure text generation models and
+                `BaseMessage` objects for chat models).
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generation` objects for
@@ -232,15 +239,22 @@ async def agenerate_prompt(
            type (e.g., pure text completion models vs chat models).

        Args:
-            prompts: List of `PromptValue` objects. A `PromptValue` is an object that
-                can be converted to match the format of any language model (string for
-                pure text generation models and `BaseMessage` objects for chat models).
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            prompts: List of `PromptValue` objects.
+
+                A `PromptValue` is an object that can be converted to match the format
+                of any language model (string for pure text generation models and
+                `BaseMessage` objects for chat models).
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generation` objects for
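
A short sketch of `generate_prompt` with a `PromptValue`, as described in the docstrings above; `model` is assumed to be any instantiated chat or completion model:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
prompt_value = prompt.invoke({"topic": "cats"})  # a PromptValue

# generate_prompt accepts a list of PromptValue objects and returns an LLMResult;
# stop substrings truncate the output at their first occurrence.
result = model.generate_prompt([prompt_value], stop=["\n\n"])
print(result.generations[0][0].text)
```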

libs/core/langchain_core/language_models/chat_models.py

Lines changed: 24 additions & 14 deletions
@@ -332,7 +332,7 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
    [`langchain-openai`](https://pypi.org/project/langchain-openai)) can also use this
    field to roll out new content formats in a backward-compatible way.

-    !!! version-added "Added in version 1.0"
+    !!! version-added "Added in `langchain-core` 1.0"

    """
@@ -845,16 +845,21 @@ def generate(

        Args:
            messages: List of list of messages.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
            tags: The tags to apply.
            metadata: The metadata to apply.
            run_name: The name of the run.
            run_id: The ID of the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generations` for each
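
A sketch of the batch `generate` call documented above, assuming `model` is an instantiated chat model; the prompts and tags are illustrative:

```python
from langchain_core.messages import HumanMessage

result = model.generate(
    [
        [HumanMessage(content="Name one ocean.")],
        [HumanMessage(content="Name one desert.")],
    ],
    stop=["\n"],            # output is cut off at the first stop substring
    tags=["docs-example"],  # run metadata, purely illustrative
)
for generations in result.generations:
    print(generations[0].text)
```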
@@ -963,16 +968,21 @@ async def agenerate(

        Args:
            messages: List of list of messages.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
            tags: The tags to apply.
            metadata: The metadata to apply.
            run_name: The name of the run.
            run_id: The ID of the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            An `LLMResult`, which contains a list of candidate `Generations` for each
@@ -1629,7 +1639,7 @@ class AnswerWithJustification(BaseModel):
            # }
            ```

-        !!! warning "Behavior changed in 0.2.26"
+        !!! warning "Behavior changed in `langchain-core` 0.2.26"
            Added support for TypedDict class.

        """  # noqa: E501

libs/core/langchain_core/language_models/llms.py

Lines changed: 66 additions & 36 deletions
@@ -651,9 +651,12 @@ def _generate(

        Args:
            prompts: The prompts to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of the stop substrings.
-                If stop tokens are not supported consider raising NotImplementedError.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+
+                If stop tokens are not supported consider raising `NotImplementedError`.
            run_manager: Callback manager for the run.

        Returns:
@@ -671,9 +674,12 @@ async def _agenerate(

        Args:
            prompts: The prompts to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of the stop substrings.
-                If stop tokens are not supported consider raising NotImplementedError.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+
+                If stop tokens are not supported consider raising `NotImplementedError`.
            run_manager: Callback manager for the run.

        Returns:
@@ -705,11 +711,14 @@ def _stream(

        Args:
            prompt: The prompt to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
            run_manager: Callback manager for the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Yields:
            Generation chunks.
@@ -731,11 +740,14 @@ async def _astream(

        Args:
            prompt: The prompt to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
            run_manager: Callback manager for the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Yields:
            Generation chunks.
@@ -846,10 +858,14 @@ def generate(

        Args:
            prompts: List of string prompts.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
            tags: List of tags to associate with each prompt. If provided, the length
                of the list must match the length of the prompts list.
            metadata: List of metadata dictionaries to associate with each prompt. If
@@ -859,8 +875,9 @@ def generate(
                length of the list must match the length of the prompts list.
            run_id: List of run IDs to associate with each prompt. If provided, the
                length of the list must match the length of the prompts list.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Raises:
            ValueError: If prompts is not a list.
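
A sketch of the batch completion call described above; `llm` is assumed to be an instance of any `BaseLLM` subclass, and the prompts are illustrative:

```python
result = llm.generate(
    ["Q: What is 2 + 2?\nA:", "Q: What color is the sky?\nA:"],
    stop=["\n"],                  # truncate each completion at the first newline
    tags=[["math"], ["trivia"]],  # one tag list per prompt, matching the prompts length
)
for prompt_generations in result.generations:
    print(prompt_generations[0].text)
```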
@@ -1116,10 +1133,14 @@ async def agenerate(

        Args:
            prompts: List of string prompts.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of these substrings.
-            callbacks: `Callbacks` to pass through. Used for executing additional
-                functionality, such as logging or streaming, throughout generation.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+            callbacks: `Callbacks` to pass through.
+
+                Used for executing additional functionality, such as logging or
+                streaming, throughout generation.
            tags: List of tags to associate with each prompt. If provided, the length
                of the list must match the length of the prompts list.
            metadata: List of metadata dictionaries to associate with each prompt. If
@@ -1129,8 +1150,9 @@ async def agenerate(
                length of the list must match the length of the prompts list.
            run_id: List of run IDs to associate with each prompt. If provided, the
                length of the list must match the length of the prompts list.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Raises:
            ValueError: If the length of `callbacks`, `tags`, `metadata`, or
@@ -1410,12 +1432,16 @@ def _call(

        Args:
            prompt: The prompt to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of the stop substrings.
-                If stop tokens are not supported consider raising NotImplementedError.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+
+                If stop tokens are not supported consider raising `NotImplementedError`.
            run_manager: Callback manager for the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            The model output as a string. SHOULD NOT include the prompt.
@@ -1436,12 +1462,16 @@ async def _acall(

        Args:
            prompt: The prompt to generate from.
-            stop: Stop words to use when generating. Model output is cut off at the
-                first occurrence of any of the stop substrings.
-                If stop tokens are not supported consider raising NotImplementedError.
+            stop: Stop words to use when generating.
+
+                Model output is cut off at the first occurrence of any of these
+                substrings.
+
+                If stop tokens are not supported consider raising `NotImplementedError`.
            run_manager: Callback manager for the run.
-            **kwargs: Arbitrary additional keyword arguments. These are usually passed
-                to the model provider API call.
+            **kwargs: Arbitrary additional keyword arguments.
+
+                These are usually passed to the model provider API call.

        Returns:
            The model output as a string. SHOULD NOT include the prompt.
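
To illustrate the `_call` override point these docstrings document, a minimal custom LLM sketch; the echo-style behavior stands in for a real provider API call and is purely hypothetical:

```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class EchoLLM(LLM):
    """Toy LLM that reverses the prompt; a stand-in for a provider-backed model."""

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(
        self,
        prompt: str,
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        text = prompt[::-1]  # stand-in for the provider response
        if stop:
            # Cut the output at the first occurrence of any stop substring.
            cut = min((text.find(s) for s in stop if s in text), default=-1)
            if cut != -1:
                text = text[:cut]
        return text


print(EchoLLM().invoke("hello", stop=["l"]))
```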
