`NvidiaDocumentEmbedder` enriches documents with an embedding of their content.

You can use this component with self-hosted models using NVIDIA NIM or models hosted on the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover).

To embed a string, use [`NvidiaTextEmbedder`](nvidiatextembedder.mdx).

## Usage

To start using `NvidiaDocumentEmbedder`, install the `nvidia-haystack` package:

```shell
pip install nvidia-haystack
```
You can use `NvidiaDocumentEmbedder` with all the embedding models available on the [NVIDIA API Catalog](https://docs.api.nvidia.com/nim/reference) or with a model deployed using NVIDIA NIM. For more information, refer to [Deploying Text Embedding Models](https://developer.nvidia.com/docs/nemo-microservices/embedding/source/deploy.html).

### On its own

To use models from the NVIDIA API Catalog, you need to specify the `api_url` and your API key. You can get your API key from the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover).

`NvidiaDocumentEmbedder` uses the `NVIDIA_API_KEY` environment variable by default. Otherwise, you can pass an API key at initialization with the `api_key` parameter:

```python
from haystack import Document
from haystack.utils.auth import Secret
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

documents = [
    Document(content="A transformer is a deep learning architecture"),
    Document(content="Large language models use transformer architectures"),
]

embedder = NvidiaDocumentEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    api_url="https://integrate.api.nvidia.com/v1",
    api_key=Secret.from_token("<your-api-key>"),
)
embedder.warm_up()

result = embedder.run(documents=documents)
print(result["documents"])
print(result["meta"])
```
To use a locally deployed model, set the `api_url` to your localhost and set `api_key` to `None`:

```python
from haystack import Document
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

documents = [
    Document(content="A transformer is a deep learning architecture"),
    Document(content="Large language models use transformer architectures"),
]

embedder = NvidiaDocumentEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    api_url="http://localhost:9999/v1",
    api_key=None,
)
embedder.warm_up()

result = embedder.run(documents=documents)
print(result["documents"])
print(result["meta"])
```
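Once documents carry embeddings, you can compare them directly. As a plain-Python illustration, independent of any NVIDIA service, here is the standard cosine-similarity formula applied to two embedding vectors:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```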
### In a pipeline

The following example shows how to use `NvidiaDocumentEmbedder` in the indexing part of a RAG pipeline, embedding documents and writing them to a document store:

```python
from haystack import Pipeline, Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.writers import DocumentWriter
from haystack_integrations.components.embedders.nvidia import NvidiaDocumentEmbedder

document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

documents = [
    Document(content="A transformer is a deep learning architecture"),
    Document(content="Large language models use transformer architectures"),
]

indexing_pipeline = Pipeline()
indexing_pipeline.add_component(
    "embedder",
    NvidiaDocumentEmbedder(
        model="nvidia/nv-embedqa-e5-v5",
        api_url="https://integrate.api.nvidia.com/v1",
    ),
)
indexing_pipeline.add_component("writer", DocumentWriter(document_store=document_store))
indexing_pipeline.connect("embedder.documents", "writer.documents")

indexing_pipeline.run({"embedder": {"documents": documents}})
```
---

This component transforms a string into a vector that captures its semantics using an NVIDIA embedding model.

`NvidiaTextEmbedder` embeds a simple string (such as a query) into a vector.

You can use this component with self-hosted models using NVIDIA NIM or models hosted on the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover).

To embed a list of documents, use [`NvidiaDocumentEmbedder`](nvidiadocumentembedder.mdx), which enriches each document with the computed embedding.

## Usage

To start using `NvidiaTextEmbedder`, install the `nvidia-haystack` package:

```shell
pip install nvidia-haystack
```

You can use `NvidiaTextEmbedder` with all the embedding models available on the [NVIDIA API Catalog](https://docs.api.nvidia.com/nim/reference) or with a model deployed using NVIDIA NIM. For more information, refer to [Deploying Text Embedding Models](https://developer.nvidia.com/docs/nemo-microservices/embedding/source/deploy.html).

### On its own

To use models from the NVIDIA API Catalog, you need to specify the `api_url` and your API key. You can get your API key from the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover).

`NvidiaTextEmbedder` uses the `NVIDIA_API_KEY` environment variable by default. Otherwise, you can pass an API key at initialization with the `api_key` parameter:

```python
from haystack.utils.auth import Secret
from haystack_integrations.components.embedders.nvidia import NvidiaTextEmbedder

embedder = NvidiaTextEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    api_url="https://integrate.api.nvidia.com/v1",
    api_key=Secret.from_token("<your-api-key>"),
)
embedder.warm_up()

result = embedder.run("A transformer is a deep learning architecture")
print(result["embedding"])
print(result["meta"])
```
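A query embedding is typically compared against document embeddings to find the best matches. The following toy sketch uses hand-written stand-in vectors (not output of the NVIDIA model) and hypothetical document names, purely to illustrate ranking by cosine similarity:

```python
import math

def cosine(a: list, b: list) -> float:
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical, hand-written embeddings standing in for model output.
query_embedding = [0.9, 0.1, 0.0]
doc_embeddings = {
    "doc_about_transformers": [0.8, 0.2, 0.1],
    "doc_about_cooking": [0.0, 0.1, 0.9],
}

# Rank document names by similarity to the query, best match first.
ranked = sorted(
    doc_embeddings,
    key=lambda name: cosine(query_embedding, doc_embeddings[name]),
    reverse=True,
)
print(ranked[0])  # the document closest to the query in embedding space
```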
To use a locally deployed model, set the `api_url` to your localhost and set `api_key` to `None`:

```python
from haystack_integrations.components.embedders.nvidia import NvidiaTextEmbedder

embedder = NvidiaTextEmbedder(
    model="nvidia/nv-embedqa-e5-v5",
    api_url="http://localhost:9999/v1",
    api_key=None,
)
embedder.warm_up()

result = embedder.run("A transformer is a deep learning architecture")
print(result["embedding"])
print(result["meta"])
```
### In a pipeline

The following example shows how to use `NvidiaTextEmbedder` in the query part of a RAG pipeline, embedding the query and retrieving the most similar documents from a document store:

```python
from haystack import Pipeline, Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack_integrations.components.embedders.nvidia import NvidiaTextEmbedder

# The store is assumed to already contain Document objects with embeddings,
# for example written by an indexing pipeline using NvidiaDocumentEmbedder.
document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")

query_pipeline = Pipeline()
query_pipeline.add_component(
    "text_embedder",
    NvidiaTextEmbedder(
        model="nvidia/nv-embedqa-e5-v5",
        api_url="https://integrate.api.nvidia.com/v1",
    ),
)
query_pipeline.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")

result = query_pipeline.run({"text_embedder": {"text": "What is a transformer?"}})
print(result["retriever"]["documents"])
```