feat: nvidia triton embedding integration #19226
Open — vpcano wants to merge 11 commits into run-llama:main from vpcano:feature/triton-embedding
Changes shown are from 8 of the 11 commits:
- f6a0c8c — nvidia triton embedding
- f7b6c09 — docs
- 82833f6 — version fix
- 981bea5 — Merge branch 'main' into feature/triton-embedding
- b8a0284 — triton embedding readme
- cf10bec — fix triton embedding readme
- da72bed — uv run make format
- 0b8392f — Merge branch 'main' into feature/triton-embedding
- 051af0b — implement async calls (logan-markewich)
- 4a24ee9 — Merge branch 'feature/triton-embedding' of https://github.com/vpcano/…
- 2cac6b2 — Merge branch 'main' into feature/triton-embedding
New file (4 additions) — mkdocstrings API reference page:

```
::: llama_index.embeddings.nvidia_triton
    options:
      members:
        - NvidiaTritonEmbedding
```
`llama-index-integrations/embeddings/llama-index-embeddings-nvidia-triton/.gitignore` (new file, 153 additions):

```
llama_index/_static
.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
```
`llama-index-integrations/embeddings/llama-index-embeddings-nvidia-triton/LICENSE` (new file, 21 additions):

```
The MIT License

Copyright (c) Jerry Liu

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
`llama-index-integrations/embeddings/llama-index-embeddings-nvidia-triton/Makefile` (new file, 17 additions):

```makefile
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help:	## Show all Makefile targets.
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format:	## Run code autoformatters (black).
	pre-commit install
	git ls-files | xargs pre-commit run black --files

lint:	## Run linters: pre-commit (black, ruff, codespell) and mypy
	pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test:	## Run tests via pytest.
	pytest tests

watch-docs:	## Build and watch documentation.
	sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
```
`llama-index-integrations/embeddings/llama-index-embeddings-nvidia-triton/README.md` (new file, 25 additions):

````markdown
# LlamaIndex Embeddings Integration: Nvidia Triton

This integration allows LlamaIndex to use embedding models hosted on a [Triton Inference Server](https://github.com/triton-inference-server/server).

## Usage

```python
from llama_index.embeddings.nvidia_triton import NvidiaTritonEmbedding

embedding = NvidiaTritonEmbedding(
    model_name="text_embeddings",
    server_url="localhost:8000",
    client_kwargs={"ssl": False},
)

print(embedding.get_text_embedding("hello world"))
```

Parameters:

- `model_name`: the name of the embedding model.
- `server_url`: the URL of the Triton Inference Server, normally its HTTP port.
- `client_kwargs`: additional arguments passed to the `tritonclient.http.InferenceServerClient` instance, such as timeouts, SSL settings, etc.
- `input_tensor_name`: the name of the input tensor the embedding model expects. Default: `INPUT_TEXT`.
- `output_tensor_name`: the name of the output tensor the embedding model serves the embeddings on. Default: `OUTPUT_EMBEDDINGS`.
````
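For readers curious how these parameters map onto the wire format: the integration packs the batch of input strings into a one-dimensional Triton `BYTES` tensor named by `input_tensor_name`. A minimal sketch of just the packing step, using only numpy (no server needed; the texts and shapes here are illustrative, not real model I/O):

```python
import numpy as np

# Illustrative only: mimic how a batch of texts is packed for a Triton
# BYTES input tensor (the integration does this via tritonclient's
# InferInput.set_data_from_numpy).
texts = ["hello world", "goodbye world"]
batch = np.array(texts, dtype=np.object_)

# One array element per input string; dtype 'object' carries raw strings.
print(batch.shape)  # (2,)
```

The batch shape is `[len(texts)]`, which matches the shape the connector declares when it constructs the input tensor.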
`...ngs/llama-index-embeddings-nvidia-triton/llama_index/embeddings/nvidia_triton/__init__.py` (new file, 3 additions):

```python
from llama_index.embeddings.nvidia_triton.base import NvidiaTritonEmbedding

__all__ = ["NvidiaTritonEmbedding"]
```
`...eddings/llama-index-embeddings-nvidia-triton/llama_index/embeddings/nvidia_triton/base.py` (new file, 132 additions):

````python
from typing import Any, Dict, List, Optional

from llama_index.core.base.embeddings.base import BaseEmbedding
from llama_index.core.constants import DEFAULT_EMBED_BATCH_SIZE
from llama_index.core.callbacks.base import CallbackManager
from llama_index.core.bridge.pydantic import PrivateAttr

import random
import numpy as np
from tritonclient.http import (
    InferenceServerClient,
    InferInput,
    InferRequestedOutput,
)

DEFAULT_INPUT_TENSOR_NAME = "INPUT_TEXT"
DEFAULT_OUTPUT_TENSOR_NAME = "OUTPUT_EMBEDDINGS"


class NvidiaTritonEmbedding(BaseEmbedding):
    """
    Nvidia Triton Embedding.

    This connector allows llama_index to interact with embedding models
    hosted on a Triton Inference Server over HTTP.

    [Triton Inference Server GitHub](https://github.com/triton-inference-server/server)

    Examples:
        `pip install llama-index-embeddings-nvidia-triton`

        ```python
        from llama_index.embeddings.nvidia_triton import NvidiaTritonEmbedding

        # Ensure a Triton server instance is running and provide the
        # correct HTTP URL for your Triton server instance
        triton_url = "localhost:8000"

        # Instantiate the NvidiaTritonEmbedding class
        emb_client = NvidiaTritonEmbedding(
            server_url=triton_url,
            model_name="text_embeddings",
        )

        # Get a text embedding
        embedding = emb_client.get_text_embedding("hello world")
        print(f"Embedding for 'hello world': {embedding}")
        print(f"Embedding length: {len(embedding)}")
        ```
    """

    _client: InferenceServerClient = PrivateAttr()
    _input_tensor_name: str = PrivateAttr()
    _output_tensor_name: str = PrivateAttr()

    def __init__(
        self,
        model_name: str,
        server_url: str = "localhost:8000",
        embed_batch_size: int = DEFAULT_EMBED_BATCH_SIZE,
        input_tensor_name: str = DEFAULT_INPUT_TENSOR_NAME,
        output_tensor_name: str = DEFAULT_OUTPUT_TENSOR_NAME,
        callback_manager: Optional[CallbackManager] = None,
        client_kwargs: Optional[Dict[str, Any]] = None,
        **kwargs: Any,
    ) -> None:
        super().__init__(
            model_name=model_name,
            embed_batch_size=embed_batch_size,
            callback_manager=callback_manager,  # type: ignore
            **kwargs,
        )

        self._client = InferenceServerClient(
            url=server_url,
            **(client_kwargs or {}),
        )
        self._input_tensor_name = input_tensor_name
        self._output_tensor_name = output_tensor_name

    @classmethod
    def class_name(cls) -> str:
        return "NvidiaTritonEmbedding"

    def _get_query_embedding(self, query: str) -> List[float]:
        """Get query embedding."""
        return self.get_general_text_embeddings([query])[0]

    async def _aget_query_embedding(self, query: str) -> List[float]:
        """The asynchronous version of _get_query_embedding."""
        embs = await self.aget_general_text_embeddings([query])
        return embs[0]

    def _get_text_embedding(self, text: str) -> List[float]:
        """Get text embedding."""
        return self.get_general_text_embeddings([text])[0]

    async def _aget_text_embedding(self, text: str) -> List[float]:
        """Asynchronously get text embedding."""
        embs = await self.aget_general_text_embeddings([text])
        return embs[0]

    def _get_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Get text embeddings."""
        return self.get_general_text_embeddings(texts)

    async def _aget_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Asynchronously get text embeddings."""
        return await self.aget_general_text_embeddings(texts)

    def get_general_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Get Triton embedding."""
        # Pack the batch of texts into a 1-D BYTES tensor for the server.
        input_data = InferInput(self._input_tensor_name, [len(texts)], "BYTES")
        input_data.set_data_from_numpy(np.array(texts, dtype=np.object_))
        output_data = InferRequestedOutput(self._output_tensor_name)
        request_id = str(random.randint(1, 9999999))  # nosec

        response = self._client.infer(
            model_name=self.model_name,
            inputs=[input_data],
            outputs=[output_data],
            request_id=request_id,
        )

        embeddings = response.as_numpy(self._output_tensor_name)
        if embeddings is None:
            raise ValueError("No embeddings returned from Triton server.")
        return [e.tolist() for e in embeddings]

    async def aget_general_text_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Asynchronously get Triton embedding."""
        # The HTTP client is synchronous, so this delegates to the sync path.
        return self.get_general_text_embeddings(texts)
````
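The tail of `get_general_text_embeddings` is plain numpy post-processing: the server hands back a `(batch, dim)` array, and each row is converted to a Python list of floats. A sketch with a stand-in array (the values are made up, not real model output):

```python
import numpy as np

# Stand-in for response.as_numpy(self._output_tensor_name): one embedding
# row per input text, shape (batch, dim).
embeddings = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], dtype=np.float32)

# Convert each numpy row to a plain Python list of floats,
# matching the List[List[float]] return type of the connector.
result = [e.tolist() for e in embeddings]
print(len(result), len(result[0]))  # 2 3
```

This is why a `None` result from `as_numpy` is treated as an error: it means the requested output tensor name was absent from the response.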