diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/README.md b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/README.md
index c33af1645..f8f70e4d4 100644
--- a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/README.md
+++ b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/README.md
@@ -1,12 +1,12 @@
 # RAG with OCI, LangChain, and VLLMs
 
-This repository is a variant of the Retrieval Augmented Generation (RAG) tutorial available [here](https://github.com/oracle-devrel/technology-engineering/tree/main/ai-and-app-modernisation/ai-services/generative-ai-service/rag-genai). Instead of the OCI GenAI Service, it uses a local deployment of Mistral 7B Instruct v0.2 using a vLLM inference server powered by an NVIDIA A10 GPU.
+This repository is a variant of the Retrieval Augmented Generation (RAG) tutorial available [here](https://github.com/oracle-devrel/technology-engineering/tree/main/ai-and-app-modernisation/ai-services/generative-ai-service/rag-genai). Instead of the OCI GenAI Service, it uses a local deployment of Mistral 7B Instruct v0.3 using a vLLM inference server powered by an NVIDIA A10 GPU.
 
 Reviewed: 23.05.2024
 
 # When to use this asset?
 
-To run the RAG tutorial with a local deployment of Mistral 7B Instruct v0.2 using a vLLM inference server powered by an NVIDIA A10 GPU.
+To run the RAG tutorial with a local deployment of Mistral 7B Instruct v0.3 using a vLLM inference server powered by an NVIDIA A10 GPU.
 
 # How to use this asset?
 
@@ -25,7 +25,7 @@ These are the components of the Python solution being used here:
 
 * **SitemapReader**: Asynchronous sitemap reader for the web (based on beautifulsoup). Reads pages from the web based on their sitemap.xml. Other data connectors are available (Snowflake, Twitter, Wikipedia, etc.). In this example, the sitemap.xml file is stored in an OCI bucket.
 * **QdrantClient**: Python client for the Qdrant vector search engine.
-* **SentenceTransformerEmbeddings**: Sentence embeddings model object (from HuggingFace). Other options include Aleph Alpha, Cohere, MistralAI, SpaCy, etc.
+* **HuggingFaceEmbeddings**: Sentence embeddings model object (from HuggingFace). Other options include Aleph Alpha, Cohere, MistralAI, SpaCy, etc.
 * **VLLM**: Fast and easy-to-use LLM inference server.
 * **Settings**: Bundle of commonly used resources used during the indexing and querying stage in a LlamaIndex pipeline/application. In this example, we use global configuration.
 * **QdrantVectorStore**: Vector store where embeddings and docs are stored within a Qdrant collection.
@@ -82,23 +82,20 @@ For the sake of libraries and package compatibility, is highly recommended to up
     sudo apt-get update && sudo apt-get upgrade -y
     ```
 
-2. (*) Remove the current NVIDIA packages and replace them with the following versions.
+2. (*) Install the latest NVIDIA drivers.
 
     ```bash
-    sudo apt purge nvidia* libnvidia* -y
-    sudo apt-get install -y cuda-drivers-545
-    sudo apt-get install -y nvidia-kernel-open-545
-    sudo apt-get install -y cuda-toolkit-12-3
+    sudo apt install ubuntu-drivers-common
+    sudo ubuntu-drivers install --gpgpu nvidia:570-server
+    sudo apt install nvidia-utils-570-server
+    sudo reboot
     ```
 
-3. (*) We make sure that `nvidia-smi` is installed in our GPU instance. If it isn't, let's install it:
+3. (*) We make sure that `nvidia-smi` is installed in our GPU instance:
 
     ```bash
     # run nvidia-smi
     nvidia-smi
-    # if not found, install it.
-    sudo apt install nvidia-utils-510 -y
-    sudo apt install nvidia-driver-535 nvidia-dkms-535 -y
     ```
 
 4. (*) After installation, we need to add the CUDA path to the PATH environment variable, so that NVCC (the NVIDIA CUDA Compiler) is able to find the right CUDA executable for parallelizing and running code:
@@ -146,10 +143,16 @@ For the sake of libraries and package compatibility, is highly recommended to up
     conda activate rag
     pip install packaging
     pip install -r requirements.txt
-    # requirements.txt can be found in `technology-engineering/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/`
+    # requirements.txt can be found in `technology-engineering/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files`
+    ```
+
+9. Install the `gcc` compiler so that PyTorch (used by vLLM) can compile its native extensions:
+
+    ```bash
+    sudo apt install -y gcc
     ```
 
-9. Finally, reboot the instance and reconnect via SSH.
+10. Finally, reboot the instance and reconnect via SSH.
 
     ```bash
     ssh -i ubuntu@
@@ -158,10 +161,13 @@ For the sake of libraries and package compatibility, is highly recommended to up
 
 ## Running the solution
 
-1. You can run an editable file with parameters to test one query by running:
+1. You can run an editable file with parameters to test one query. First, set the `VLLM_WORKER_MULTIPROC_METHOD` environment variable and start an `ipython` interactive terminal, then run the script:
 
     ```bash
-    python rag-langchain-vllm-mistral.py
+    export VLLM_WORKER_MULTIPROC_METHOD="spawn"
+    conda install ipython
+    ipython
+    run rag-langchain-vllm-mistral.py
     ```
 
 2. If you want to run a batch of queries against Mistral with the vLLM engine, execute the following script (containing an editable list of queries):
 
@@ -210,7 +216,7 @@ Instead of:
 from langchain_community.llms import VLLM
 
 llm = VLLM(
-    model="mistralai/Mistral-7B-v0.1",
+    model="mistralai/Mistral-7B-v0.3",
     ...
     vllm_kwargs={
         ...
@@ -226,7 +232,7 @@ from langchain_community.llms import VLLMOpenAI
 llm = VLLMOpenAI(
     openai_api_key="EMPTY",
     openai_api_base="http://localhost:8000/v1",
-    model_name="mistralai/Mistral-7B-v0.1",
+    model_name="mistralai/Mistral-7B-v0.3",
     model_kwargs={
         ...
     },
diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/rag-langchain-vllm-mistral.py b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/rag-langchain-vllm-mistral.py
index 52f9bee17..c662af3a9 100644
--- a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/rag-langchain-vllm-mistral.py
+++ b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/rag-langchain-vllm-mistral.py
@@ -2,84 +2,86 @@
 from llama_index.vector_stores.qdrant import QdrantVectorStore
 from llama_index.readers.web import SitemapReader
 from qdrant_client import QdrantClient
-from langchain_community.embeddings import SentenceTransformerEmbeddings
+from langchain_huggingface import HuggingFaceEmbeddings
 from langchain_community.llms import VLLM, VLLMOpenAI
 
-loader = SitemapReader(html_to_text=True)
-# Reads pages from the web based on their sitemap.xml.
-# Other data connectors available.
+if __name__ == '__main__':
 
-documents = loader.load_data(
-    sitemap_url='https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/thisIsThePlace/o/latest.xml'
-)
-# for document in documents:
-#     print(document.metadata['Source'])
+    loader = SitemapReader(html_to_text=True)
+    # Reads pages from the web based on their sitemap.xml.
+    # Other data connectors available.
 
-# local Docker-based instance of Qdrant
-client = QdrantClient(
-    location=":memory:"
-)
+    documents = loader.load_data(
+        sitemap_url='https://objectstorage.eu-frankfurt-1.oraclecloud.com/n/frpj5kvxryk1/b/thisIsThePlace/o/latest.xml'
+    )
 
-embeddings = SentenceTransformerEmbeddings(
-    model_name="all-MiniLM-L6-v2"
-)
+    # local Docker-based instance of Qdrant
+    client = QdrantClient(
+        location=":memory:"
+    )
 
-# local instance of Mistral 7B v0.1 using vLLM inference server
-# and FlashAttention backend for performance. Model is downloaded
-# from HuggingFace (no accoutn needed).
-llm = VLLM(
-    model="mistralai/Mistral-7B-Instruct-v0.2",
-    gpu_memory_utilization=0.95,
-    tensor_parallel_size=1, # inference distributed over X GPUs
-    trust_remote_code=True, # mandatory for hf model
-    max_new_tokens=128,
-    top_k=10,
-    top_p=0.95,
-    temperature=0.8,
-    vllm_kwargs={
-        "swap_space": 1,
-        "gpu_memory_utilization": 0.95,
-        "max_model_len": 16384, # limitation due to unsufficient RAM
-        "enforce_eager": True,
-    },
-)
+    embeddings = HuggingFaceEmbeddings(
+        model_name="all-MiniLM-L6-v2"
+    )
 
-system_prompt="As a support engineer, your role is to leverage the information \
-    in the context provided. Your task is to respond to queries based strictly \
-    on the information available in the provided context. Do not create new \
-    information under any circumstances. Refrain from repeating yourself. \
-    Extract your response solely from the context mentioned above. \
-    If the context does not contain relevant information for the question, \
-    respond with 'How can I assist you with questions related to the document?"
+    # local instance of Mistral 7B v0.3 using vLLM inference server
+    # and FlashAttention backend for performance. Model is downloaded
+    # from HuggingFace (no account needed).
+    llm = VLLM(
+        model="mistralai/Mistral-7B-Instruct-v0.3",
+        gpu_memory_utilization=0.95,
+        tensor_parallel_size=1, # inference distributed over X GPUs
+        trust_remote_code=True, # mandatory for hf model
+        max_new_tokens=128,
+        top_k=10,
+        top_p=0.95,
+        temperature=0.8,
+        vllm_kwargs={
+            "tokenizer_mode": "mistral",
+            "swap_space": 1,
+            "gpu_memory_utilization": 0.95,
+            "max_model_len": 16384, # limitation due to insufficient RAM
+            "enforce_eager": False,
+        },
+    )
 
-Settings.llm = llm
-Settings.embed_model = embeddings
-Settings.chunk_size=1000
-Settings.chunk_overlap=100
-Settings.num_output = 256
-Settings.system_prompt=system_prompt
+    system_prompt="As a support engineer, your role is to leverage the information \
+        in the context provided. Your task is to respond to queries based strictly \
+        on the information available in the provided context. Do not create new \
+        information under any circumstances. Refrain from repeating yourself. \
+        Extract your response solely from the context mentioned above. \
+        If the context does not contain relevant information for the question, \
+        respond with 'How can I assist you with questions related to the document?'"
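+
+    # Register the LLM, embedding model, and chunking parameters as global
+    # LlamaIndex defaults shared by the indexing and query stages below.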
-vector_store = QdrantVectorStore(
-    client=client,
-    collection_name="ansh"
-)
+    Settings.llm = llm
+    Settings.embed_model = embeddings
+    Settings.chunk_size=1000
+    Settings.chunk_overlap=100
+    Settings.num_output = 256
+    Settings.system_prompt=system_prompt
 
-storage_context = StorageContext.from_defaults(
-    vector_store=vector_store
-)
+    vector_store = QdrantVectorStore(
+        client=client,
+        collection_name="ansh"
+    )
 
-index = VectorStoreIndex.from_documents(
-    documents,
-    storage_context=storage_context
-)
+    storage_context = StorageContext.from_defaults(
+        vector_store=vector_store
+    )
 
-query_engine = index.as_query_engine(llm=llm)
+    index = VectorStoreIndex.from_documents(
+        documents,
+        storage_context=storage_context
+    )
 
-response = query_engine.query(
-    'What are the document formats supported by the Vision service?'
-)
+    query_engine = index.as_query_engine(llm=llm)
 
-print("Response: ", response.response.strip())
-for key in response.metadata.keys():
-    print("Source: ", response.metadata[key]['Source'])
\ No newline at end of file
+    response = query_engine.query(
+        'What are the document formats supported by the Vision service?'
+    )
+
+    print("Response: ", response.response.strip())
+    for key in response.metadata.keys():
+        print("Source: ", response.metadata[key]['Source'])
\ No newline at end of file
diff --git a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/requirements.txt b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/requirements.txt
index 5d32083ea..bddb7eb09 100644
--- a/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/requirements.txt
+++ b/cloud-infrastructure/ai-infra-gpu/ai-infrastructure/rag-langchain-vllm-mistral/files/requirements.txt
@@ -1,178 +1,244 @@
-aiohttp==3.10.11
-aiosignal==1.3.1
-annotated-types==0.6.0
-anyio==4.3.0
+aiohappyeyeballs==2.6.1
+aiohttp==3.11.16
+aiosignal==1.3.2
+airportsdata==20250224
+annotated-types==0.7.0
+anyio==4.9.0
+astor==0.8.1
+asttokens @ file:///croot/asttokens_1743630435401/work
 async-timeout==4.0.3
-attrs==23.2.0
-beautifulsoup4==4.12.3
-certifi==2024.7.4
-charset-normalizer==3.3.2
+asyncpg==0.30.0
+attrs==25.3.0
+banks==2.1.1
+beautifulsoup4==4.13.3
+blake3==1.0.4
+cachetools==5.5.2
+certifi==2025.1.31
+charset-normalizer==3.4.1
 chromedriver-autoinstaller==0.6.4
-click==8.1.7
-cloudpickle==3.0.0
-cmake==3.29.2
-cssselect==1.2.0
-dataclasses-json==0.6.4
-Deprecated==1.2.14
+click==8.1.8
+cloudpickle==3.1.1
+colorama==0.4.6
+compressed-tensors==0.9.2
+cssselect==1.3.0
+cupy-cuda12x==13.4.1
+dataclasses-json==0.6.7
+decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
+defusedxml==0.7.1
+Deprecated==1.2.18
+depyf==0.18.0
+dill==0.3.9
 dirtyjson==1.0.8
 diskcache==5.6.3
 distro==1.9.0
-einops==0.7.0
-exceptiongroup==1.2.1
-fastapi==0.110.2
+dnspython==2.7.0
+einops==0.8.1
+email_validator==2.2.0
+exceptiongroup @ file:///croot/exceptiongroup_1706031385326/work
+executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
+fastapi==0.115.12
+fastapi-cli==0.0.7
+fastrlock==0.8.3
 feedfinder2==0.0.4
 feedparser==6.0.11
-filelock==3.13.4
-flash-attn==2.5.7
-frozenlist==1.4.1
-fsspec==2024.3.1
-greenlet==3.0.3
-grpcio==1.62.2
-grpcio-tools==1.62.2
+filelock==3.18.0
+filetype==1.2.0
+frozenlist==1.5.0
+fsspec==2025.3.2
+gguf==0.10.0
+greenlet==3.1.1
+griffe==1.7.2
+grpcio==1.71.0
+grpcio-tools==1.71.0
 h11==0.14.0
-h2==4.1.0
-hpack==4.0.0
-html2text==2020.1.16
-httpcore==1.0.5
-httptools==0.6.1
-httpx==0.27.0
-huggingface-hub==0.22.2
-hyperframe==6.0.1
-idna==3.7
+h2==4.2.0
+hf-xet==1.0.3
+hpack==4.1.0
+html2text==2024.2.26
+httpcore==1.0.7
+httptools==0.6.4
+httpx==0.28.1
+httpx-sse==0.4.0
+huggingface-hub==0.30.2
+hyperframe==6.1.0
+idna==3.10
+importlib_metadata==8.6.1
 interegular==0.3.3
+ipython @ file:///croot/ipython_1734548052611/work
+jedi @ file:///croot/jedi_1733987392413/work
 jieba3k==0.35.1
 Jinja2==3.1.6
-joblib==1.4.0
+jiter==0.9.0
+joblib==1.4.2
 jsonpatch==1.33
-jsonpointer==2.4
-jsonschema==4.21.1
-jsonschema-specifications==2023.12.1
-langchain==0.3.7
-langchain-community==0.3.0
-langchain-core==0.1.53
-langchain-text-splitters==0.0.1
-langsmith==0.1.51
-lark==1.1.9
-llama-hub==0.0.79.post1
-llama-index==0.12.9
-llama-index-agent-openai==0.2.3
-llama-index-cli==0.1.12
-llama-index-core==0.10.38
-llama-index-embeddings-langchain==0.1.2
-llama-index-embeddings-openai==0.1.9
-llama-index-indices-managed-llama-cloud==0.1.5
-llama-index-legacy==0.9.48
-llama-index-llms-anyscale==0.1.3
-llama-index-llms-langchain==0.1.3
-llama-index-llms-openai==0.1.16
-llama-index-multi-modal-llms-openai==0.1.5
-llama-index-program-openai==0.1.6
-llama-index-question-gen-openai==0.1.3
-llama-index-readers-file==0.1.19
-llama-index-readers-llama-parse==0.1.4
-llama-index-readers-web==0.1.10
-llama-index-vector-stores-qdrant==0.2.8
-llama-parse==0.4.2
-llamaindex-py-client==0.1.18
-llvmlite==0.42.0
-lm-format-enforcer==0.9.8
-lxml==5.2.1
-MarkupSafe==2.1.5
-marshmallow==3.21.1
+jsonpointer==3.0.0
+jsonschema==4.23.0
+jsonschema-specifications==2024.10.1
+langchain==0.3.23
+langchain-community==0.3.21
+langchain-core==0.3.51
+langchain-huggingface==0.1.2
+langchain-text-splitters==0.3.8
+langsmith==0.3.28
+lark==1.2.2
+llama-cloud==0.1.18
+llama-cloud-services==0.6.10
+llama-index==0.12.29
+llama-index-agent-openai==0.4.6
+llama-index-cli==0.4.1
+llama-index-core==0.12.30
+llama-index-embeddings-huggingface==0.5.3
+llama-index-embeddings-langchain==0.3.0
+llama-index-embeddings-openai==0.3.1
+llama-index-indices-managed-llama-cloud==0.6.11
+llama-index-llms-langchain==0.6.1
+llama-index-llms-openai==0.3.33
+llama-index-multi-modal-llms-openai==0.4.3
+llama-index-program-openai==0.3.1
+llama-index-question-gen-openai==0.3.0
+llama-index-readers-file==0.4.7
+llama-index-readers-llama-parse==0.4.0
+llama-index-readers-web==0.3.9
+llama-index-vector-stores-postgres==0.4.2
+llama-index-vector-stores-qdrant==0.6.0
+llama-parse==0.6.9
+llguidance==0.7.13
+llvmlite==0.44.0
+lm-format-enforcer==0.10.11
+lxml==5.3.2
+markdown-it-py==3.0.0
+MarkupSafe==3.0.2
+marshmallow==3.26.1
+matplotlib-inline @ file:///opt/conda/conda-bld/matplotlib-inline_1662014470464/work
+mdurl==0.1.2
+mistral_common==1.5.4
 mpmath==1.3.0
-msgpack==1.0.8
-multidict==6.0.5
+msgpack==1.1.0
+msgspec==0.19.0
+multidict==6.4.2
 mypy-extensions==1.0.0
+nanobind==2.6.1
 nest-asyncio==1.6.0
-networkx==3.3
+networkx==3.4.2
 newspaper3k==0.2.8
-ninja==1.11.1.1
-nltk==3.9
-numba==0.59.1
-numpy==1.26.4
-nvidia-cublas-cu12==12.1.3.1
-nvidia-cuda-cupti-cu12==12.1.105
-nvidia-cuda-nvrtc-cu12==12.1.105
-nvidia-cuda-runtime-cu12==12.1.105
-nvidia-cudnn-cu12==8.9.2.26
-nvidia-cufft-cu12==11.0.2.54
-nvidia-curand-cu12==10.3.2.106
-nvidia-cusolver-cu12==11.4.5.107
-nvidia-cusparse-cu12==12.1.0.106
-nvidia-ml-py==12.550.52
-nvidia-nccl-cu12==2.19.3
+ninja==1.11.1.4
+nltk==3.9.1
+numba==0.61.0
+numpy==2.1.3
+nvidia-cublas-cu12==12.4.5.8
+nvidia-cuda-cupti-cu12==12.4.127
+nvidia-cuda-nvrtc-cu12==12.4.127
+nvidia-cuda-runtime-cu12==12.4.127
+nvidia-cudnn-cu12==9.1.0.70
+nvidia-cufft-cu12==11.2.1.3
+nvidia-curand-cu12==10.3.5.147
+nvidia-cusolver-cu12==11.6.1.9
+nvidia-cusparse-cu12==12.3.1.170
+nvidia-cusparselt-cu12==0.6.2
+nvidia-nccl-cu12==2.21.5
 nvidia-nvjitlink-cu12==12.4.127
-nvidia-nvtx-cu12==12.1.105
-openai==1.23.6
-orjson==3.10.1
+nvidia-nvtx-cu12==12.4.127
+openai==1.72.0
+opencv-python-headless==4.11.0.86
+orjson==3.10.16
 outcome==1.3.0.post0
-outlines==0.0.34
-packaging==23.2
-pandas==2.2.2
-pillow==10.3.0
-playwright==1.43.0
-portalocker==2.8.2
-prometheus_client==0.20.0
-protobuf==4.25.3
-psutil==5.9.8
+outlines==0.1.11
+outlines_core==0.1.26
+packaging==24.2
+pandas==2.2.3
+parso @ file:///croot/parso_1733963305961/work
+partial-json-parser==0.2.1.1.post5
+pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
+pgvector==0.4.0
+pillow==11.1.0
+platformdirs==4.3.7
+playwright==1.51.0
+portalocker==2.10.1
+prometheus-fastapi-instrumentator==7.1.0
+prometheus_client==0.21.1
+prompt-toolkit @ file:///croot/prompt-toolkit_1704404351921/work
+propcache==0.3.1
+protobuf==5.29.4
+psutil==7.0.0
+psycopg2-binary==2.9.10
+ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
+pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
 py-cpuinfo==9.0.0
-pyaml==23.12.0
-pydantic==2.7.1
-pydantic_core==2.18.2
-pyee==11.1.0
-pypdf==4.2.0
+pycountry==24.6.1
+pydantic==2.11.3
+pydantic-settings==2.8.1
+pydantic_core==2.33.1
+pyee==12.1.1
+Pygments @ file:///croot/pygments_1684279966437/work
+pypdf==5.4.0
 PySocks==1.7.1
 python-dateutil==2.9.0.post0
-python-dotenv==1.0.1
-pytz==2024.1
-PyYAML==6.0.1
-qdrant-client==1.9.0
+python-dotenv==1.1.0
+python-json-logger==3.3.0
+python-multipart==0.0.20
+pytz==2025.2
+PyYAML==6.0.2
+pyzmq==26.4.0
+qdrant-client==1.13.3
 ray==2.43.0
-referencing==0.35.0
-regex==2024.4.16
-requests==2.32.2
-requests-file==2.0.0
-retrying==1.3.4
-rpds-py==0.18.0
-safetensors==0.4.3
-scikit-learn==1.5.0
-scipy==1.13.0
-selenium==4.20.0
-sentence-transformers==2.7.0
+referencing==0.36.2
+regex==2024.11.6
+requests==2.32.3
+requests-file==2.1.0
+requests-toolbelt==1.0.0
+rich==14.0.0
+rich-toolkit==0.14.1
+rpds-py==0.24.0
+safetensors==0.5.3
+scikit-learn==1.6.1
+scipy==1.15.2
+selenium==4.31.0
+sentence-transformers==4.0.2
 sentencepiece==0.2.0
 sgmllib3k==1.0.0
-six==1.16.0
+shellingham==1.5.4
+six==1.17.0
 sniffio==1.3.1
 sortedcontainers==2.4.0
-soupsieve==2.5
-SQLAlchemy==2.0.29
-starlette==0.40.0
+soupsieve==2.6
+spider-client==0.0.27
+SQLAlchemy==2.0.40
+stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
+starlette==0.46.1
 striprtf==0.0.26
-sympy==1.12
-tenacity==8.2.3
-threadpoolctl==3.4.0
-tiktoken==0.6.0
+sympy==1.13.1
+tenacity==9.1.2
+threadpoolctl==3.6.0
+tiktoken==0.9.0
 tinysegmenter==0.3
-tldextract==5.1.2
-tokenizers==0.19.1
-torch==2.2.1
-torch-utils==0.1.2
-tqdm==4.66.3
-transformers==4.48.0
-trio==0.25.0
-trio-websocket==0.11.1
-triton==2.2.0
+tldextract==5.2.0
+tokenizers==0.21.1
+torch==2.6.0
+torchaudio==2.6.0
+torchvision==0.21.0
+tqdm==4.67.1
+traitlets @ file:///croot/traitlets_1718227057033/work
+transformers==4.51.1
+trio==0.29.0
+trio-websocket==0.12.2
+triton==3.2.0
+typer==0.15.2
 typing-inspect==0.9.0
-typing_extensions==4.11.0
-tzdata==2024.1
-urllib3==2.2.2
-uvicorn==0.29.0
-uvloop==0.19.0
-vllm==0.8.1
-vllm_nccl_cu12==2.18.1.0.4.0
-watchfiles==0.21.0
-websockets==12.0
-wrapt==1.16.0
+typing-inspection==0.4.0
+typing_extensions @ file:///croot/typing_extensions_1734714854207/work
+tzdata==2025.2
+urllib3==2.3.0
+uvicorn==0.34.0
+uvloop==0.21.0
+vllm==0.8.3
+watchfiles==1.0.5
+wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
+websocket-client==1.8.0
+websockets==15.0.1
+wrapt==1.17.2
 wsproto==1.2.0
-xformers==0.0.25
-yarl==1.9.4
\ No newline at end of file
+xformers==0.0.29.post2
+xgrammar==0.1.17
+yarl==1.19.0
+zipp==3.21.0
+zstandard==0.23.0
\ No newline at end of file
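For reference, the server mode described in the README hunks above can be exercised with a minimal client sketch. This is not part of the patch; it assumes a vLLM OpenAI-compatible server is already listening on `http://localhost:8000/v1` (for example, one started with `vllm serve mistralai/Mistral-7B-Instruct-v0.3`), and the query string is illustrative:

```python
# Minimal sketch of the VLLMOpenAI (server-mode) path from the README.
# Assumes a vLLM OpenAI-compatible server is already running, e.g.:
#   vllm serve mistralai/Mistral-7B-Instruct-v0.3
from langchain_community.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",  # vLLM does not validate the API key
    openai_api_base="http://localhost:8000/v1",
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    max_tokens=128,
    temperature=0.8,
)

# One-off completion against the running server.
print(llm.invoke("What are the document formats supported by the Vision service?"))
```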