Replies: 4 comments
-
🤖 Hello,

The OSError you're encountering is likely due to the fact that the HuggingFace LLM is trying to download the model and tokenizer from the HuggingFace Model Hub. This happens when the model files are not in your local cache and the Hub cannot be reached. To resolve this issue, you can either ensure that your application has access to the internet and the HuggingFace Model Hub, or you can download the model and tokenizer files manually and load them from your local file system. To do this, you would need to provide the local paths to the model and tokenizer files when initializing the embedding model.

Here's an example of how you can load the model and tokenizer from local files:

```python
from transformers import AutoModel, AutoTokenizer
from llama_index.embeddings import HuggingFaceEmbedding

# Load a previously downloaded model and tokenizer from disk.
model = AutoModel.from_pretrained('/path/to/your/model')
tokenizer = AutoTokenizer.from_pretrained('/path/to/your/tokenizer')

# Pass the pre-loaded objects to the embedding wrapper.
huggingface_embedding = HuggingFaceEmbedding(model=model, tokenizer=tokenizer)
```

In this example, replace '/path/to/your/model' and '/path/to/your/tokenizer' with the actual paths to your local model and tokenizer files.

I hope this helps! Let me know if you have any other questions.

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
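If you'd rather let llama_index load from disk itself, a minimal sketch, assuming `HuggingFaceEmbedding` forwards `model_name` to `from_pretrained` (so a local directory works) and a hypothetical pre-downloaded copy of BAAI/bge-small-en at `/models/bge-small-en`:

```python
from llama_index.embeddings import HuggingFaceEmbedding

# '/models/bge-small-en' is a hypothetical path to a pre-downloaded copy of
# BAAI/bge-small-en containing config.json, the tokenizer files, and weights.
embed_model = HuggingFaceEmbedding(model_name='/models/bge-small-en')
```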
-
I think I have initialized the HuggingFaceLLM class properly:

```
Loading checkpoint shards: 100%|██████████| 7/7 [00:13<00:00,  1.93s/it]
```

but when I run this:
-
@james2v you need to specify both an llm and an embed_model in the service context. It's falling back to the default local embedding model (bge-small) and failing to download it.

Check out how to specify embedding models here.
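A minimal sketch of what that looks like, assuming the `ServiceContext` API from llama_index at the time of this thread, with `llm` being the HuggingFaceLLM initialized above and `documents` loaded elsewhere:

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.embeddings import HuggingFaceEmbedding

# `llm` is the HuggingFaceLLM you already initialized; `documents` come from your loader.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en")

# Set BOTH the llm and the embed_model so neither falls back to a default that
# needs to be downloaded.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
```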
-
Question: it causes `OutOfMemoryError: CUDA out of memory` very easily!
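Not an answer given in this thread, but a common mitigation is to load the weights in half precision and cap generation length; a sketch, assuming the llama_index `HuggingFaceLLM` wrapper and a hypothetical model name:

```python
import torch
from llama_index.llms import HuggingFaceLLM

# "your-model-name" is hypothetical; fp16 weights roughly halve GPU memory,
# and a smaller max_new_tokens keeps the KV cache in check.
llm = HuggingFaceLLM(
    model_name="your-model-name",
    tokenizer_name="your-model-name",
    max_new_tokens=256,
    device_map="auto",  # spread layers across available GPUs/CPU
    model_kwargs={"torch_dtype": torch.float16},
)
```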
Beta Was this translation helpful? Give feedback.
-
Question Validation
Question
```
OSError: We couldn't connect to 'https://huggingface.co/' to load this file, couldn't find it in the cached files and it looks like BAAI/bge-small-en is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
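For reference, the offline mode the error message points to is controlled by environment variables that tell transformers to use only cached files; a minimal sketch (the flags must be set before transformers is imported, and the model must already be cached):

```python
import os

# Use only locally cached files; never hit the network.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from transformers import AutoModel  # imported after the flags are set

model = AutoModel.from_pretrained("BAAI/bge-small-en")  # succeeds only if cached
```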