This repository was archived by the owner on Sep 12, 2024. It is now read-only.
@SeeknnDestroy Talha, appreciate the quick update regarding HF models. Is HuggingFace's TGI supported as a backend? Specifically, if I have a HF model hosted on a local TGI server, can I interact with it via AutoLLM? E.g.

```python
from autollm import AutoQueryEngine

model = "meta-llama/Llama-2-7b-chat-hf"
api_base = "http://localhost:1234"  # URL of the local TGI server

llm_params = {"model": model, "api_base": api_base, ...}
# etc.
```

Hello @dcruiz01, the current HuggingFace examples cover both local and cloud-hosted TGI servers, so your code snippet should work :)
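
For context, a minimal sketch of wiring AutoLLM to a local TGI endpoint might look like the following. It assumes an `AutoQueryEngine.from_parameters` factory that accepts an `llm_params` dict (the exact factory name and signature may differ between AutoLLM versions), and that the underlying LiteLLM routing expects the `huggingface/` prefix for TGI-served models; both points, as well as the document loader and paths, are assumptions to verify against the repository's HuggingFace examples rather than a confirmed API.

```python
# Minimal sketch, not verified against a specific AutoLLM release.
# Assumptions: AutoQueryEngine exposes a from_parameters(...) factory taking an
# llm_params dict, and the LiteLLM backend wants TGI models prefixed with
# "huggingface/". The document path and query below are illustrative only.
from autollm import AutoQueryEngine
from llama_index import SimpleDirectoryReader  # loader used for illustration

# Load the documents to index (hypothetical local folder).
documents = SimpleDirectoryReader("docs/").load_data()

llm_params = {
    "model": "huggingface/meta-llama/Llama-2-7b-chat-hf",  # model served by TGI
    "api_base": "http://localhost:1234",  # URL of the local TGI server
}

# Assumed factory; newer versions may use flattened kwargs instead of llm_params.
query_engine = AutoQueryEngine.from_parameters(
    documents=documents,
    llm_params=llm_params,
)

print(query_engine.query("What does this project do?").response)
```

Depending on the AutoLLM version, the factory may instead be named `from_defaults` with flattened keyword arguments such as `llm_model` and `llm_api_base`; the HuggingFace examples in the repository are the authoritative reference.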

Answer selected by fcakyon
Category: Q&A
Labels: None yet
Participants: 4
This discussion was converted from issue #69 on November 03, 2023 12:44.