
Does Llama-3-Taiwan-8B-Instruct support LangChain Tools? #72

@Hsun1128

Description


I am currently running the 8B model with vLLM:

export NUM_GPUS=1
export PORT=8000

docker run \
  -e HF_TOKEN=$HF_TOKEN \
  --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p "${PORT}:8000" \
  --ipc=host \
  vllm/vllm-openai:v0.4.0.post1 \
  --model "yentinglin/Llama-3-Taiwan-8B-Instruct" \
  -tp "${NUM_GPUS}"

Then I connect to the LLM using the OpenAI-compatible client that LangChain provides for vLLM:

from langchain_openai import ChatOpenAI

model_id = "yentinglin/Llama-3-Taiwan-8B-Instruct"
inference_server_url = "http://localhost:8000/v1"

llm = ChatOpenAI(
    model=model_id,
    openai_api_key="EMPTY",
    openai_api_base=inference_server_url,
    temperature=0,
    streaming=True,
)
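For reference, a plain chat call without any tools bound can be used as a sanity check of the connection. This is only a minimal sketch, not my exact script:

# Baseline: plain chat request against the local vLLM endpoint, no tools bound.
reply = llm.invoke("請用繁體中文自我介紹。")
print(reply.content)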

Next I followed the official tutorial, but I never get any tool_calls back (a simplified sketch of the tool-binding step is shown below). I would like to ask whether this model supports the LangChain Tools usage, and how this problem might be resolved. Any guidance would be much appreciated, thank you.

Reference link: https://langchain-ai.github.io/langgraph/how-tos/pass-run-time-values-to-tools/#define-the-nodes
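Roughly, the tool-binding step I tried looks like the sketch below; the get_weather tool here is just a placeholder, and llm is the ChatOpenAI instance defined above:

from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for the given city (placeholder tool)."""
    return f"{city}: sunny, 25°C"

# Bind the tool to the model and ask a question that should trigger a tool call.
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("台北今天天氣如何?")

# Expected: a non-empty list of tool calls; observed: always an empty list.
print(response.tool_calls)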
