Hello @kw1018! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve this!

To integrate HuggingFace's serverless inference with a LangGraph agent for a demo product while keeping costs low, you can use the free tier of the HuggingFace Inference API. Here is a step-by-step guide:

  1. Install the necessary packages:

    %pip install --upgrade --quiet huggingface_hub langchain langchain-huggingface
  2. Get your HuggingFace API token:

    from getpass import getpass
    
    # Create a token at https://huggingface.co/settings/tokens (read access is enough)
    HUGGINGFACEHUB_API_TOKEN = getpass()
  3. Set the API token as an environment variable:

    import os
    
    os.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKEN
  4. Prepare the HuggingFaceEndpoint and LLMChain:

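Here is a minimal sketch of step 4, assuming the langchain and langchain-huggingface packages from step 1 are installed; the repo_id below is only an example of a model hosted on the serverless Inference API, so swap in any model you like:

    from langchain_huggingface import HuggingFaceEndpoint
    from langchain_core.prompts import PromptTemplate
    from langchain.chains import LLMChain
    
    # Any model served by the free serverless Inference API works here;
    # this repo_id is only an example.
    llm = HuggingFaceEndpoint(
        repo_id="mistralai/Mistral-7B-Instruct-v0.2",
        task="text-generation",
        max_new_tokens=256,
        temperature=0.5,
        huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
    )
    
    # A simple prompt-plus-LLM chain to verify the endpoint responds.
    prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.invoke({"question": "What is serverless inference?"}))

From there, the same llm can be wrapped with ChatHuggingFace from langchain_huggingface and handed to a LangGraph agent like any other LangChain chat model, so the graph itself needs no HuggingFace-specific code.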