
Local LLM and Autogen  #1

@Foolafroos

Description

Hello everyone!

Did you try to make it run on a local setup?

I tried replacing the config_list with a local one:

```python
# create the configuration for your environment
config_list = [
    {
        "api_type": "open_ai",
        "api_base": "http://localhost:1234/v1",
        "api_key": "NULL",
    },
]
```

and replacing every gpt4_config instance with llm_config=llm_config.
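For reference, here is a minimal sketch of the kind of config that tends to work with a local OpenAI-compatible server (an LM Studio-style endpoint at http://localhost:1234/v1 is assumed; the model name is hypothetical, and many local servers ignore it). Note that newer AutoGen releases (pyautogen >= 0.2, built on the openai >= 1.0 client) expect a `base_url` key rather than the older `api_base`/`api_type` keys, which is one common source of API errors:

```python
# Sketch of a config_list for a local OpenAI-compatible server.
# Assumptions: an LM Studio-style endpoint at http://localhost:1234/v1;
# "local-model" is a placeholder name that many local servers ignore.
config_list = [
    {
        "model": "local-model",
        "base_url": "http://localhost:1234/v1",  # newer key; older autogen used "api_base"
        "api_key": "NULL",  # local servers usually accept any non-empty string
    },
]

# llm_config is what you then pass to each agent in place of gpt4_config.
llm_config = {
    "config_list": config_list,
    "temperature": 0,
}

# Hypothetical usage (requires pyautogen to be installed):
# import autogen
# assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
```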

I am having API issues. Not surprising... but I was wondering whether you have already made it run with a local LLM, and if so, whether you have any tips.

Thanks! I discovered Panel through your video, btw :)
