locally hosted ollama model is not recognized with suggested input file #1040

@krishnapitike

Description

I am using the following script to run paperqa. However, it looks like paperqa is not recognizing that I am asking it to use an ollama model. The error is shown below the input file:

Models hosted with ollama are also supported. To run the example below, make sure you have downloaded llama3.2 and mxbai-embed-large via ollama.

from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

answer_response = ask(
    "What is PaperQA2?",
    settings=Settings(
        llm="ollama/llama3.2",
        llm_config=local_llm_config,
        summary_llm="ollama/llama3.2",
        summary_llm_config=local_llm_config,
        embedding="ollama/mxbai-embed-large",
    ),
)

error:

    AuthenticationError: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable. Received Model Group=gpt-4o-2024-11-20
    Available Model Group Fallbacks=None

[15:47:11] Generating answer for 'What is PaperQA2?'.
Status: Paper Count=0 | Relevant Papers=0 | Current Evidence=0 | Current Cost=$0.0000
Answer: I cannot answer this question due to insufficient information.
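For context: the model named in the traceback (gpt-4o-2024-11-20) does not appear anywhere in local_llm_config, which suggests it is paperqa's default agent LLM that is still being routed to OpenAI. A minimal sketch of pointing the agent at the local model as well, assuming the current paperqa exposes AgentSettings with agent_llm / agent_llm_config fields (import path and field names not verified against this version):

from paperqa import Settings, ask
from paperqa.settings import AgentSettings  # assumed import path; not verified

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}

# Point every LLM role (answer, summary, and the agent itself) at the local
# ollama model, so no role falls back to the OpenAI default and asks for
# OPENAI_API_KEY.
settings = Settings(
    llm="ollama/llama3.2",
    llm_config=local_llm_config,
    summary_llm="ollama/llama3.2",
    summary_llm_config=local_llm_config,
    embedding="ollama/mxbai-embed-large",
    agent=AgentSettings(
        agent_llm="ollama/llama3.2",
        agent_llm_config=local_llm_config,
    ),
)

answer_response = ask("What is PaperQA2?", settings=settings)

If AgentSettings is not available under that name in the installed version, the same idea may apply through whatever field Settings uses to configure the agent LLM.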
