How should I configure .env for a local Ollama API? #344
Unanswered
jianjunLeung asked this question in Q&A
Replies: 5 comments
-
I tried that too; still the same problem.
-
Look up how to configure Ollama's OpenAI-compatible API endpoint…
-
LLM_API_BASE="http://127.0.0.1:11434/v1/" |
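The trailing `/v1/` matters: an OpenAI-style client joins its endpoint paths (such as `chat/completions`) onto the base URL, so a base without the `/v1/` prefix sends requests to a path Ollama's OpenAI-compatible server does not serve. A rough stdlib illustration of that joining (the OpenAI client's internal URL handling differs in detail):

```python
from urllib.parse import urljoin

# Roughly how an OpenAI-style client builds the request URL from a base URL.
wrong_base = "http://localhost:11434/"      # the base used in the question's .env
right_base = "http://127.0.0.1:11434/v1/"   # Ollama's OpenAI-compatible prefix

print(urljoin(wrong_base, "chat/completions"))
# http://localhost:11434/chat/completions   <- not served, hence the errors
print(urljoin(right_base, "chat/completions"))
# http://127.0.0.1:11434/v1/chat/completions
```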

-
As the title says. I tried filling PRIMARY_MODEL with both the model id and the model name, and I get the problem below either way; the local Ollama address is definitely fine:
```
2025-02-24 10:07:53.188 | DEBUG | utils.pb_api:__init__:12 - initializing pocketbase client: http://127.0.0.1:8090
2025-02-24 10:07:53.471 | INFO | utils.pb_api:__init__:22 - pocketbase ready authenticated as admin - 13143398903@163.com
2025-02-24 10:07:53.474 | INFO | __main__:schedule_task:19 - task execute loop 1
2025-02-24 10:07:53.557 | DEBUG | general_process:main_process:54 - new task initializing...
2025-02-24 10:07:53.558 | DEBUG | general_process:main_process:58 - focus_id: m4ptje6n1bkm0k3, focus_point: real-time news, explanation: fetch real-time news items in batches, including title, content, link, and time; fetch multiple items per run., search_engine: False
[INIT].... → Crawl4AI 0.4.247
2025-02-24 10:07:54.207 | DEBUG | general_process:main_process:140 - process new url, still 0 urls in working list
[FETCH]... ↓ https://www.jin10.com/... | Status: True | Time: 0.02s
[COMPLETE] ● https://www.jin10.com/... | Status: True | Total: 0.02s
INFO:openai._base_client:Retrying request to /chat/completions in 0.425538 seconds
INFO:openai._base_client:Retrying request to /chat/completions in 0.848044 seconds
Error code: 503
(the two retries followed by "Error code: 503" repeat nine more times with varying backoff delays)
2025-02-24 10:08:53.689 | WARNING | agents.get_info:get_author_and_publish_date:239 - failed to parse from llm output:
2025-02-24 10:09:00.213 | WARNING | agents.get_info:get_info:321 - model hallucination: [] contains no summary tag
2025-02-24 10:09:00.359 | INFO | __main__:schedule_task:33 - task execute loop finished, work after 3600 seconds
```

My current .env contains:
```
# LLM_API_KEY=""
LLM_API_BASE="http://localhost:11434/"
ZHIPU_API_KEY="xxxx" # for the search tool
PRIMARY_MODEL="qwen2.5:3b"
SECONDARY_MODEL="deepseek-r1:7b"
# use a secondary model to execute the filtering task for cost saving
# if not set, the primary model will be used for the filtering task
VL_MODEL="qwen2.5:3b"
PB_API_AUTH="13143398903@163.com|xxxxxx" ## your pb superuser account and password
## everything below is optional, set as you need
VERBOSE="true" ## for detailed log info; remove this item if not needed
PROJECT_DIR="work_dir"
#PB_API_BASE="" ## only needed if your pb does not run on 127.0.0.1:8090
#LLM_CONCURRENT_NUMBER=8 ## for concurrent llm requests; make sure your llm provider supports it (default is 1)
```
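A hypothetical sanity-check helper (not part of wiseflow, stdlib only) that flags an `LLM_API_BASE` missing the `/v1` path that Ollama's OpenAI-compatible server expects:

```python
# Hypothetical .env sanity check: warn if LLM_API_BASE lacks the /v1 suffix
# that Ollama's OpenAI-compatible endpoint requires.
def check_llm_api_base(env_text: str) -> list[str]:
    problems = []
    for line in env_text.splitlines():
        line = line.strip()
        if line.startswith("LLM_API_BASE"):
            value = line.split("=", 1)[1].strip().strip('"')
            if not value.rstrip("/").endswith("/v1"):
                problems.append(f"LLM_API_BASE should end with /v1/, got: {value}")
    return problems

print(check_llm_api_base('LLM_API_BASE="http://localhost:11434/"'))      # flagged
print(check_llm_api_base('LLM_API_BASE="http://127.0.0.1:11434/v1/"'))  # ok: []
```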