Replies: 2 comments
hey <@1206965934762496081> yes, those configs are correct for using Gemini.
Embeddings: https://docs.cognee.ai/setup-configuration/embedding-providers#google-gemini
LLMs: https://docs.cognee.ai/setup-configuration/llm-providers#google-gemini
<@778635401958522961> could you please take a look, confirm this, and update the docs when you get a chance?
Hello, Cognee community!
I'm starting to use Cognee to build an assistant, and I'm having trouble figuring out how to properly configure Gemini for both the LLM and embeddings.
Is the following configuration correct?
```
# LLM
LLM_PROVIDER=gemini
LLM_MODEL=gemini/gemini-2.5-flash
LLM_API_KEY=YOUR_LLM_API_KEY_HERE

# Embeddings
EMBEDDING_PROVIDER=gemini
EMBEDDING_MODEL=gemini/gemini-embedding-001
EMBEDDING_API_KEY=YOUR_EMBEDDING_API_KEY_HERE
EMBEDDING_DIMENSIONS=3072
```
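If you'd rather set these from Python than from a `.env` file, a minimal sketch is below. It assumes Cognee reads these standard environment variables at import time (as its configuration docs describe), so they must be set before `import cognee` runs; the API key values are placeholders:

```python
import os

# Same values as the .env above. Replace the placeholder keys with real ones.
config = {
    "LLM_PROVIDER": "gemini",
    "LLM_MODEL": "gemini/gemini-2.5-flash",
    "LLM_API_KEY": "YOUR_LLM_API_KEY_HERE",
    "EMBEDDING_PROVIDER": "gemini",
    "EMBEDDING_MODEL": "gemini/gemini-embedding-001",
    "EMBEDDING_API_KEY": "YOUR_EMBEDDING_API_KEY_HERE",
    # gemini-embedding-001 outputs 3072-dimensional vectors by default
    "EMBEDDING_DIMENSIONS": "3072",
}

# Must happen before `import cognee`, since the config is read at import.
os.environ.update(config)
```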
This discussion was automatically pulled from Discord.