[BUG] Connect local LLM - Ollama gemma:2b #290

@jakhangir-esanov

Description

What would you like to share?

How can I connect a local LLM? When I try to set a local model (Ollama gemma:2b), the error "ollama is not supported" comes up. For security reasons we cannot work with ChatGPT or other hosted LLMs, so we need to work with a local LLM.
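Until the project supports Ollama natively, one workaround is to call the local Ollama server's REST API directly. The sketch below assumes Ollama is running at its default address (http://localhost:11434) and that `gemma:2b` has already been pulled; the helper names `build_request` and `ask_ollama` are illustrative, not part of this project.

```python
import json
import urllib.request

# Sketch, assuming a local Ollama server at its default port (11434)
# and that `ollama pull gemma:2b` has been run beforehand.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False returns one JSON object instead of a chunk stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    # Send the prompt to the local server and return the generated text.
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
# print(ask_ollama("gemma:2b", "Hello"))
```

Because everything stays on localhost, no prompt data leaves the machine, which matches the security requirement described above.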

Additional information

No response

Metadata

Assignees

Labels

LLM: Large Language Model
enhancement: New feature or request
good first issue: Good for newcomers
hacktoberfest: Participation in the Hacktoberfest event
help wanted: Extra attention is needed
✨ feature: New feature requests or implementations
🐛 bug: Issues related to bugs or errors
📝 documentation: Tasks related to writing or updating documentation
📦 dependencies: Dependencies
🕓 medium effort: A task that can be completed in a few hours
🚀 performance: Performance optimizations or regressions
🚨 security: Security-related issues or improvements
🧠 backlog: Items that are in the backlog for future work
🧪 tests: Tasks related to testing
