Not working with locally installed llama.cpp (without Docker, LocalAI, HARP, etc.) #297

@svedbox

Description

Which version of integration_openai are you using?

3.9.1

Which version of Nextcloud are you using?

32.02

Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.

No response

Describe the Bug

Does not work with a locally installed llama.cpp running on the same host (without Docker, LocalAI, HARP, etc.). The app detects the model correctly and shows no warnings, but requests made through the Assistant interface never receive an answer.
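To narrow down whether the problem is in the llama.cpp server or in the Nextcloud integration, it can help to query the server's OpenAI-compatible endpoint directly from the same host. A minimal sketch follows, assuming llama.cpp's built-in `llama-server` is listening on its default `http://localhost:8080` (adjust host/port to your setup); the `model` value is an arbitrary placeholder here:

```python
import json
import urllib.request

# Assumed server address; llama-server defaults to port 8080.
BASE_URL = "http://localhost:8080"

def build_chat_request(prompt: str, model: str = "local") -> dict:
    """Build an OpenAI-compatible chat completion payload.

    llama.cpp's server exposes /v1/chat/completions; it largely ignores
    the `model` field, but the API shape requires it.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

def send_chat_request(prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(send_chat_request("Say hello"))
```

If this returns a completion but the Assistant still hangs, the server side is likely fine and the issue sits between integration_openai and the endpoint (base URL, API key setting, or background-job configuration).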

Expected Behavior

Requests in the Nextcloud Assistant should receive an answer.

To Reproduce

[Screenshot attached]
