Local LLM Usage #41

@aminekhelif

Description

Has anyone tried running it locally?

I adapted it for use with LM Studio by changing the tokenizer, the LLM calls, and the configuration. The connection to the API endpoint works, and persona creation succeeds. However, `listen_and_act` or `run` consistently fails with an error about the cognitive state attribute, caused by an empty response from the model.

I have tried LLaMA 3B, LLaMA 1B, and Hermes 3 8B, and increased the maximum tokens from 4000 to 8000. Because the LLM was generating nonsensical tokens, I also reduced the temperature from 1.5 to 0.8.
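Since the failure stems from an empty completion being parsed into the agent's cognitive state, one workaround worth trying is to retry the call before handing the text to the parser. This is only a sketch: `call_with_retry` and the stub model are illustrative names I made up, not part of the repository or the LM Studio API.

```python
def call_with_retry(llm_call, prompt, retries=3):
    """Call the model and retry when it returns an empty or blank string.

    llm_call: any callable taking a prompt and returning text
    (e.g. a wrapper around an OpenAI-compatible LM Studio endpoint).
    """
    for _ in range(retries):
        text = llm_call(prompt)
        if text and text.strip():
            return text
    raise RuntimeError(f"Empty response after {retries} attempts for prompt: {prompt!r}")


# Stub standing in for a local model that returns empty output twice
# before producing a usable completion.
responses = iter(["", "   ", '{"cognitive_state": "attentive"}'])
print(call_with_retry(lambda p: next(responses), "act"))
```

With smaller local models, a guard like this at least turns a cryptic attribute error into a clear "model returned nothing" failure, which makes it easier to tell a prompting problem from a parsing one.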

I’m reaching out to see if anyone else has experienced this issue and how they managed to resolve it.
