Error using multi-agent framework #159

@srssina

Description

Hello everyone, I'm using LM Studio to run a local Autogen (AG2) multi-agent program. However, when I try to run a conversational chat between two different agents, I get the error below. Does anyone have a clue how to fix it?

Error: received prediction-error This is usually an issue with the model's prompt template. If you are using a popular model, you can try to search the model under lmstudio-community, which will have fixed prompt templates. If you cannot find one, you are welcome to post this issue to our discord or issue tracker on GitHub. Alternatively, if you know how to write jinja templates, you can override the prompt template in My Models > model settings > Prompt Template.
    at _0x6732be.<computed> (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:710:126526)
    at _0x288f38._0x136314 (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:96:13369)
    at _0x288f38.emit (node:events:518:28)
    at _0x288f38.onChildMessage (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:86:202721)
    at _0x288f38.onChildMessage (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:101:3971)
    at _0x2e9f70.<anonymous> (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:86:201749)
    at _0x2e9f70.emit (node:events:518:28)
    at ChildProcess.<anonymous> (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/main/index.js:461:21158)
    at ChildProcess.emit (node:events:518:28)
    at emit (node:internal/child_process:949:14)
  Caused By: Error: Error rendering prompt with jinja template: "Conversation roles must alternate user/assistant/user/assistant/...". This is usually an issue with the model's prompt template. If you are using a popular model, you can try to search the model under lmstudio-community, which will have fixed prompt templates. If you cannot find one, you are welcome to post this issue to our discord or issue tracker on GitHub. Alternatively, if you know how to write jinja templates, you can override the prompt template in My Models > model settings > Prompt Template.
    at /tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:85:47075
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async _0x2d9105.<computed> (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:85:38956)
    at async _0x2bc7a7.<computed> (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:31:2786)
    at async _0x547755.LLMEngineWrapper.predictTokens (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:85:20945)
    at async Object.predictTokens (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:108:13125)
    at async Object.handleMessage (/tmp/.mount_lm_stuUF71zX/resources/app/.webpack/lib/llmworker.js:108:2439
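For reference, the "Caused By" part suggests the loaded model's jinja chat template requires strict user/assistant alternation, and a two-agent conversation ends up sending two consecutive messages with the same role. A rough workaround I'm considering (only a sketch, not AG2 internals; the function and model name are placeholders, and it assumes LM Studio's OpenAI-compatible server on http://localhost:1234/v1 with the `openai` Python package) is to merge consecutive same-role messages before they reach the server:

```python
# Sketch: collapse adjacent same-role messages so the conversation alternates
# user/assistant, which is what strict chat templates appear to require.
from openai import OpenAI


def merge_consecutive_roles(messages):
    """Merge adjacent messages that share a role into one message."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged


# LM Studio's local OpenAI-compatible endpoint; the api_key value is ignored.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Two agents talking to each other can produce back-to-back "user" turns,
# which is what seems to trigger "Conversation roles must alternate ...".
history = [
    {"role": "user", "content": "Agent A: propose a plan."},
    {"role": "user", "content": "Agent B: here is my counter-proposal."},  # same role twice
    {"role": "assistant", "content": "Sounds good, let's refine step 2."},
]

response = client.chat.completions.create(
    model="local-model",  # placeholder: whatever model is loaded in LM Studio
    messages=merge_consecutive_roles(history),
)
print(response.choices[0].message.content)
```

Alternatively, the error text itself points at overriding the prompt template under My Models > model settings > Prompt Template, but I'd prefer not to hand-edit jinja if the message list can be fixed on the client side.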
