
BakLLaVA does not follow the prompt and sometimes gives nonsensical responses (Ollama) #23

@KansaiTraining

Description


I am trying BakLLaVA with Ollama (after trying LLaVA), and when I send a query (with a system prompt and a human prompt) two things happen:

  1. BakLLaVA *never* follows the instructions in the system prompt. I told it explicitly how I want the response formatted, but it never complies.
  2. Sometimes the responses are nonsensical

I format the query as indicated here:

formatted_prompt=f"{system_query}\nUSER: {human_query}\nASSISTANT:"

where the system_query is

"Return the requested information in the section delimited by ### ###. Format the output as a JSON object. ### Result: {True or False} Reason: {one to three lines explaining the reason} ### Always start with the Result."

but BakLLaVA never returns Result or Reason. In the best case it answers the human query in free-form text, and in the worst cases it responds with:

  • [0.18, 0.42, 0.36, 0.59]
  • the date
  • the date with "kp2 3k" added
  • KP (kilopixels per second)

Is this common, or am I doing something wrong?

I call the model as indicated on the Ollama page under API usage.
When I do this with LLaVA it runs well (although the responses are not always accurate).
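For context, here is a minimal sketch of how I assemble the request, assuming Ollama's /api/generate endpoint as described in its API docs (the human question is a placeholder, and the actual POST is commented out since it needs a running Ollama server):

```python
import json

system_query = (
    "Return the requested information in the section delimited by ### ###. "
    "Format the output as a JSON object. "
    "### Result: {True or False} Reason: {one to three lines explaining the reason} ### "
    "Always start with the Result."
)
human_query = "Is there a dog in this image?"  # placeholder question

# Prompt format from above: system text, then USER/ASSISTANT turns.
formatted_prompt = f"{system_query}\nUSER: {human_query}\nASSISTANT:"

# Payload for Ollama's /api/generate endpoint; for a multimodal model
# like bakllava, an "images" field would carry base64-encoded image data.
payload = {
    "model": "bakllava",
    "prompt": formatted_prompt,
    "stream": False,
}
print(json.dumps(payload, indent=2))

# Actually sending it (requires a local Ollama server):
# import requests
# r = requests.post("http://localhost:11434/api/generate", json=payload)
# print(r.json()["response"])
```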
