
Don't bundle all responses together in llama-server in the logs #2

@simonw

Description


This will need a change to LLM core too. It's not great that loading different models still results in everything ending up in the same llama-server model ID in the logs database.
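A minimal sketch of the problem being described, assuming (as a simplification, not the actual schema) that LLM's logs database has a `responses` table with a `model` column. Because every response goes in under the plugin's single `llama-server` model ID, filtering the logs by model cannot tell apart responses that actually came from different underlying models:

```python
import sqlite3

# Simplified stand-in for LLM's logs database (schema abbreviated; the
# real logs.db has more tables and columns).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE responses (id INTEGER PRIMARY KEY, model TEXT, prompt TEXT)"
)

# Two different models were loaded into llama-server, but both responses
# get logged under the same plugin-level model ID:
db.execute(
    "INSERT INTO responses (model, prompt) VALUES (?, ?)",
    ("llama-server", "prompt answered by one loaded model"),
)
db.execute(
    "INSERT INTO responses (model, prompt) VALUES (?, ?)",
    ("llama-server", "prompt answered by a different loaded model"),
)

# Querying by model ID bundles everything together -- there is no way to
# recover which model actually produced each response:
rows = db.execute(
    "SELECT model, prompt FROM responses WHERE model = 'llama-server'"
).fetchall()
print(len(rows))
```

This is why distinguishing the loaded models would need a change in LLM core as well: the model ID recorded in the logs is fixed per plugin model, not per model loaded into the server.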

Metadata


Labels: enhancement (New feature or request)
