[Feature] User-configurable stop sequences #2439

@cebtenzzre

Description

Feature Request

This is part of the solution for dealing with broken models that emit stop tokens not specified in their HF configuration (#2167), and it is also a commonly used feature in other, similar apps.
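
A minimal sketch of how user-configured stop sequences could be matched against streamed output (this is an illustration, not GPT4All's actual implementation; `StopMatcher` and `feed` are hypothetical names). The key detail is holding back any tail of the output that could be the start of a stop sequence, so text past a stop never leaks to the UI. Empty stop sequences are assumed not to occur:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

// Sketch: incremental stop-sequence matching over streamed output.
// Text that could be the start of a stop sequence is withheld until
// it either completes a match or diverges.
struct StopMatcher {
    std::vector<std::string> stops; // user-configured stop sequences (non-empty)
    std::string pending;            // withheld text (possible partial match)

    // Feed newly decoded text; returns the part that is safe to display.
    // Sets `stopped` to true once a full stop sequence has been seen.
    std::string feed(std::string_view text, bool &stopped) {
        pending.append(text);
        for (const std::string &s : stops) {
            std::size_t pos = pending.find(s);
            if (pos != std::string::npos) {
                stopped = true;
                return pending.substr(0, pos); // discard the stop itself
            }
        }
        stopped = false;
        // Hold back the longest suffix of `pending` that is a proper
        // prefix of some stop sequence; everything before it is safe.
        std::size_t hold = 0;
        for (const std::string &s : stops) {
            std::size_t limit = std::min(pending.size(), s.size() - 1);
            for (std::size_t k = limit; k > hold; --k) {
                if (pending.compare(pending.size() - k, k, s, 0, k) == 0) {
                    hold = k;
                    break;
                }
            }
        }
        std::string out = pending.substr(0, pending.size() - hold);
        pending.erase(0, pending.size() - hold);
        return out;
    }
};
```

In a generation loop, each decoded token's text would be passed through `feed()`, and sampling would stop as soon as `stopped` is set.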


From an earlier, related draft by @cebtenzzre:

  • some models output the wrong EOS token, so this feature is important
  • special tokens show up as blank in the output because we call llama_token_to_piece with special=False, so they aren't even considered for our current hardcoded reverse prompts (see the sketch after this list)
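
A sketch of the second point, assuming the llama.cpp variant of llama_token_to_piece that takes a trailing `bool special` and returns the negated required size when the buffer is too small (the exact signature has changed across llama.cpp versions). Detokenizing with `special=true` renders special tokens as text, so stop sequences and reverse prompts can actually match them:

```cpp
#include <string>
#include "llama.h" // llama_model, llama_token, llama_token_to_piece

// Sketch only; assumes the llama_token_to_piece variant with a trailing
// `bool special`. Rendering with special=true makes tokens such as
// "<|im_end|>" visible as text instead of coming out blank.
static std::string token_to_piece(const llama_model *model, llama_token token) {
    std::string piece(16, '\0');
    int n = llama_token_to_piece(model, token, piece.data(),
                                 (int)piece.size(), /*special=*/true);
    if (n < 0) {            // buffer too small; -n is the required size
        piece.resize(-n);
        n = llama_token_to_piece(model, token, piece.data(),
                                 (int)piece.size(), /*special=*/true);
    }
    piece.resize(n);
    return piece;
}
```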

Related things to fix:

Labels: backend (gpt4all-backend issues), chat (gpt4all-chat issues), enhancement (New feature or request)