
Added support for Chat Completion Model #5

Open
onlylonly wants to merge 2 commits into p-e-w:master from onlylonly:master

Conversation

@onlylonly

I've added support for 'chat' models, and the ability to switch between chat-type and completion-type models.

Added examples in config.ts.

config.ts:

  • Added SYSTEM_PROMPT, MODEL_TYPE, MODEL, and three configuration examples
  • Added a comment specifying the model name used by the API and updated the MODEL constant

main.ts:

  • Updated the import statement to include MODEL from config.ts
  • Changed the model parameter in the streamText function to use the MODEL constant from config.ts
  • Updated the import statement to include MODEL_TYPE and SYSTEM_PROMPT from config.ts
  • Added support for chat-type models
  • Added the ability to switch between 'chat' and 'completion' models
  • Commented out n_predict: MAX_TOKENS and cache_prompt: true on line 63
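
The switching described above could look roughly like the following sketch. The identifiers `buildRequestBody` and `ModelType`, and the request shapes, are illustrative assumptions, not the PR's actual code:

```typescript
// Hypothetical sketch of branching on MODEL_TYPE (names are illustrative).
const SYSTEM_PROMPT = "You are a helpful assistant.";
type ModelType = "chat" | "completion";

function buildRequestBody(
  modelType: ModelType,
  model: string,
  prompt: string,
): object {
  if (modelType === "chat") {
    // A chat completion endpoint expects a messages array.
    return {
      model,
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: prompt },
      ],
      stream: true,
    };
  }
  // A text completion endpoint takes the raw prompt string.
  return { model, prompt, stream: true };
}
```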

@onlylonly
Author

Currently, PARAMS from config.ts seems to be unused. Should we remove it?

@p-e-w
Owner

p-e-w commented Jun 24, 2024

Thanks for the effort, but this is not the right way to broaden loader support. The right way is to add support for the text completion endpoint to those loaders (which I believe is currently happening in Ollama). Chat completion is a semantic mismatch for text completion, and using it to do the latter is a hack that I don't want in the code.

The fact that OpenAI restricts GPT-4 to the chat completion endpoint is unfortunate (and clearly intended to further limit what users can do with their models), but not a sufficient reason for doing things the wrong way.

As for local models, they all support text completion ("chat completion" is just text completion with a specific template), so no changes are required to use e.g. Llama 3 Instruct. The only problem is that some loaders, notably Ollama and Kobold, don't expose that endpoint, but that is their bug to fix.
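
To illustrate the point that chat completion is just text completion with a template: a chat exchange for Llama 3 Instruct can be rendered into a plain prompt string and sent to a text completion endpoint. This sketch uses the Llama 3 Instruct template format; it is an example, not code from this project:

```typescript
// Illustrative only: rendering chat messages as a text-completion prompt
// using the Llama 3 Instruct template.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

function renderLlama3Prompt(messages: Message[]): string {
  let prompt = "<|begin_of_text|>";
  for (const m of messages) {
    prompt += `<|start_header_id|>${m.role}<|end_header_id|>\n\n${m.content}<|eot_id|>`;
  }
  // Leave the prompt open so the model generates the assistant's reply.
  prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n";
  return prompt;
}
```

The resulting string goes straight into the `prompt` field of a text completion request, which is why loaders that expose that endpoint need no special chat support.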

> currently PARAMS from config.ts seems to be unused. Should we remove it?

It's not unused, it's included into the params variable, though somehow you seem to have removed it in this PR.
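
The inclusion presumably looks something like spreading PARAMS into the request parameters. The field names and values below are illustrative, not the project's actual code:

```typescript
// Sketch: user-defined PARAMS from config.ts merged into the request params.
const PARAMS = { temperature: 0.7, top_p: 0.9 }; // example values

const params = {
  prompt: "Once upon a time",
  stream: true,
  ...PARAMS, // user-configured sampling parameters from config.ts
};
```

Dropping the spread would silently discard any sampling parameters the user set in config.ts, which is why removing PARAMS (or the line that includes it) would be a regression.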
