Added support for Chat Completion Model #5
config.ts:
- Added a comment specifying the model name used by the API, and updated the MODEL constant.

main.ts:
- Updated the import statement to include MODEL from config.ts.
- Changed the model parameter in the streamText function to use the MODEL constant from config.ts.
config.ts:
- Added SYSTEM_PROMPT, MODEL_TYPE, and 3 configuration examples.

main.ts:
- Updated the import statement to include MODEL_TYPE and SYSTEM_PROMPT from config.ts.
- Added support for the chat model.
- Added the ability to switch between chat and completion models.
- Commented out `n_predict: MAX_TOKENS, cache_prompt: true` in line 63.
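A configuration along the lines the commits describe might look roughly like this. Only MODEL, MODEL_TYPE, and SYSTEM_PROMPT are named in the commit messages; the concrete values and the `ModelType` union are illustrative assumptions, not the actual file:

```typescript
// config.ts — sketch of the configuration the commits describe.
// The union type and example values are assumptions for illustration.
export type ModelType = "chat" | "completion";

// Model name sent to the API (example value).
export const MODEL: string = "gpt-3.5-turbo";

// Switches between the chat endpoint and the text-completion endpoint.
export const MODEL_TYPE: ModelType = "chat";

// System prompt prepended when MODEL_TYPE is "chat".
export const SYSTEM_PROMPT: string = "You are a helpful assistant.";
```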
Currently, PARAMS from config.ts seems to be unused. Should we remove it?
Thanks for the effort, but this is not the right way to broaden loader support. The right way is to add support for the text completion endpoint to those loaders (which I believe is currently happening in Ollama). Chat completion is a semantic mismatch for text completion, and using it to do the latter is a hack that I don't want in the code. The fact that OpenAI restricts GPT-4 to the chat completion endpoint is unfortunate (and clearly intended to further limit what users can do with their models), but not a sufficient reason for doing things the wrong way. As for local models, they all support text completion ("chat completion" is just text completion with a specific template), so no changes are required to use e.g. Llama 3 Instruct. The only problem is that some loaders, notably Ollama and Kobold, don't expose that endpoint, but that is their bug to fix.
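The parenthetical above ("chat completion" is just text completion with a specific template) can be made concrete. As a sketch, this is roughly how a chat transcript collapses into a plain text-completion prompt; the special tokens are from the Llama 3 Instruct format mentioned above, while the function and type names are made up for illustration:

```typescript
// Sketch: reducing "chat completion" to text completion by applying a
// prompt template (Llama 3 Instruct format). The resulting string can be
// sent as the prompt of an ordinary text-completion request.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function toLlama3Prompt(messages: ChatMessage[]): string {
  // Each message becomes a header block followed by its content.
  const body = messages
    .map(
      (m) =>
        `<|start_header_id|>${m.role}<|end_header_id|>\n\n${m.content}<|eot_id|>`
    )
    .join("");
  // The trailing assistant header asks the model to continue in the
  // assistant role — this is all a "chat completion" endpoint does.
  return `<|begin_of_text|>${body}<|start_header_id|>assistant<|end_header_id|>\n\n`;
}
```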
It's not unused, it's included in the …
I've added support for the 'chat' model type, and the ability to switch between chat- and completion-type models.
Added an example in config.ts.
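A minimal sketch of how main.ts could branch on MODEL_TYPE. The endpoint paths follow the common OpenAI-style API, but the function name and request shapes here are assumptions, not the PR's actual code:

```typescript
// Sketch of switching between chat- and completion-type requests.
// MODEL, MODEL_TYPE, and SYSTEM_PROMPT would come from config.ts;
// this standalone version takes them as parameters.
type ModelType = "chat" | "completion";

function buildRequest(
  modelType: ModelType,
  model: string,
  systemPrompt: string,
  userText: string
): { endpoint: string; body: object } {
  if (modelType === "chat") {
    // Chat models take a structured message list.
    return {
      endpoint: "/v1/chat/completions",
      body: {
        model,
        messages: [
          { role: "system", content: systemPrompt },
          { role: "user", content: userText },
        ],
        stream: true,
      },
    };
  }
  // Completion models take a flat text prompt.
  return {
    endpoint: "/v1/completions",
    body: { model, prompt: userText, stream: true },
  };
}
```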