Consolidation of parameters, and models... #932
ihatemakinganaccount started this conversation in Ideas
So, Alpaca is awesome and all, but as it's grown it's become a little messy, and things are a bit all over the place.
I.
Consider model parameters like top_p and temperature. In Ollama these are defined in the modelfile, and can also be set with every message generated, while in Alpaca they're configured per instance.
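For reference, this is roughly the Ollama side of that: the same parameters a modelfile sets with PARAMETER lines (temperature, top_p, num_ctx, num_predict, ...) can be overridden per request through the options field of the API. The model name and values below are just placeholders.

```python
import json
import urllib.request

# Per-request overrides: the same names a modelfile can set with PARAMETER
# lines are accepted in the "options" field of every /api/chat call.
payload = {
    "model": "llama3",          # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    "options": {
        "temperature": 0.2,     # low creativity, e.g. for a coding model
        "top_p": 0.9,
        "num_ctx": 8192,        # context window
        "num_predict": 512,     # max tokens to generate
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])
```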
So this is a bit disjointed. I'm not sure why you'd want temperature to be an instance setting anyway, since how well a given temperature works depends on the model, and you may even want to tweak it per chat, or at least leave it to the modelfile. The same goes for context window and max tokens: those depend on what you want from a given model. On one instance you might run a coding model where you want a huge context window and low creativity, alongside a fast, quirky assistant that just responds to a given command.
What I think would make the most sense (a rough sketch of the idea follows the list):
-- there's space in the new chat window, as well as in the '...' menu in the chat bubbles
-- if these settings were listed there, you could also use that just to see/check their values as 'stats for geeks'
-- if these were settable per message, it would open the door for further improvements down the line, like dynamic temperature settings, which some frontends apparently have
-- these could be advanced options which you could enable/hide in Preferences if you prefer to have a clean default UI
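Here is a minimal sketch of how per-chat overrides could sit on top of the instance/modelfile defaults. The names (ChatOptions, the default values) are made up for illustration, not Alpaca's actual code.

```python
from dataclasses import dataclass

# Hypothetical per-chat overrides; None means "fall back to the modelfile /
# instance default". Names here are illustrative, not Alpaca's real classes.
@dataclass
class ChatOptions:
    temperature: float | None = None
    top_p: float | None = None
    num_ctx: int | None = None
    num_predict: int | None = None

    def merge(self, instance_defaults: dict) -> dict:
        """Instance defaults first, then anything this chat overrides."""
        overrides = {k: v for k, v in self.__dict__.items() if v is not None}
        return {**instance_defaults, **overrides}

# Usage: a coding chat gets a big context window and low temperature,
# while other chats keep whatever the instance is configured with.
coding_chat = ChatOptions(temperature=0.1, num_ctx=16384)
options = coding_chat.merge({"temperature": 0.8, "num_ctx": 4096})
# -> {"temperature": 0.1, "num_ctx": 16384}
# `options` would then go into the "options" field of the request shown above.
```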
I'm not sure how instances other than Ollama work; I assume the exact parameter names tend to differ, but that's already the case with the instance settings (max_tokens seems to be the same as num_predict, I guess).
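If other instance types do spell these differently, a small translation table would keep whatever Alpaca stores backend-agnostic. Purely hypothetical names below; the only mapping I'm reasonably sure of is the max_tokens/num_predict equivalence mentioned above.

```python
# Hypothetical name translation between a common internal vocabulary and
# each backend's parameter names; only the Ollama entries come from its
# documented options, the rest would need checking per backend.
PARAM_NAMES = {
    "ollama": {"max_tokens": "num_predict", "context_window": "num_ctx"},
    "openai_like": {"max_tokens": "max_tokens"},
}

def translate(backend: str, options: dict) -> dict:
    """Rename keys for the given backend; unknown keys pass through unchanged."""
    names = PARAM_NAMES.get(backend, {})
    return {names.get(key, key): value for key, value in options.items()}

print(translate("ollama", {"max_tokens": 512, "temperature": 0.2}))
# -> {"num_predict": 512, "temperature": 0.2}
```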
I think this would simplify things in the long run if you want to add more features.
II.
The other thing I'm wondering about is models: right now the LLM is picked from the model selector, while the speech (TTS/STT) models live in Preferences.
I'm not really sure what the best approach would be, and I don't know what your plans are. But I think the audio stuff, at least, shouldn't be hidden under Preferences. Especially not the TTS voice, because it's a drag trying out different voices, and you may want to enable dictation only for certain models/chats.
I'd probably add it as a new drop-down next to the model selector, so there'd be separate model selectors for LLM, TTS and STT.
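As a rough sketch of what separate selectors could mean under the hood (all names below are made up for illustration): each chat would simply remember one model per role instead of a single LLM, and each drop-down would edit one field.

```python
from dataclasses import dataclass

# Illustrative only: a chat remembers one model per role, so the three
# drop-downs (LLM, TTS, STT) would each write to one field here.
@dataclass
class ChatModels:
    llm: str = "llama3"       # placeholder names throughout
    tts: str | None = None    # None = voice output disabled for this chat
    stt: str | None = None    # None = dictation disabled for this chat

coding_chat = ChatModels(llm="qwen2.5-coder")
voice_chat = ChatModels(llm="llama3", tts="piper-en_US-amy", stt="whisper-small")
```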
You could look at how Speech Note (https://github.com/mkiol/dsnote) does things, including how it populates its model database. I think it would make sense to treat the voice models similarly to LLMs. Honestly, if it were up to me, I'd try to hook up with Speech Note and use their app for the voice stuff instead of reinventing everything.
Anyway, just some thoughts I wanted to get out. Thanks for Alpaca!