Set up guide or set up help #8407
Unanswered
GreatMCGamer
asked this question in
Help
Replies: 0 comments
I installed the extension in VS Code and set up the config:
```yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - provider: ollama
    model: gpt-oss:20b
    defaultCompletionOptions:
      contextLength: 131072
      maxTokens: 16000
    roles:
```
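One thing I have not tried yet (just an idea from the Ollama docs, not something confirmed here): Ollama loads models with a fairly small default context window unless `num_ctx` is raised, so if Continue's `contextLength` isn't being passed through, pinning it in a Modelfile might behave differently. The `gpt-oss-131k` name below is made up; the `131072` matches my config above.

```
# Hypothetical Modelfile: base the model on gpt-oss:20b and pin a 131072-token context window
FROM gpt-oss:20b
PARAMETER num_ctx 131072
```

Built with `ollama create gpt-oss-131k -f Modelfile`, then referenced as `model: gpt-oss-131k` in the config.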
I was able to talk to the model in chat, and the larger context window seemed to take effect, since VRAM usage increased from 14 GB to 17 GB.
But when I tried to test the tutorial file instructions, the GPU screamed like its life depended on it for several seconds, then errored out saying it ran out of tokens or something.
Sure, I'm not a dev, and I spent a good hour going back and forth with Google Gemini trying to find what's wrong, but Gemini couldn't help me, so I came here.
The only settings I changed from the defaults were the timeout and the model config.
I can also confirm that auto-complete does work:
```
# Can you write out what model you are onto the next line
- gpt-oss:20b
```
Highlighting text, pressing Ctrl+I, and asking for an edit does not work, though. The GPU tries to do something, then errors out. I don't know if it's my fault or a bug.
I searched for Ctrl+I and only two things came up; they didn't seem related.