Replies: 1 comment
-
Thanks for sharing @StuartRP. I may look at how we can get examples like these shared in docs.continue.dev.
-
Hey all,
For any other Poe subscribers looking to use its newly released API with the Continue extension, the easiest workaround at the moment seems to be to add it while pretending it's LM Studio. I couldn't work out how to add Poe as a custom provider; I suspect it's possible, but I couldn't find an example or guide that covers it.
Here's my config.yaml, with a Poe-hosted GPT-4o. I'm using a locally run Qwen 2.5 model for autocomplete, then handing off to the Poe-hosted GPT-4o for chat and agent functionality.
I'm new to Continue, so I can't guarantee any of this is optimal. The system message text is copied from Continue's default OpenAI prompt for GPT-4o.
Hope it helps someone.
EDIT-1: Note, I'm seeing some unreliable behaviour when getting GPT-4o to perform agent actions. I suspect this is an issue with MCP or with the services Poe's API provides. I'll update if I manage to resolve it.
EDIT-2: The issue turned out to be the lack of a context provider for the current file; that's fixed in the config below. Keep in mind this means you may be sending the whole file with each chat or agent call, so it will burn more tokens.
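The original config attachment didn't survive the page scrape, so here is a minimal sketch of what such a config.yaml might look like. This is not the original file: the Poe API base URL, the model IDs, the `ollama` provider for the local Qwen model, and the `currentFile` context provider are all assumptions, based on Poe exposing an OpenAI-compatible endpoint that the `lmstudio` provider can talk to.

```yaml
# Sketch only: endpoint URL, model IDs, and providers below are assumptions,
# not copied from a confirmed working setup.
name: Poe via lmstudio workaround
version: 0.0.1
schema: v1

models:
  # Poe-hosted GPT-4o, added by pretending Poe is LM Studio
  # (both speak an OpenAI-compatible API).
  - name: GPT-4o (Poe)
    provider: lmstudio
    model: GPT-4o
    apiBase: https://api.poe.com/v1   # assumed OpenAI-compatible Poe endpoint
    apiKey: <YOUR_POE_API_KEY>
    roles:
      - chat
      - edit
      - apply

  # Locally run Qwen 2.5 model for autocomplete
  # (assumed to be served via Ollama here).
  - name: Qwen 2.5 Coder (local)
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete

context:
  # The fix from EDIT-2: make the current file available as context,
  # at the cost of sending more tokens per chat/agent call.
  - provider: currentFile
```

If agent actions misbehave, the `context` block is the first thing to check, since chat and agent calls need the current file in scope to act on it.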