* welcome page now shows login button
* adding cline apikey to context
* added cline provider + making sure welcome page closes
* adjusting welcome page language
* swapped cline provider to use openai flavor requests
* handling credit limit errors (super hacky ugh)
* welcome view hiding api options by default
* added back account page and relying more heavily on the firebase custom token to ensure persistent login
* persisting login state through window reloads
* renaming Account -> Cline Account
* use openrouter format with cline api
* Update welcome experience
* Fix error handling
* added generation details tracking to cline provider
* small edit to account view
* logging cline instead of openrouter
* logging cline instead of openrouter
* response.data instead of response.data.data
* set isdev to false for prerelease
* changeset added
* Update tricky-zebras-talk.md
* Update tricky-zebras-talk.md
* welcome page language
* more logging and updated authentication system to be more robust and reliable
* only removing custom token on explicit log out action
* even more aggressive custom token fetching + storing in apiConfig
* moving firebase auth logic to webview because nodejs firebase token refresh is not supported
* cleaned up all old logic for extension server side auth persistence
* reconciling merge conflicts with main
* reconciling merge conflicts
* package-lock.json
* package lock
* package-lock
* packagejson + lock
* use standard API Request Failed title for all API errors while maintaining detailed credit limit information in error content
* deleting packagejson line for vertexai
* reverting package-lock changes
* package-lock.json straight from main
* removing redundant csp allowance
* Fix package-lock
* Add missing options to cline stream
* Remove duplicate csp
* Remove email subscribe
* Update welcome page and cline account view
* Fixes
* Fix errors
---------
Co-authored-by: Saoud Rizwan <[email protected]>
```typescript
// this is specifically for claude models (some models may 'support prompt caching' automatically without this)
switch (model.id) {
	case "anthropic/claude-3.7-sonnet":
	case "anthropic/claude-3.7-sonnet:beta":
	case "anthropic/claude-3.7-sonnet:thinking":
	case "anthropic/claude-3-7-sonnet":
	case "anthropic/claude-3-7-sonnet:beta":
	case "anthropic/claude-3.5-sonnet":
	case "anthropic/claude-3.5-sonnet:beta":
	case "anthropic/claude-3.5-sonnet-20240620":
	case "anthropic/claude-3.5-sonnet-20240620:beta":
	case "anthropic/claude-3-5-haiku":
	case "anthropic/claude-3-5-haiku:beta":
	case "anthropic/claude-3-5-haiku-20241022":
	case "anthropic/claude-3-5-haiku-20241022:beta":
	case "anthropic/claude-3-haiku":
	case "anthropic/claude-3-haiku:beta":
	case "anthropic/claude-3-opus":
	case "anthropic/claude-3-opus:beta":
		openAiMessages[0] = {
			role: "system",
			content: [
				{
					type: "text",
					text: systemPrompt,
					// @ts-ignore-next-line
					cache_control: { type: "ephemeral" },
				},
			],
		}
		// Add cache_control to the last two user messages
		// (note: this works because we only ever add one user message at a time, but if we added multiple we'd need to mark the user message before the last assistant message)
```
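The "last two user messages" marking mentioned in the comment above can be sketched as follows. This is a minimal illustration with assumed message shapes, not the actual Cline implementation: it finds the last two user messages in an OpenAI-format array and attaches an ephemeral `cache_control` breakpoint to their final text part, normalizing string content to a parts array where needed.

```typescript
// Assumed shapes for illustration; the real types live in the provider code.
type TextPart = { type: "text"; text: string; cache_control?: { type: "ephemeral" } }
type Message = { role: "system" | "user" | "assistant"; content: string | TextPart[] }

function markLastTwoUserMessages(messages: Message[]): void {
	// Collect indices of all user messages, then keep the last two
	const userIndices = messages
		.map((m, i) => (m.role === "user" ? i : -1))
		.filter((i) => i !== -1)
	for (const i of userIndices.slice(-2)) {
		const msg = messages[i]
		// Normalize string content to a parts array so cache_control can be attached
		const parts: TextPart[] =
			typeof msg.content === "string" ? [{ type: "text", text: msg.content }] : msg.content
		// Mark the last text part as a cache breakpoint
		const lastText = [...parts].reverse().find((p) => p.type === "text")
		if (lastText) {
			lastText.cache_control = { type: "ephemeral" }
		}
		msg.content = parts
	}
}
```

Marking only the last two user turns keeps the number of cache breakpoints within Anthropic's limit while letting the cached prefix advance as the conversation grows.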
```typescript
// NOTE: this is fine since env details will always be added at the end. but if it weren't there, and the user added a image_url type message, it would pop a text part before it and then move it after to the end.
// Not sure how openrouter defaults max tokens when no value is provided, but the anthropic api requires this value and since they offer both 4096 and 8192 variants, we should ensure 8192.
	temperature = undefined // extended thinking does not support non-1 temperature
	reasoning = { max_tokens: budget_tokens }
}
break
}
```
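The temperature/reasoning handling in the fragment above can be sketched in isolation. Field names here are assumed from the diff, not verified against the OpenRouter API: for ":thinking" model variants, temperature is dropped (extended thinking requires the default) and a reasoning token budget is passed instead.

```typescript
// Assumed request shape for illustration only.
interface RequestOptions {
	model: string
	temperature?: number
	reasoning?: { max_tokens: number }
}

function applyThinkingOptions(opts: RequestOptions, budgetTokens: number): RequestOptions {
	if (opts.model.endsWith(":thinking")) {
		// Extended thinking does not support non-default temperature,
		// so clear it and supply the thinking budget instead.
		return { ...opts, temperature: undefined, reasoning: { max_tokens: budgetTokens } }
	}
	return opts
}
```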
```typescript
// Removes messages in the middle when close to context window limit. Should not be applied to models that support prompt caching since it would continuously break the cache.
// except for deepseek (which we set supportsPromptCache to true for), where because the context window is so small our truncation algo might miss and we should use openrouter's middle-out transform as a fallback to ensure we don't exceed the context window (FIXME: once we have a more robust token estimator we should not rely on this)
```
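The middle-removal idea described in the comments above can be reduced to a minimal sketch. This is a simplification for illustration: the real truncation logic preserves conversational structure (e.g. the first task message and paired user/assistant turns), while this version only keeps a fixed head and tail of the message array.

```typescript
// Drop messages from the middle of a conversation, keeping the first
// `keepHead` and last `keepTail` entries intact.
function truncateMiddle<T>(messages: T[], keepHead: number, keepTail: number): T[] {
	if (messages.length <= keepHead + keepTail) {
		return messages // nothing to drop
	}
	return [...messages.slice(0, keepHead), ...messages.slice(messages.length - keepTail)]
}
```

As the comments note, this should not be applied to prompt-cached models (it would invalidate the cached prefix on every truncation); OpenRouter's middle-out transform serves as a server-side fallback when local token estimation misses.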