
Commit 7dcb828

Error during code context and indexing (#32)
* fix: comment out unused token length calculation
* chore: updated changelog and removed notes
1 parent 92f505c commit 7dcb828

File tree

3 files changed: +10 additions, −22 deletions


CHANGELOG.md

Lines changed: 7 additions & 1 deletion
@@ -1,17 +1,23 @@
 # Changelog
 
-## [3.0.2] - YYYY-MM-DD
+## [3.0.2] - 2025-01-30
 
 ### Added
 
 - Merged changes from Cline 3.2.0 (see [changelog](https://github.com/cline/cline/blob/main/CHANGELOG.md#320)).
 - Added copy to clipboard for HAI tasks
 - Added ability to add custom instruction markdown files to the workspace
 - Added ability to dynamically choose custom instructions while conversing
+- Added inline editing (Ability to select a piece of code and edit it with HAI)
 
 ### Fixed
 
 - Fixed AWS Bedrock session token preserved in the global state
+- Fixed unnecessary LLM and embedding validation occurring on every indexing update
+- Fixed issue causing the extension host to terminate unexpectedly
+- Fixed LLM and embedding validation errors appearing on the welcome page post-installation
+- Fixed embedding configuration incorrectly validating when an LLM model name is provided
+- Fixed errors encountered during code context processing and indexing operations
 
 ## [3.0.1] - 2024-12-20
 

src/integrations/code-prep/CodeContextAddition.ts

Lines changed: 3 additions & 2 deletions
@@ -140,9 +140,10 @@ export class CodeContextAdditionAgent extends EventEmitter {
 
 // TODO: Figure out the way to calculate the token based on the selected model
 // currently `tiktoken` doesn't support other then GPT models.
+// commented the code since tokenLength not in use
 
-const encoding = encodingForModel("gpt-4o")
-const tokenLength = encoding.encode(fileContent).length
+// const encoding = encodingForModel("gpt-4o")
+// const tokenLength = encoding.encode(fileContent).length
 
 // TODO: `4096` is arbitrary, we need to figure out the optimal value for this. incase of `getModel` returns `null`
 const maxToken = llmApi.getModel().info.maxTokens ?? 4096 * 4 // 1 token ~= 4 char
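With the `tiktoken` call commented out, the remaining budget check effectively rests on the "1 token ≈ 4 characters" heuristic visible in the fallback above. A minimal, model-agnostic sketch of that estimate is shown below; the names `estimateTokens` and `fitsInContext` are hypothetical illustrations, not part of the extension:

```typescript
// Sketch (assumption, not the extension's actual code): estimate token count
// from character length using the "1 token ~= 4 chars" rule of thumb, so no
// model-specific tokenizer dependency is needed.
const CHARS_PER_TOKEN = 4

function estimateTokens(text: string): number {
	// Rough estimate; round up so short strings still count as at least one token.
	return Math.ceil(text.length / CHARS_PER_TOKEN)
}

function fitsInContext(fileContent: string, maxTokens: number | undefined): boolean {
	// Mirror the diff's fallback of `4096 * 4` when the model's limit is unknown.
	const budget = maxTokens ?? 4096 * 4
	return estimateTokens(fileContent) <= budget
}
```

A character-based estimate like this overshoots or undershoots depending on the language and tokenizer, so it is only suitable as a coarse guard, which matches the TODO notes in the diff.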

webview-ui/src/components/settings/ApiOptions.tsx

Lines changed: 0 additions & 19 deletions
@@ -650,17 +650,6 @@ const ApiOptions = ({
 placeholder={`Default: ${azureOpenAiDefaultApiVersion}`}
 />
 )}
-<p
-style={{
-fontSize: "12px",
-marginTop: 3,
-color: "var(--vscode-descriptionForeground)",
-}}>
-<span style={{ color: "var(--vscode-errorForeground)" }}>
-(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
-models. Less capable models may not work as expected.)
-</span>
-</p>
 </div>
 )}
 
@@ -784,10 +773,6 @@ const ApiOptions = ({
 local server
 </VSCodeLink>{" "}
 feature to use it with this extension.{" "}
-<span style={{ color: "var(--vscode-errorForeground)" }}>
-(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
-models. Less capable models may not work as expected.)
-</span>
 </p>
 </div>
 )}
@@ -845,10 +830,6 @@ const ApiOptions = ({
 style={{ display: "inline", fontSize: "inherit" }}>
 quickstart guide.
 </VSCodeLink>
-<span style={{ color: "var(--vscode-errorForeground)" }}>
-(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
-models. Less capable models may not work as expected.)
-</span>
 </p>
 </div>
 )}
