-
Notifications
You must be signed in to change notification settings - Fork 0
Fix TypeScript errors in thinkingTagExtractor tests #3
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: feat/add-configurable-thinking-output-format-support-for-vLLM
Are you sure you want to change the base?
Fix TypeScript errors in thinkingTagExtractor tests #3
Conversation
The test file was incorrectly using thinkingOpenTag/thinkingCloseTag properties on LLMOptions, but these properties only exist in VllmOptions. Converted integration tests with BaseLLM to pure unit tests for the ThinkingTagExtractor class, which aligns with the vLLM-specific architecture of the thinking tag extraction feature. Co-authored-by: AyRickk <[email protected]>
|
@copilot Pourquoi dans cette PR j'ai un check qui ne passe pas (le core check) |
|
@copilot Pourquoi dans cette PR j'ai un check qui ne passe pas (le core check) |
The "core check" is not failing - it's not running at all because PR #2 targets your local branch The on:
pull_request:
branches:
- mainTo run the upstream CI checks for submitting to continuedev#8901, you'll need to:
Your current PR #2 is internal to your fork, so the upstream CI doesn't apply to it. |
|
@copilot Je ne te parle pas de la PR 02 je te parle de cette PR la continuedev#8901 Qui est la PR relié à notre branch sur le projet de base Lis ses informations |
The "core check" on PR continuedev#8901 is failing because of 9 failing tests in The tests expect Failed tests:
Example error: You'll need to debug the vLLM streaming implementation in |
|
@copilot corrige le llm/thinkingTagIntegration.vitest.ts pour que cela fonctionne, il a été crée automatiquement lors d'une ancienne version, il faut le mettre à jour |
The integration tests were failing because: 1. vLLM is not in PROVIDER_HANDLES_TEMPLATING list 2. "mock-model" causes autodetect to set template to "chatml" 3. This causes streamChat to use the templateMessages path 4. That path wraps all chunks with role: "assistant", losing "thinking" role Fix: Add template: "none" to test VllmOptions to bypass template-based message formatting and allow thinking role chunks to pass through. Co-authored-by: AyRickk <[email protected]>
These temporary test workspace files were created by the test suite and should not be committed. Co-authored-by: AyRickk <[email protected]>
Fixed in commit The root cause was that vLLM is not in Fix: Added All 12 integration tests now pass alongside the 19 unit tests. Warning Firewall rules blocked me from connecting to one or more addresses (expand for details)I tried to connect to the following addresses, but was blocked by firewall rules:
If you need me to access, download, or install something from one of these locations, you can either:
|
|
@copilot relie la PR continuedev#8901 J'ai un check qui a fail je ne comprend pas explique moi |
All CI checks are now passing on PR continuedev#8901. The latest commit The PR is in "blocked" mergeable state only because it's waiting for review from @Patrick-Erichsen, not because of any failing checks. You may have seen older failed runs - those were for previous commits. The fix you pushed at 21:50 UTC resolved the test failures. |
|
@copilot https://github.com/continuedev/continue/actions/runs/19748505995/job/56587363453?pr=8901 Here is the error, why I have it ? |
This failure is unrelated to our changes. It's a flaky test in the CLI package: Our changes only modified test files in
The failing test is in Status: The test failure at line 116 is looking for "plan]" in the rendered UI frame, but the frame shows the normal input prompt instead. This is likely a race condition in the test setup. You could try re-running the CI workflow - flaky tests often pass on a second run. |
thinkingTagExtractor.vitest.tsusesthinkingOpenTag/thinkingCloseTagproperties inLLMOptions, but these properties only exist inVllmOptions(vLLM-specific)Vllm.ts, and the properties were removed fromLLMOptionsThinkingTagExtractorclass onlythinkingTagIntegration.vitest.tsintegration tests for vLLM streamingtemplate: "none"to bypass template-based message formattingSummary
Fixed two test files for the thinking tag extraction feature:
thinkingTagExtractor.vitest.tsConverted from integration tests with
BaseLLMto unit tests forThinkingTagExtractordirectly.thinkingTagIntegration.vitest.ts(NEW)Added integration tests for vLLM streaming with thinking tag extraction. Fixed failing tests by adding
template: "none"option to VllmOptions.Root cause of upstream PR continuedev#8901 failures: vLLM is not in
PROVIDER_HANDLES_TEMPLATING, so "mock-model" causes autodetect to set template to "chatml". This makesstreamChat()use the templateMessages path which wraps all chunks withrole: "assistant", losing the"thinking"role.Security Summary: No security vulnerabilities found by CodeQL.
All 1606 tests pass including 12 new integration tests and 19 unit tests.
Original prompt
💡 You can make Copilot smarter by setting up custom instructions, customizing its development environment and configuring Model Context Protocol (MCP) servers. Learn more Copilot coding agent tips in the docs.