
add unit tests #16

Open

satvik007 wants to merge 3 commits into main from dev/satvik/add-unit-tests

Conversation

@satvik007
Contributor

No description provided.

@gemini-code-assist

Summary of Changes

Hello @satvik007, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's test coverage by introducing a comprehensive set of unit tests. The primary goal is to ensure the reliability and correctness of core functionalities, API tool integrations, and data validations. This effort also includes setting up the necessary testing infrastructure and standardizing schema definitions for better maintainability.

Highlights

  • Comprehensive Unit Test Suite: Introduced a new and extensive suite of unit tests using Vitest, covering various modules including configuration, schema validations, error handling, and all API tools (projects, custom fields, folders, requirements, shared preconditions, shared steps, tags, and test cases).
  • Centralized Schema Definition: Refactored the testCaseMarkerSchema from a local definition within src/tools/tcases.ts to a shared src/schemas.ts file, promoting reusability and consistency (see the sketch after this list).
  • Enhanced Development Scripts: Added new scripts to package.json for dedicated type checking (typecheck) and running unit tests (test:unit), streamlining the development and CI/CD process.
  • Mock Data Fixtures: Created a dedicated directory for mock data fixtures to facilitate isolated and reliable testing of API tool interactions without relying on actual API calls.
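For orientation, a shared marker schema of this kind might look like the following Zod sketch; the exact regex and error message in src/schemas.ts may differ:

import { z } from 'zod'

// Sketch of a shared marker schema: PROJECT_CODE-SEQUENCE, e.g. "BDI-123".
// The actual pattern in src/schemas.ts may differ.
export const testCaseMarkerSchema = z
  .string()
  .regex(/^[A-Z][A-Z0-9]*-\d+$/, 'Expected a marker like PROJECT-123')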


Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/ci.yml

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@claude

claude bot commented Dec 24, 2025

Pull Request Review: Add Unit Tests

Summary

This PR adds comprehensive unit test coverage for the QA Sphere MCP server, including tests for all tools, configuration, error handling, and schemas. It also introduces a CI workflow to run tests, formatting, linting, type checking, and builds automatically.

Code Quality & Best Practices ✅

Strengths:

  1. Excellent test organization - Tests are well-structured with clear separation between fixtures, unit tests, and setup utilities
  2. Comprehensive coverage - All major tools are tested (projects, test cases, folders, tags, custom fields, requirements, shared steps/preconditions)
  3. Good use of mocking - Properly mocks axios and configuration to avoid real API calls
  4. CI/CD implementation - GitHub Actions workflow includes all essential checks (format, lint, typecheck, build, test)
  5. Error handling tests - Dedicated test suite for network errors, server errors, authentication, and validation
  6. Schema validation tests - Ensures project codes and test case markers follow expected patterns

Suggested Improvements:

1. Missing vitest configuration file

The tests reference vitest but there's no vitest.config.ts. Consider adding:

import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    setupFiles: ['./src/tests/setup.ts'],
  },
})

2. Test file structure in tsconfig

The tsconfig.json excludes node_modules but doesn't explicitly include test files. Consider updating to handle test files:

{
  "include": ["src/**/*"],
  "exclude": ["node_modules", "dist"]
}

3. Type safety in test mocking

In src/tests/setup.ts, the MockedAxios type could be more robust:

import type { Mock } from 'vitest'

export type MockedAxios = {
  get: Mock<any, any>
  post: Mock<any, any>
  patch: Mock<any, any>
  isAxiosError: Mock<any, any>
}

Potential Bugs or Issues ⚠️

1. Module reset in config tests

Location: src/tests/unit/config.test.ts:12

The test uses vi.resetModules() but may not properly handle ES module caching. The dynamic import pattern is correct, but ensure the module is fully reloaded between tests.

Suggestion: Verify that environment variable changes are properly isolated between tests.

2. Test isolation concern

Location: Multiple test files

Tests import from the actual tool modules, which triggers module-level code execution. While the axios mock should prevent actual API calls, consider whether this could cause side effects.

Current pattern:

const { registerTools } = await import('../../../tools/tcases.js')

This is acceptable but ensure mocks are set up before any imports.
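In Vitest, the hoisting of vi.mock gives that guarantee; a minimal sketch, using the module path quoted above:

import { it, vi } from 'vitest'

// vi.mock calls are hoisted by Vitest, so axios is already mocked by the time
// the tool module's top-level code runs during the dynamic import
vi.mock('axios')

it('registers tools with mocked axios', async () => {
  const { registerTools } = await import('../../../tools/tcases.js')
  // ...assertions against the mocked axios client go here
})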

3. Error message assertions could be more specific

Location: src/tests/unit/error-handling.test.ts:35,49,89,109,135,165

Tests use .rejects.toThrow('Failed to fetch project') which matches partial strings. Consider using more specific matchers or regex patterns to ensure exact error messages.
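As a sketch, an anchored regex tightens the match; the handler argument and exact message format here are illustrative:

// Anchoring the regex fails the test if the message merely contains the
// substring somewhere else; adjust the pattern to the project's real messages
await expect(handler({ id: 'unknown' })).rejects.toThrow(/^Failed to fetch project\b/)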

Performance Considerations ✅

Good:

  1. Parallel test execution - Vitest runs tests in parallel by default
  2. Minimal dependencies - Test fixtures are lightweight
  3. Efficient mocking - No actual network calls are made

Minor concern:

The CI workflow runs formatting and linting separately. Consider combining them:

- name: Check code quality
  run: npx biome check .

Security Concerns ✅

Good practices:

  1. API key mocking - Real credentials are never used in tests
  2. URL normalization tested - Config tests verify URL handling including protocol and trailing slashes
  3. Input validation - Schema tests ensure proper validation of project codes and markers
  4. Authentication error handling - 401/403 errors are properly tested

Recommendations:

  1. Consider adding tests for potential injection attacks in parameters (though Zod schemas should prevent this)
  2. Add tests for maximum-length strings to ensure oversized inputs are handled gracefully (see the sketch after this list)
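A sketch of such a test, assuming projectCodeSchema is exported from src/schemas.ts (the import path and length are illustrative):

import { expect, it } from 'vitest'
import { projectCodeSchema } from '../../schemas.js' // assumed export location

it('handles very long input without throwing', () => {
  const oversized = 'A'.repeat(100_000)
  // safeParse never throws; whether oversized input fails validation depends
  // on the schema, which may warrant an explicit .max() length cap
  expect(() => projectCodeSchema.safeParse(oversized)).not.toThrow()
})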

Test Coverage 📊

Well-covered areas:

  • ✅ All tool registration and handlers
  • ✅ Error scenarios (404, 401, 403, 400, 500)
  • ✅ Schema validation (projectCodeSchema, testCaseMarkerSchema)
  • ✅ Config URL normalization
  • ✅ Network errors (ECONNREFUSED, ECONNABORTED)

Missing coverage:

  • ⚠️ src/utils.ts - The JSONStringify utility function isn't tested (see the sketch after this list)
  • ⚠️ src/index.ts - Server initialization and MCP setup
  • ⚠️ Edge cases for list_test_cases with all filter combinations
  • ⚠️ Template test case specific functionality
  • ⚠️ update_test_case tool - No tests found for the PATCH endpoint
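A starting point for the utils gap might look like this; the export name, path, and output format are assumptions:

import { describe, expect, it } from 'vitest'
import { JSONStringify } from '../../utils.js' // assumed export name and path

describe('JSONStringify', () => {
  it('serializes a plain object to a string containing its keys', () => {
    const result = JSONStringify({ a: 1 })
    expect(typeof result).toBe('string')
    expect(result).toContain('"a"')
  })
})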

Specific File Feedback

.github/workflows/ci.yml

Line 26: npx biome format . lacks the --check flag, so it will modify files instead of checking them. Use:

- name: Check formatting
  run: npx biome format --check .

src/schemas.ts:7-11

Good addition of testCaseMarkerSchema! The regex pattern correctly validates the PROJECT_CODE-SEQUENCE format.

src/tools/tcases.ts (lines removed)

Lines removed: 9 deletions

Without seeing the diff context, ensure that any removed code wasn't necessary. The test file suggests the tool still works, so this is likely cleanup.

Summary & Recommendations

This is a high-quality PR that significantly improves the project's reliability and maintainability. The test suite is comprehensive and well-structured.

Before merging:

  1. ✅ Add vitest.config.ts for explicit test configuration
  2. ✅ Fix the CI formatting check to use --check flag
  3. ✅ Add tests for utils.ts (JSONStringify)
  4. ✅ Add tests for update_test_case tool
  5. ✅ Consider adding integration test documentation

Optional enhancements:

  • Add test coverage reporting (e.g., vitest --coverage)
  • Add badges to README showing build status and coverage
  • Consider adding e2e tests with a mock QA Sphere API server

Overall verdict: Excellent work! With the minor fixes suggested above, this PR will provide a solid foundation for maintaining code quality going forward.


Generated by Claude Code 🤖

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a comprehensive suite of unit tests, which is a great addition for improving code quality and stability. The changes include adding test scripts to package.json, creating numerous test files for schemas, configuration, error handling, and various tools, along with necessary test fixtures and setup files. A good refactoring was also done by moving testCaseMarkerSchema to a shared schemas.ts file.

My review focuses on the new test implementations. I've found a couple of areas for improvement in the tests to make them more robust and correct. Please see my detailed comments.

@claude

claude bot commented Jan 20, 2026

Pull Request Review: Add Unit Tests

Overview

This PR adds comprehensive unit tests to the qasphere-mcp project, along with a CI workflow. This is an excellent addition that significantly improves code quality and maintainability.

Summary: Overall strong contribution with minor issues to address


Positive Highlights

  1. Comprehensive Test Coverage - Tests cover all major tools (projects, folders, tags, test cases, requirements, shared steps/preconditions, custom fields)
  2. Well-Organized Structure - Clear separation between fixtures, utils, and test files
  3. Good Error Testing - Covers various HTTP error codes (401, 403, 404, 400, 500)
  4. CI Integration - GitHub Actions workflow properly configured
  5. Schema Refactoring - Good extraction of testCaseMarkerSchema to avoid duplication

Issues Found

Critical Issues

1. Module Import Caching Problem (src/tests/unit/config.test.ts)

Severity: High

The config tests use dynamic imports with vi.resetModules() to test different environment configurations. However, there is a critical ordering issue in the afterEach hook where modules are reset AFTER the environment is restored. This could cause race conditions or test pollution.

Fix: Reset modules before restoring env in the afterEach hook.
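A minimal sketch of that ordering, assuming the hook snapshots and restores process.env (the hook's actual contents aren't shown here):

import { afterEach, vi } from 'vitest'

const ORIGINAL_ENV = { ...process.env }

afterEach(() => {
  vi.resetModules()                 // drop cached modules first...
  process.env = { ...ORIGINAL_ENV } // ...then restore the environment
})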

2. Missing Error Handling in mockToolCall (src/tests/utils.ts:11)

Severity: Medium-High

The mockToolCall utility uses optional chaining to access the handler, but if find() returns undefined, it will fail silently, leading to cryptic test failures.

Fix: Add proper error handling with helpful error messages when the tool is not found.

Medium Issues

3. CI Workflow Missing Check Flag (.github/workflows/ci.yml:27)

Severity: Medium

The "Check formatting" step runs "npx biome format ." which will MODIFY files instead of just checking them. This defeats the purpose of CI validation.

Fix: Change to "npx biome format --check ."

4. Inconsistent Error Message Testing

Severity: Medium

Some tests only check for generic messages like "Failed to fetch project" without verifying the specific error details. This makes tests less robust.

Recommendation: For known error scenarios, test for more specific error messages when possible.

5. Magic Array Index in mockToolCall (src/tests/utils.ts:10-11)

Severity: Low-Medium

Hardcoded index [3] is fragile. If the MCP SDK changes the tool() method signature, all tests will break silently.

Recommendation: Add a runtime assertion or document the expected signature more clearly.

Minor Issues / Suggestions

6. Type Safety Issues

Severity: Low

Multiple uses of "as any" throughout the test files. Consider creating proper type definitions for the mock server to improve type safety.

7. Incomplete Test: Mixed Case Protocol (src/tests/unit/config.test.ts:48-54)

Severity: Low

The test expects HTTPS:// to remain uppercase, but the implementation uses .toLowerCase() for protocol checking. Please verify if the actual behavior normalizes to lowercase or preserves case.

8. No Vitest Config File

Severity: Low

The project uses Vitest but has no vitest.config.ts. Consider adding explicit configuration for test patterns, coverage reporting, and environment setup.


Security Concerns

No Critical Security Issues Found:

  • API key is properly mocked in tests
  • No hardcoded credentials
  • Test fixtures use safe mock data

Test Coverage Assessment

Coverage is strong but consider adding:

  1. Edge case tests: Empty strings, very long inputs, Unicode/special characters, null/undefined handling
  2. Integration scenarios: Multiple simultaneous tool calls, state changes between operations
  3. Network resilience: Retry logic, rate limiting handling

Recommendations

Must Fix Before Merge

  1. Fix module reset order in config.test.ts (Critical)
  2. Add error handling to mockToolCall utility (Critical)
  3. Fix CI workflow format check command (Medium)

Should Fix

  1. Improve type safety (remove as any)
  2. Verify/fix mixed case protocol test
  3. Add vitest configuration file

Nice to Have

  1. Add more edge case tests
  2. Add coverage reporting to CI
  3. Add test documentation in README

Conclusion

This is a high-quality PR that adds significant value to the project. The test infrastructure is well-designed and comprehensive. The critical issues are minor and easily fixable. Once the must-fix items are addressed, this PR should be merged.

Recommendation: Approve with requested changes

Great work on establishing the testing foundation!

@claude

claude bot commented Jan 20, 2026

Pull Request Review: Add Unit Tests

Summary

This is an excellent PR that adds comprehensive unit test coverage to the qasphere-mcp project! The addition of 1,488 lines of test code with minimal changes to the actual codebase (just 9 lines deleted) shows a well-focused testing effort. Overall, this is high-quality work that significantly improves the project's maintainability and reliability.

✅ Strengths

1. Excellent Test Organization

  • Clear separation of concerns with dedicated fixture files for each domain
  • Well-structured test files mirroring the source code organization
  • Good use of beforeEach hooks for test isolation

2. Comprehensive Coverage

  • Tests cover all major tool categories (projects, test cases, folders, requirements, etc.)
  • Good mix of happy path and error scenario testing
  • Network error handling is thoroughly tested (401, 403, 404, 500, timeouts, connection errors)

3. Good Testing Patterns

  • Proper use of mocks for external dependencies (axios)
  • Fixtures are well-designed and reusable
  • Test descriptions are clear and follow consistent naming conventions

4. CI/CD Integration

  • Added GitHub Actions workflow with comprehensive checks (format, lint, typecheck, build, test)
  • Good use of Node.js 20 and npm caching

5. Code Refactoring

  • Moving testCaseMarkerSchema from tcases.ts to schemas.ts is a good architectural improvement (src/schemas.ts:72-78)

🔍 Issues & Recommendations

1. Critical: Fragile Test Helper (src/tests/utils.ts:11)

const handler = mockServer.tool.mock.calls.find((call: any) => call[0] === toolName)?.[3]

Issue: The magic index [3] is brittle and relies on the internal implementation of the MCP SDK. If the SDK changes the order of arguments to server.tool(), all tests will break.

Recommendation: Add a comment explaining what index 3 represents, or better yet, add validation:

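// Assumes the MCP SDK signature server.tool(name, description, schema, handler),
// so the handler sits at index 3 of the recorded call: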
const handler = mockServer.tool.mock.calls.find((call: any) => call[0] === toolName)?.[3]
if (!handler) {
  throw new Error(`Tool '${toolName}' not found or handler not at expected position`)
}

2. Missing Error Assertions (Multiple test files)

Many error tests only check that an error is thrown but don't verify the error type or specific properties.

Example: src/tests/unit/tools/tcases.test.ts:60

await expect(handler({ marker: 'BDI-999' })).rejects.toThrow('Failed to fetch test case')

Recommendation: For better error handling verification, also check error types when appropriate:

await expect(handler({ marker: 'BDI-999' })).rejects.toThrow(expect.objectContaining({
  message: expect.stringContaining('Failed to fetch test case')
}))

3. Incomplete Vitest Type Declaration (src/tests/unit/vitest.d.ts:1-12)

The type declaration for axios mock doesn't include common axios properties like defaults, create, etc.

Recommendation: Consider using a more complete mock type or document why only these specific methods are needed:

// Mock only the methods used in tests to keep mocks minimal

4. Test Isolation Concern (src/tests/unit/config.test.ts:396)

The config tests use vi.resetModules() to reload the module, but this could be fragile if there are other module-level side effects.

Recommendation: Consider using a test environment that better isolates environment variables, or add a comment explaining why module reset is necessary here.

5. Missing Vitest Configuration

The PR adds test scripts but no vitest.config.ts file.

Recommendation: Add a vitest config file for better control over test execution:

// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html']
    }
  }
})

6. Potential Race Condition in Config Tests (src/tests/unit/config.test.ts)

The dynamic imports in config tests could have timing issues if other tests are running concurrently.

Recommendation: Consider running these tests sequentially (for example with Vitest's describe.sequential, sketched below) or using proper test isolation strategies.
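A minimal sketch, assuming Vitest's describe.sequential; the env variable and config path are hypothetical:

import { describe, expect, it } from 'vitest'

// describe.sequential runs the contained tests one at a time, avoiding
// env-variable races with concurrently running suites
describe.sequential('config (env-sensitive)', () => {
  it('re-reads the environment on import', async () => {
    process.env.QASPHERE_URL = 'https://example.qasphere.com' // hypothetical variable
    const config = await import('../../config.js')            // hypothetical path
    expect(config).toBeDefined()
  })
})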

7. CI Workflow: Missing Test Coverage Reporting

The CI workflow runs tests but doesn't report coverage metrics.

Recommendation: Add coverage reporting to the CI workflow:

- name: Run tests with coverage
  run: npm test -- --coverage

- name: Upload coverage reports
  uses: codecov/codecov-action@v3
  if: always()

8. Fixture Data Could Be More Realistic

Some fixture data uses placeholder values like 'uuid-123' which might not match actual UUID formats.

Recommendation: Use realistic UUIDs or document that these are intentionally simplified for testing:

// Using simplified IDs for easier test debugging
id: 'uuid-123',

9. Missing Tests for Edge Cases

  • No tests for pagination edge cases (page 0, negative page, limit exceeding max)
  • No tests for concurrent requests or rate limiting
  • No tests for malformed API responses (missing required fields in responses)

🔒 Security Considerations

Good:

  • API keys are properly mocked in tests, no hardcoded credentials
  • Tests verify authentication and authorization error handling (401, 403)

⚠️ Note:

  • Consider adding tests for API key validation edge cases (empty string, special characters, etc.)

⚡ Performance Considerations

Good:

  • Tests properly mock HTTP calls, avoiding real network requests
  • Fixtures are lightweight

Suggestion:

  • Consider adding performance benchmarks for critical paths if the codebase grows
  • Current test approach is appropriate for unit tests

📋 Test Coverage Assessment

Well Covered:

  • ✅ Error handling (network errors, HTTP status codes)
  • ✅ Happy path scenarios for all tools
  • ✅ Schema validation
  • ✅ Configuration handling

Could Use More Coverage:

  • ⚠️ Edge cases in pagination and filtering
  • ⚠️ Response data transformation edge cases
  • ⚠️ URLSearchParams construction with various input combinations

📝 Minor Code Style Notes

  1. Consistent Mock Structure (src/tests/setup.ts:3-8): The MockedAxios type is well-defined but could include JSDoc comments explaining its purpose.

  2. Type Safety: Good use of TypeScript throughout. The as any cast in mockToolCall (src/tests/utils.ts:7) is appropriate for test utilities but could have a comment explaining why.

🎯 Recommendations Priority

High Priority:

  1. Fix the fragile array index in mockToolCall helper
  2. Add vitest configuration file
  3. Add error type validation to critical error tests

Medium Priority:

  4. Add test coverage reporting to CI
  5. Add more edge case tests for pagination
  6. Improve error assertion specificity

Low Priority:

  7. Add JSDoc comments to test utilities
  8. Use more realistic fixture data
  9. Add performance benchmarks (future consideration)

✨ Overall Assessment

Rating: Excellent (4.5/5)

This PR demonstrates strong testing practices and adds significant value to the project. The test suite is well-organized, comprehensive, and follows good patterns. The issues identified are relatively minor and mostly involve making the tests more robust and adding additional coverage for edge cases.

Recommendation: Approve with minor suggestions

The PR is production-ready as-is, but addressing the high-priority recommendations would make it even more robust. Excellent work! 🎉

@ramilamparo (Contributor) left a comment

Looks good to me. I made some changes that extract the tool-mocking logic; it should be easier to update the tests once we upgrade @modelcontextprotocol/mcp.
