
Conversation

@Saahi30 (Collaborator) commented Oct 27, 2025

📝 Description

This pull request sets up the testing scaffolding for the project, providing a foundation for writing and running tests efficiently. It includes the configuration of testing frameworks, setup files, and initial test cases to ensure the codebase is test-ready.

🔧 Changes Made

  • Configured the testing framework (Pytest) for the project.
  • Added setup files for initializing the testing environment.
  • Created example test cases to demonstrate the testing structure.
  • Updated documentation to include instructions for running tests.
  • Integrated testing scripts into the CI/CD pipeline.

✅ Checklist

  • [x] I have read the contributing guidelines.

Summary by CodeRabbit

Release Notes

  • New Features
    • Added comprehensive Content Analytics API enabling content linking to contracts, sync management, and performance metric retrieval.
    • Introduced ROI calculations, contract-wide analytics aggregation, and audience demographics analysis.
    • Added manual sync triggers and content preview functionality for seamless workflow management.

@coderabbitai coderabbitai bot (Contributor) commented Oct 27, 2025

Walkthrough

This pull request introduces a comprehensive Content Analytics API module for the backend with endpoints for contract-content linking, content syncing, and analytics data retrieval including ROI and demographics calculations. It also adds extensive test suites for backend services and frontend components, alongside frontend test configuration and setup files.

Changes

Cohort / File(s) / Change Summary

  • Backend Content Analytics API
    Backend/app/routes/content_analytics_backup.py
    New module with router, request/response models, and 11+ endpoints for contract-content linking, preview, sync, and analytics retrieval. Includes auth verification, error handling, and database interactions with User, ContractContentMapping, ContentAnalytics models.
  • Backend Analytics API Tests
    Backend/test_analytics_api.py
    New integration test class with 14+ test methods covering endpoint functionality, role-based access control, error handling, and parameter validation across content linking, syncing, and analytics workflows.
  • Backend ROI Service Tests
    Backend/test_roi_service.py
    New unit test suite validating ROI calculations, CPA, CTR, engagement rates, campaign and portfolio ROI methods, trend calculations, and edge case handling.
  • Frontend Integration Tests
    Frontend/src/__tests__/integration/integration-workflows.test.ts, Frontend/src/__tests__/integration/end-to-end-workflows-fixed.test.tsx
    New test suites for integration service workflow management, analytics page end-to-end flows, API parameter validation, error handling, performance scenarios, and real-time updates.
  • Frontend Component Unit Tests
    Frontend/src/components/ui/__tests__/empty-state.test.tsx, Frontend/src/components/ui/__tests__/error-state.test.tsx
    New test suites for EmptyState and ErrorState components covering variants, callbacks, sizes, and content rendering.
  • Frontend Hook Unit Tests
    Frontend/src/hooks/__tests__/useAudienceAnalytics*.test.ts, Frontend/src/hooks/__tests__/useEmailPreferences.test.ts, Frontend/src/hooks/__tests__/useExportData.test.ts, Frontend/src/hooks/__tests__/useIntegration.test.ts, Frontend/src/hooks/__tests__/useRealTimeAnalytics.performance.test.ts, Frontend/src/hooks/__tests__/useRetry.test.ts
    New comprehensive test suites for audience analytics, email preferences, export data, integration, real-time analytics (performance), and retry hooks, with mocked services and state verification.
  • Frontend Service Unit Tests
    Frontend/src/services/__tests__/emailPreferencesService.test.ts, Frontend/src/services/__tests__/errorHandlingService.test.ts, Frontend/src/services/__tests__/integrationService.test.ts
    New test suites for email preferences, error handling, and integration services verifying API interactions, error responses, and workflow execution.
  • Frontend Page Tests
    Frontend/src/pages/__tests__/Analytics.test.tsx
    New test suite for Analytics page covering rendering, tab switching, data refresh, export, error handling, and mock data fallback.
  • Frontend Test Configuration
    Frontend/vitest.config.ts, Frontend/src/test-setup.ts, Frontend/src/test-setup.md
    New Vitest configuration with React support, test environment setup, global mocks for localStorage and browser APIs, and comprehensive test documentation.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant API as Content Analytics API
    participant Auth as Auth Service
    participant DB as Database
    participant SyncSvc as Sync Service

    User->>API: POST /contracts/{id}/content (link content)
    API->>Auth: verify_contract_access()
    Auth->>DB: check user sponsorship/ownership
    DB-->>Auth: access verified
    Auth-->>API: access granted
    API->>DB: create ContractContentMapping
    DB-->>API: mapping created
    API->>SyncSvc: schedule_sync_job()
    SyncSvc->>SyncSvc: trigger immediate sync
    SyncSvc->>DB: update ContentAnalytics
    SyncSvc-->>API: sync complete
    API-->>User: LinkContentResponse (200)

sequenceDiagram
    participant User
    participant API as Analytics Endpoints
    participant DB as Database
    participant Auth as Auth Service

    User->>API: GET /analytics/contracts/{id}
    API->>Auth: verify_contract_access()
    Auth-->>API: access verified
    API->>DB: query ContentAnalytics for contract
    DB-->>API: analytics records
    API->>API: aggregate by date range
    API->>API: compute metrics (engagement, ROI)
    API-->>User: aggregated analytics (200)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

  • Backend API module (content_analytics_backup.py): Dense implementation with 11+ endpoints, access control logic, database interactions, multiple service integrations, and error handling patterns requiring close validation of business logic and authorization checks.
  • ROI and analytics calculations in backend tests and service: Math-heavy logic with edge cases (zero values, missing data) and derived metrics that need verification for correctness.
  • Frontend test suite breadth: 15+ new test files across components, hooks, services, and pages with diverse mocking strategies (fetch, context, localStorage), requiring review of mock setup accuracy and test completeness.
  • Heterogeneous changes: Mix of API endpoints, unit tests, integration tests, and configuration files creates varied review contexts rather than repetitive patterns.

Areas requiring extra attention:

  • Authorization and access control in verify_contract_access() and related endpoint guards
  • ROI calculations and financial metric derivations in test assertions
  • Integration test mocking accuracy for API responses and state transitions
  • Vitest configuration and test-setup.ts globals ensuring tests don't interfere with each other

Poem

🐰 Hop into analytics so keen,
APIs linking contracts clean,
Tests trace each flow and metric bright,
ROI shines through the test night,
Vitest hops in, mock and true—
Quality bounces, PR brand new!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title "FEAT: Set Up Testing Scaffolding for Project" is partially related to the changeset. It accurately describes a real and substantial aspect of the changes: multiple test configuration files, setup utilities, and comprehensive test suites are being added across both backend and frontend (vitest.config.ts, test-setup.ts, test-setup.md, and numerous test files). However, the title does not capture the full scope of the PR, which also introduces a significant new feature, the Content Analytics API (Backend/app/routes/content_analytics_backup.py), with multiple endpoints for content linking, syncing, and analytics operations. The title focuses on one major aspect but omits the other.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 95.83%, which is sufficient. The required threshold is 80.00%.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 14

♻️ Duplicate comments (1)
Backend/test_analytics_api.py (1)

487-489: Avoid catching bare Exception in test runner.

Either remove runner (preferred) or catch specific requests.exceptions.RequestException.

Since the runner is recommended for removal, this is addressed by the previous comment.

🧹 Nitpick comments (23)
Frontend/src/test-setup.ts (1)

1-37: Auto‑clear mocks between tests to reduce flakiness

Add per‑test cleanup for spies/mocks and any stubbed globals.

 import { vi } from 'vitest'
@@
 Object.defineProperty(window.URL, 'revokeObjectURL', {
   writable: true,
   value: vi.fn(),
 });
+
+// Ensure clean state between tests
+afterEach(() => {
+  vi.clearAllMocks()
+  // Vitest >=1 provides unstubAllGlobals; optional chaining keeps compatibility
+  // @ts-ignore
+  vi.unstubAllGlobals?.()
+})
Frontend/vitest.config.ts (1)

7-11: Set jsdom base URL to stabilize absolute/relative URL logic in tests

Prevents unexpected URL resolution in code using new URL(...) or relative fetches.

   test: {
     globals: true,
     environment: 'jsdom',
     setupFiles: ['./src/test-setup.ts'],
+    environmentOptions: {
+      jsdom: {
+        url: 'http://localhost/',
+      },
+    },
   },
Frontend/src/services/__tests__/emailPreferencesService.test.ts (2)

28-35: Avoid asserting a hard‑coded absolute base URL

Make tests resilient to env/config by matching path and options only.

-      expect(fetch).toHaveBeenCalledWith(
-        'http://localhost:8000/api/email-preferences/',
-        {
-          headers: {
-            'Content-Type': 'application/json',
-          },
-        }
-      );
+      expect(fetchMock).toHaveBeenCalledWith(
+        expect.stringMatching(/\/api\/email-preferences\/$/),
+        expect.objectContaining({
+          headers: expect.objectContaining({ 'Content-Type': 'application/json' }),
+        })
+      );

Apply similarly to update calls:

-      expect(fetch).toHaveBeenCalledWith(
-        'http://localhost:8000/api/email-preferences/',
-        {
-          method: 'PUT',
-          headers: { 'Content-Type': 'application/json' },
-          body: JSON.stringify({ email_notifications_enabled: false }),
-        }
-      );
+      expect(fetchMock).toHaveBeenCalledWith(
+        expect.stringMatching(/\/api\/email-preferences\/$/),
+        expect.objectContaining({
+          method: 'PUT',
+          headers: expect.objectContaining({ 'Content-Type': 'application/json' }),
+          body: JSON.stringify({ email_notifications_enabled: false }),
+        })
+      );

Also applies to: 74-83, 112-121


7-9: One‑shot mock instances for clarity

Prefer mockResolvedValueOnce/mockRejectedValueOnce for sequential flows to avoid accidental reuse across tests.

Also applies to: 11-13, 21-25, 39-46, 52-58, 67-71, 87-94, 100-108, 127-135, 142-149

Frontend/src/hooks/__tests__/useEmailPreferences.test.ts (1)

39-47: Prefer waitFor over setTimeout(0) to flush effects predictably

Improves determinism and readability.

-    await act(async () => {
-      await new Promise(resolve => setTimeout(resolve, 0));
-    });
+    await waitFor(() => {
+      expect(result.current.loading).toBe(false)
+    })

Apply similarly to other occurrences using setTimeout(resolve, 0).

Also applies to: 55-62, 74-85, 96-110, 117-131, 138-143, 151-167

Frontend/src/components/ui/__tests__/empty-state.test.tsx (1)

1-107: Solid coverage and assertions

Looks good. Optional: prefer userEvent and role‑based queries for interactions/accessibility.

-import { render, screen, fireEvent } from '@testing-library/react';
+import { render, screen } from '@testing-library/react';
+import userEvent from '@testing-library/user-event';
@@
-const actionButton = screen.getByText('Refresh');
-fireEvent.click(actionButton);
+await userEvent.click(screen.getByRole('button', { name: /refresh/i }));
Frontend/src/components/ui/__tests__/error-state.test.tsx (1)

1-83: Comprehensive and clear tests

All good. Optional: use userEvent and role queries; also assert disabled state via role where applicable.

-import { render, screen, fireEvent } from '@testing-library/react';
+import { render, screen } from '@testing-library/react';
+import userEvent from '@testing-library/user-event';
@@
-const retryButton = screen.getByText('Try Again');
-fireEvent.click(retryButton);
+await userEvent.click(screen.getByRole('button', { name: /try again/i }));
Frontend/src/test-setup.md (1)

39-65: Use Vitest APIs in the setup snippet (replace Jest globals) and harden browser API mocks

The example uses jest.fn() and the Jest entry of jest‑dom in a Vitest project. Switch to vi.fn() and @testing-library/jest-dom/vitest. Also prefer defining properties on window for localStorage/matchMedia to match jsdom behavior.

Apply to the code block shown in this doc:

-import '@testing-library/jest-dom'
+import '@testing-library/jest-dom/vitest'
+import { vi, expect } from 'vitest'
+import matchers from '@testing-library/jest-dom/matchers'
+expect.extend(matchers)

-// Mock localStorage
-const localStorageMock = {
-  getItem: jest.fn(),
-  setItem: jest.fn(),
-  removeItem: jest.fn(),
-  clear: jest.fn(),
-};
-global.localStorage = localStorageMock;
+// Mock localStorage
+const localStorageMock = {
+  getItem: vi.fn(),
+  setItem: vi.fn(),
+  removeItem: vi.fn(),
+  clear: vi.fn(),
+}
+Object.defineProperty(window, 'localStorage', {
+  writable: true,
+  value: localStorageMock,
+})

 // Mock window.matchMedia
 Object.defineProperty(window, 'matchMedia', {
   writable: true,
-  value: jest.fn().mockImplementation(query => ({
+  value: vi.fn().mockImplementation((query: string) => ({
     matches: false,
     media: query,
     onchange: null,
-    addListener: jest.fn(), // deprecated
-    removeListener: jest.fn(), // deprecated
-    addEventListener: jest.fn(),
-    removeEventListener: jest.fn(),
-    dispatchEvent: jest.fn(),
+    addListener: vi.fn(), // deprecated
+    removeListener: vi.fn(), // deprecated
+    addEventListener: vi.fn(),
+    removeEventListener: vi.fn(),
+    dispatchEvent: vi.fn(),
   })),
 });
Frontend/src/__tests__/integration/end-to-end-workflows-fixed.test.tsx (2)

229-265: Avoid vi.doMock after importing the module under test

Analytics is imported on Line 17. Mocking ../../hooks/useIntegration later with vi.doMock won’t affect already-resolved imports, so this test may not exercise the intended active‑workflow state.

Hoist a controllable mock before importing Analytics, then import Analytics:

- import Analytics from '../../pages/Analytics';
+ // Mocks first (hoisted)
+ const integrationState = { workflows: [], activeWorkflows: [] }
+ vi.mock('../../hooks/useIntegration', () => ({
+   useIntegration: () => ({
+     ...integrationState,
+     getWorkflowStatus: vi.fn(),
+     cancelWorkflow: vi.fn(),
+     refreshWorkflows: vi.fn(),
+     isExecuting: false,
+     error: null,
+     executeBrandOnboarding: vi.fn(),
+     executeContentLinking: vi.fn(),
+     executeExport: vi.fn(),
+     executeAlertSetup: vi.fn(),
+     clearError: vi.fn(),
+   })
+ }))
+ import Analytics from '../../pages/Analytics';

Then inside the test, set the state before render:

integrationState.workflows = [/* your mocked workflow */]
integrationState.activeWorkflows = [/* ... */]

102-104: Prefer vi.stubGlobal('fetch', ...) for fetch mocking and easier teardown

Direct assignment works but can leak. Using vi.stubGlobal makes intent clear and supports restoration with vi.unstubAllGlobals().

-// Mock fetch
-global.fetch = vi.fn();
+// Mock fetch
+vi.stubGlobal('fetch', vi.fn());
+// (optional) afterEach: vi.unstubAllGlobals()
Frontend/src/hooks/__tests__/useRetry.test.ts (1)

5-17: Simplify and stabilize fake timers lifecycle

Re‑enabling fake timers inside afterEach is unexpected and can leak across suites. Prefer a consistent lifecycle.

-// Mock timers
-vi.useFakeTimers();
+// Mock timers
+beforeAll(() => vi.useFakeTimers())
+afterAll(() => vi.useRealTimers())
 
 describe('useRetry', () => {
   beforeEach(() => {
     vi.clearAllTimers();
   });
 
   afterEach(() => {
-    vi.runOnlyPendingTimers();
-    vi.useRealTimers();
-    vi.useFakeTimers();
+    vi.runOnlyPendingTimers();
   });

Repeat the beforeAll/afterAll pattern (or keep the top-level one) for the useApiWithRetry suite to avoid toggling timers back on in its afterEach.

Frontend/src/services/__tests__/integrationService.test.ts (1)

46-90: Consider using more specific error assertions.

The tests use try-catch blocks to handle expected errors, but the assertions only verify that an error occurred. Consider using Vitest's toThrow or rejects.toThrow matchers for more precise error validation.

For example, you could refactor like this:

-  try {
-    await integrationService.executeBrandOnboardingWorkflow('brand-123');
-  } catch (error) {
-    // Expected to fail due to OAuth verification
-    expect(error).toBeInstanceOf(Error);
-    expect(error.message).toContain('OAuth');
-  }
+  await expect(
+    integrationService.executeBrandOnboardingWorkflow('brand-123')
+  ).rejects.toThrow(/OAuth/);
Backend/test_analytics_api.py (5)

68-70: Guard response.json() to avoid crashes on non-JSON error pages.

Wrap JSON parsing; fall back to response.text for logging.

Apply a small helper and use it at call sites:

@@
-import json
+import json
+
+def _safe_json(resp):
+    try:
+        return resp.json()
+    except ValueError:
+        return {"_non_json_body": resp.text[:1000]}
@@
-        print(f"Response: {response.json()}")
+        print(f"Response: {_safe_json(response)}")

Also applies to: 101-103, 130-132, 184-186, 236-238, 290-292, 311-313, 343-345, 376-378, 411-413


116-121: Prefer pytest parametrize over manual loops.

Cleaner failure reporting and independent cases.

Example refactor:

@@
-    def test_get_contract_analytics(self):
+    @pytest.mark.parametrize("params", [{}, {"days": 7}, {"days": 90}])
+    def test_get_contract_analytics(self, params):
@@
-        # Test with different parameters
-        test_params = [
-            {},  # Default parameters
-            {"days": 7},  # Last 7 days
-            {"days": 90}  # Last 90 days
-        ]
-
-        for params in test_params:
-            response = requests.get(
-                f"{self.base_url}/analytics/contracts/{self.test_contract_id}",
-                params=params,
-                headers=self.headers
-            )
+        response = requests.get(
+            f"{self.base_url}/analytics/contracts/{self.test_contract_id}",
+            params=params,
+            headers=self.headers,
+            timeout=self.DEFAULT_TIMEOUT,
+        )
@@
-        return True

Repeat for ROI/demographics/content analytics.

Also applies to: 123-164


391-417: RBAC test assumes auth-less server; make outcome explicit and stable.

Either inject Authorization or override get_current_user; otherwise the expected status varies with DB state.

Consider overriding the dependency in tests:

# in conftest.py or test module
from fastapi.testclient import TestClient
from app.main import app
from app.routes.content_analytics_backup import get_current_user
app.dependency_overrides[get_current_user] = lambda: FakeUser(id="test-user", role="brand")

Would you like me to generate a minimal TestClient-based integration test harness?


452-519: Remove custom test runner; let pytest handle discovery/reporting.

This duplicates pytest and complicates CI.

Apply:

-    def run_all_tests(self):
-        """Run all test methods"""
-        ...
-        return results
-
-
-def main():
-    """Main function to run the tests"""
-    tester = TestAnalyticsAPI()
-    results = tester.run_all_tests()
-    
-    # Return exit code based on results
-    failed_tests = sum(1 for result in results.values() if result != "PASSED")
-    return 0 if failed_tests == 0 else 1
-
-
-if __name__ == "__main__":
-    import sys
-    sys.exit(main())
+    pass  # pytest will discover and run test_* methods

Also applies to: 521-523


23-30: Placeholder IDs make tests non-deterministic.

Create fixtures to set up real contracts/content, or mark tests xfail until proper setup exists.

I can scaffold pytest fixtures that create a contract and linked content via the API and tear them down afterwards.
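
For example, a minimal sketch of such fixtures, assuming a hypothetical POST/DELETE /contracts endpoint for setup and teardown (the content-linking route is the one under review; the contract-creation route and response fields here are placeholders):

import os

import pytest
import requests

BASE_URL = os.getenv("API_BASE_URL", "http://localhost:8000/api")
TIMEOUT = float(os.getenv("HTTP_TIMEOUT", "10"))


@pytest.fixture
def contract_id():
    """Create a contract through the API and delete it afterwards (endpoint and fields are assumed)."""
    resp = requests.post(f"{BASE_URL}/contracts", json={"title": "fixture contract"}, timeout=TIMEOUT)
    resp.raise_for_status()
    cid = resp.json()["id"]
    yield cid
    requests.delete(f"{BASE_URL}/contracts/{cid}", timeout=TIMEOUT)


@pytest.fixture
def content_mapping_id(contract_id):
    """Link a known content URL to the fixture contract via the route under review."""
    resp = requests.post(
        f"{BASE_URL}/contracts/{contract_id}/content",
        json={"content_url": "https://www.instagram.com/p/ABC123DEF456/"},
        timeout=TIMEOUT,
    )
    resp.raise_for_status()
    yield resp.json()["content_mapping_id"]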

Backend/app/routes/content_analytics_backup.py (6)

1-7: Validate days query param across endpoints.

Prevent negative or extreme ranges with FastAPI’s Query validation.

Apply:

@@
-from fastapi import APIRouter, Depends, HTTPException, status
+from fastapi import APIRouter, Depends, HTTPException, status, Query
+from typing import Annotated
@@
-async def get_content_analytics(
+async def get_content_analytics(
     content_mapping_id: str,
-    days: int = 30,
+    days: Annotated[int, Query(ge=1, le=365)] = 30,
@@
-async def get_contract_analytics(
+async def get_contract_analytics(
     contract_id: str,
-    days: int = 30,
+    days: Annotated[int, Query(ge=1, le=365)] = 30,
@@
-async def get_contract_roi(
+async def get_contract_roi(
     contract_id: str,
-    days: int = 30,
+    days: Annotated[int, Query(ge=1, le=365)] = 30,
@@
-async def get_contract_demographics(
+async def get_contract_demographics(
     contract_id: str,
-    days: int = 30,
+    days: Annotated[int, Query(ge=1, le=365)] = 30,

Also applies to: 341-347, 404-410, 535-541, 671-677


223-227: Use SQLAlchemy boolean expressions, not == True.

Replace equality comparisons with .is_(True) for portability and lint cleanliness.

Example shown in previous diff; apply consistently.

Also applies to: 351-355, 421-425, 565-569, 692-696


125-131: Exception re-throws: chain the original error and use explicit conversion flag.

Prefer raise HTTPException(..., detail=f"...: {e!s}") from e to preserve context and satisfy RUF010/B904.

Apply pattern:

-    except Exception as e:
-        raise HTTPException(
-            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-            detail=f"Failed to link content: {str(e)}"
-        )
+    except Exception as e:
+        raise HTTPException(
+            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+            detail=f"Failed to link content: {e!s}",
+        ) from e

Repeat in other handlers.

Also applies to: 169-173, 206-210, 255-258, 288-291, 335-338, 398-401, 529-532, 665-668, 845-848


56-65: Auth placeholder returns first user from DB.

OK for scaffolding, but ensure tests override this dependency; don’t ship to prod.

Consider feature-flagging with ENV (e.g., TEST_MODE) or raising 401 unless explicitly overridden in tests.
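
A minimal sketch of that guard, assuming an illustrative TEST_MODE environment flag (the flag name and behaviour are not part of the PR):

import os

from fastapi import Depends, HTTPException, status
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.db.db import get_db
from app.models.models import User


async def get_current_user(db: AsyncSession = Depends(get_db)) -> User:
    """Scaffolding-only auth: refuse to run unless TEST_MODE is explicitly enabled."""
    if os.getenv("TEST_MODE") != "1":  # assumed flag; real auth should replace this dependency
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Authentication not configured")
    result = await db.execute(select(User))
    user = result.scalars().first()
    if user is None:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="No test user available")
    return user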


149-165: Potential blocking service calls inside async endpoints.

content_linking_service.get_linked_content(...) and other services look sync; ensure they are async or executed off-thread to avoid blocking the event loop.

I can help wrap sync service calls with anyio.to_thread.run_sync or refactor to async if needed.
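
For instance, a minimal sketch of the thread off-load, assuming content_linking_service.get_linked_content is a synchronous call (as suspected above):

from anyio import to_thread


async def get_linked_content_offloaded(contract_id: str, db):
    # content_linking_service is the module-level service already used by the route;
    # running the presumed-sync call in a worker thread keeps the event loop responsive.
    return await to_thread.run_sync(content_linking_service.get_linked_content, contract_id, db)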


1-7: Import for Session no longer needed after unifying on AsyncSession.

Remove unused Session import reference (and add Query/Annotated per earlier comment).
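
For illustration, a sketch of the import block after the cleanup, combining this with the earlier Query/Annotated suggestion (exact contents depend on what else the module uses):

from typing import Annotated

from fastapi import APIRouter, Depends, HTTPException, Query, status
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession  # Session from sqlalchemy.orm is no longer needed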

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3be437 and 804abeb.

⛔ Files ignored due to path filters (2)
  • Backend/test_roi.db is excluded by !**/*.db
  • test_roi.db is excluded by !**/*.db
📒 Files selected for processing (21)
  • Backend/app/routes/content_analytics_backup.py (1 hunks)
  • Backend/test_analytics_api.py (1 hunks)
  • Backend/test_roi_service.py (1 hunks)
  • Frontend/src/__tests__/integration/end-to-end-workflows-fixed.test.tsx (1 hunks)
  • Frontend/src/__tests__/integration/integration-workflows.test.ts (1 hunks)
  • Frontend/src/components/ui/__tests__/empty-state.test.tsx (1 hunks)
  • Frontend/src/components/ui/__tests__/error-state.test.tsx (1 hunks)
  • Frontend/src/hooks/__tests__/useAudienceAnalytics.simple.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useAudienceAnalytics.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useEmailPreferences.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useExportData.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useIntegration.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useRealTimeAnalytics.performance.test.ts (1 hunks)
  • Frontend/src/hooks/__tests__/useRetry.test.ts (1 hunks)
  • Frontend/src/pages/__tests__/Analytics.test.tsx (1 hunks)
  • Frontend/src/services/__tests__/emailPreferencesService.test.ts (1 hunks)
  • Frontend/src/services/__tests__/errorHandlingService.test.ts (1 hunks)
  • Frontend/src/services/__tests__/integrationService.test.ts (1 hunks)
  • Frontend/src/test-setup.md (1 hunks)
  • Frontend/src/test-setup.ts (1 hunks)
  • Frontend/vitest.config.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
Backend/app/routes/content_analytics_backup.py (2)
Backend/app/db/db.py (1)
  • get_db (38-40)
Backend/app/models/models.py (3)
  • User (25-53)
  • Sponsorship (76-92)
  • SponsorshipApplication (114-128)
Backend/test_roi_service.py (1)
Backend/app/models/models.py (3)
  • Sponsorship (76-92)
  • SponsorshipPayment (146-162)
  • User (25-53)
🪛 ast-grep (0.39.6)
Frontend/src/hooks/__tests__/useIntegration.test.ts

[warning] 34-34: Detected potential storage of sensitive information in browser localStorage. Sensitive data like email addresses, personal information, or authentication tokens should not be stored in localStorage as it's accessible to any script.
Context: localStorage.setItem('token', 'test-token')
Note: [CWE-312] Cleartext Storage of Sensitive Information [REFERENCES]
- https://owasp.org/www-community/vulnerabilities/HTML5_Security_Cheat_Sheet
- https://cwe.mitre.org/data/definitions/312.html

(browser-storage-sensitive-data)

Frontend/src/__tests__/integration/integration-workflows.test.ts

[warning] 16-16: Detected potential storage of sensitive information in browser localStorage. Sensitive data like email addresses, personal information, or authentication tokens should not be stored in localStorage as it's accessible to any script.
Context: localStorage.setItem('token', 'test-token')
Note: [CWE-312] Cleartext Storage of Sensitive Information [REFERENCES]
- https://owasp.org/www-community/vulnerabilities/HTML5_Security_Cheat_Sheet
- https://cwe.mitre.org/data/definitions/312.html

(browser-storage-sensitive-data)

Frontend/src/services/__tests__/integrationService.test.ts

[warning] 14-14: Detected potential storage of sensitive information in browser localStorage. Sensitive data like email addresses, personal information, or authentication tokens should not be stored in localStorage as it's accessible to any script.
Context: localStorage.setItem('token', 'test-token')
Note: [CWE-312] Cleartext Storage of Sensitive Information [REFERENCES]
- https://owasp.org/www-community/vulnerabilities/HTML5_Security_Cheat_Sheet
- https://cwe.mitre.org/data/definitions/312.html

(browser-storage-sensitive-data)

🪛 Biome (2.1.2)
Frontend/src/pages/__tests__/Analytics.test.tsx

[error] 6-16: Shouldn't redeclare 'it'. Consider to delete it or rename it. (Reported once for each of lines 6 through 16.)

(lint/suspicious/noRedeclare)

Frontend/src/hooks/__tests__/useAudienceAnalytics.simple.test.ts

[error] 148-148: Shouldn't redeclare 'getMockAudienceData'. Consider to delete it or rename it.

'getMockAudienceData' is defined here:

(lint/suspicious/noRedeclare)

🪛 Ruff (0.14.1)
Backend/test_analytics_api.py

39-39: Probable use of requests call without timeout

(S113)


63-63: Probable use of requests call without timeout

(S113)


96-96: Probable use of requests call without timeout

(S113)


124-124: Probable use of requests call without timeout

(S113)


178-178: Probable use of requests call without timeout

(S113)


230-230: Probable use of requests call without timeout

(S113)


285-285: Probable use of requests call without timeout

(S113)


306-306: Probable use of requests call without timeout

(S113)


337-337: Probable use of requests call without timeout

(S113)


370-370: Probable use of requests call without timeout

(S113)


406-406: Probable use of requests call without timeout

(S113)


424-424: Probable use of requests call without timeout

(S113)


434-434: Probable use of requests call without timeout

(S113)


442-442: Probable use of requests call without timeout

(S113)


487-487: Do not catch blind exception: Exception

(BLE001)


488-488: Use explicit conversion flag

Replace with conversion flag

(RUF010)

Backend/app/routes/content_analytics_backup.py

57-57: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


95-95: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


96-96: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


102-105: Abstract raise to an inner function

(TRY301)


127-127: Do not catch blind exception: Exception

(BLE001)


128-131: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


130-130: Use explicit conversion flag

Replace with conversion flag

(RUF010)


137-137: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


138-138: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


144-147: Abstract raise to an inner function

(TRY301)


169-169: Do not catch blind exception: Exception

(BLE001)


170-173: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


172-172: Use explicit conversion flag

Replace with conversion flag

(RUF010)


180-180: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


181-181: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


187-190: Abstract raise to an inner function

(TRY301)


202-202: Consider moving this statement to an else block

(TRY300)


206-206: Do not catch blind exception: Exception

(BLE001)


207-210: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


209-209: Use explicit conversion flag

Replace with conversion flag

(RUF010)


216-216: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


217-217: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


225-225: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


231-234: Abstract raise to an inner function

(TRY301)


238-241: Abstract raise to an inner function

(TRY301)


254-254: Do not catch blind exception: Exception

(BLE001)


255-258: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


257-257: Use explicit conversion flag

Replace with conversion flag

(RUF010)


264-264: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


265-265: Undefined name Session

(F821)


265-265: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


271-274: Abstract raise to an inner function

(TRY301)


287-287: Do not catch blind exception: Exception

(BLE001)


288-291: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


290-290: Use explicit conversion flag

Replace with conversion flag

(RUF010)


298-298: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


299-299: Undefined name Session

(F821)


299-299: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


306-306: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


310-313: Abstract raise to an inner function

(TRY301)


317-320: Abstract raise to an inner function

(TRY301)


334-334: Do not catch blind exception: Exception

(BLE001)


335-338: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


337-337: Use explicit conversion flag

Replace with conversion flag

(RUF010)


345-345: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


346-346: Undefined name Session

(F821)


346-346: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


353-353: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


357-360: Abstract raise to an inner function

(TRY301)


364-367: Abstract raise to an inner function

(TRY301)


379-382: Abstract raise to an inner function

(TRY301)


397-397: Do not catch blind exception: Exception

(BLE001)


398-401: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


400-400: Use explicit conversion flag

Replace with conversion flag

(RUF010)


408-408: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


409-409: Undefined name Session

(F821)


409-409: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


415-418: Abstract raise to an inner function

(TRY301)


423-423: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


528-528: Do not catch blind exception: Exception

(BLE001)


529-532: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


531-531: Use explicit conversion flag

Replace with conversion flag

(RUF010)


539-539: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


540-540: Undefined name Session

(F821)


540-540: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


546-549: Abstract raise to an inner function

(TRY301)


555-558: Abstract raise to an inner function

(TRY301)


567-567: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


664-664: Do not catch blind exception: Exception

(BLE001)


665-668: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


667-667: Use explicit conversion flag

Replace with conversion flag

(RUF010)


675-675: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


676-676: Undefined name Session

(F821)


676-676: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


682-685: Abstract raise to an inner function

(TRY301)


694-694: Avoid equality comparisons to True; use ContractContentMapping.is_active: for truth checks

Replace with ContractContentMapping.is_active

(E712)


844-844: Do not catch blind exception: Exception

(BLE001)


845-848: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)


847-847: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🔇 Additional comments (7)
Frontend/src/test-setup.ts (1)

28-37: LGTM for URL object stubs

Deterministic createObjectURL and revokeObjectURL no‑op are appropriate for jsdom.

Frontend/vitest.config.ts (1)

5-11: Config looks solid for React + jsdom + setup file

No blocking issues found.

Frontend/src/services/__tests__/errorHandlingService.test.ts (1)

1-282: LGTM! Comprehensive error handling test coverage.

This test suite provides excellent coverage of error handling scenarios including network errors, timeouts, various HTTP status codes, context-specific messaging, retry logic, and exponential backoff. The tests are well-organized and validate both the service and utility functions.

Frontend/src/hooks/__tests__/useIntegration.test.ts (1)

1-148: LGTM! Comprehensive hook testing with proper mocking.

The test suite effectively validates the useIntegration hook's functionality including initialization state, workflow execution methods, status monitoring, and error handling. The mocking strategy for both the integration service and toast notifications is appropriate.

Frontend/src/__tests__/integration/integration-workflows.test.ts (1)

1-197: LGTM! Solid integration test coverage.

This test suite provides comprehensive coverage of integration workflows including workflow management, parameter validation, error handling, and performance scenarios. The tests effectively validate the integration service API and workflow behavior.

Frontend/src/hooks/__tests__/useAudienceAnalytics.test.ts (1)

1-436: LGTM! Thorough audience analytics testing.

This test suite provides excellent coverage of the useAudienceAnalytics hook including data fetching, error handling, parameter handling, refresh mechanisms, auto-refresh intervals, demographic calculations, engagement insights, and authentication. The tests validate both successful scenarios and edge cases like insufficient data.

Backend/test_roi_service.py (1)

1-380: LGTM! Comprehensive ROI service testing.

This test suite provides thorough coverage of the ROI service including core calculations (ROI percentage, CPA, CTR, engagement rate), campaign-level metrics, portfolio calculations, trend analysis, and target comparisons. The tests include proper edge case handling (zero values, empty data, None values) and use appropriate mocking strategies for database interactions.

Comment on lines +261 to +276
@router.post("/contracts/{contract_id}/sync", response_model=SyncResponse)
async def sync_contract_content(
contract_id: str,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db)
):
"""Manually trigger sync for all content in a contract"""
try:
# Verify user has access to this contract
if not verify_contract_access(contract_id, current_user.id, db):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail="Access denied to this contract"
)

# Sync contract content

⚠️ Potential issue | 🔴 Critical

Async/DB mismatch and missing awaits allow an authorization bypass and cause runtime failures.

Endpoints annotate db: Session but get_db yields AsyncSession; several calls use db.query(...) (invalid on AsyncSession) and access checks call verify_contract_access(...) without await. This will skip RBAC checks and can 500 at runtime.

Apply this pattern (fix shown for multiple functions; replicate across file):

@@
-@router.post("/contracts/{contract_id}/sync", response_model=SyncResponse)
-async def sync_contract_content(
-    contract_id: str,
-    current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
-):
+@router.post("/contracts/{contract_id}/sync", response_model=SyncResponse)
+async def sync_contract_content(
+    contract_id: str,
+    current_user: User = Depends(get_current_user),
+    db: AsyncSession = Depends(get_db),
+):
@@
-        if not verify_contract_access(contract_id, current_user.id, db):
+        if not await verify_contract_access(contract_id, current_user.id, db):
             raise HTTPException(
                 status_code=status.HTTP_403_FORBIDDEN,
                 detail="Access denied to this contract"
             )
@@
-@router.post("/content/{content_mapping_id}/sync", response_model=SyncResponse)
-async def sync_content(
+@router.post("/content/{content_mapping_id}/sync", response_model=SyncResponse)
+async def sync_content(
     content_mapping_id: str,
     force_refresh: bool = False,
     current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
+    db: AsyncSession = Depends(get_db),
 ):
@@
-        mapping = db.query(ContractContentMapping).filter(
-            ContractContentMapping.id == content_mapping_id,
-            ContractContentMapping.is_active == True
-        ).first()
+        result = await db.execute(
+            select(ContractContentMapping).where(
+                ContractContentMapping.id == content_mapping_id,
+                ContractContentMapping.is_active.is_(True),
+            )
+        )
+        mapping = result.scalars().first()
@@
-        if not verify_contract_access(mapping.contract_id, current_user.id, db):
+        if not await verify_contract_access(mapping.contract_id, current_user.id, db):
             raise HTTPException(
                 status_code=status.HTTP_403_FORBIDDEN,
                 detail="Access denied to this content"
             )
@@
-async def get_content_analytics(
+async def get_content_analytics(
     content_mapping_id: str,
-    days: int = 30,
+    days: int = 30,
     current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
+    db: AsyncSession = Depends(get_db),
 ):
@@
-        mapping = db.query(ContractContentMapping).filter(
-            ContractContentMapping.id == content_mapping_id,
-            ContractContentMapping.is_active == True
-        ).first()
+        result = await db.execute(
+            select(ContractContentMapping).where(
+                ContractContentMapping.id == content_mapping_id,
+                ContractContentMapping.is_active.is_(True),
+            )
+        )
+        mapping = result.scalars().first()
@@
-        if not verify_contract_access(mapping.contract_id, current_user.id, db):
+        if not await verify_contract_access(mapping.contract_id, current_user.id, db):
             raise HTTPException(
                 status_code=status.HTTP_403_FORBIDDEN,
                 detail="Access denied to this content"
             )
@@
-async def get_contract_analytics(
+async def get_contract_analytics(
     contract_id: str,
-    days: int = 30,
+    days: int = 30,
     current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
+    db: AsyncSession = Depends(get_db),
 ):
@@
-        mappings = db.query(ContractContentMapping).filter(
-            ContractContentMapping.contract_id == contract_id,
-            ContractContentMapping.is_active == True
-        ).all()
+        result = await db.execute(
+            select(ContractContentMapping).where(
+                ContractContentMapping.contract_id == contract_id,
+                ContractContentMapping.is_active.is_(True),
+            )
+        )
+        mappings = result.scalars().all()
@@
-            latest_analytics = db.query(ContentAnalytics).filter(
-                ContentAnalytics.contract_content_id == mapping.id,
-                ContentAnalytics.metrics_collected_at >= start_date,
-                ContentAnalytics.metrics_collected_at <= end_date
-            ).order_by(ContentAnalytics.metrics_collected_at.desc()).first()
+            latest_analytics = (
+                await db.execute(
+                    select(ContentAnalytics)
+                    .where(
+                        ContentAnalytics.contract_content_id == mapping.id,
+                        ContentAnalytics.metrics_collected_at >= start_date,
+                        ContentAnalytics.metrics_collected_at <= end_date,
+                    )
+                    .order_by(ContentAnalytics.metrics_collected_at.desc())
+                )
+            ).scalars().first()
@@
-async def get_contract_roi(
+async def get_contract_roi(
     contract_id: str,
-    days: int = 30,
+    days: int = 30,
     current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
+    db: AsyncSession = Depends(get_db),
 ):
@@
-        contract = db.query(Sponsorship).filter(Sponsorship.id == contract_id).first()
+        contract = (
+            await db.execute(select(Sponsorship).where(Sponsorship.id == contract_id))
+        ).scalars().first()
@@
-        mappings = db.query(ContractContentMapping).filter(
-            ContractContentMapping.contract_id == contract_id,
-            ContractContentMapping.is_active == True
-        ).all()
+        result = await db.execute(
+            select(ContractContentMapping).where(
+                ContractContentMapping.contract_id == contract_id,
+                ContractContentMapping.is_active.is_(True),
+            )
+        )
+        mappings = result.scalars().all()
@@
-            latest_analytics = db.query(ContentAnalytics).filter(
-                ContentAnalytics.contract_content_id == mapping.id,
-                ContentAnalytics.metrics_collected_at >= start_date,
-                ContentAnalytics.metrics_collected_at <= end_date
-            ).order_by(ContentAnalytics.metrics_collected_at.desc()).first()
+            latest_analytics = (
+                await db.execute(
+                    select(ContentAnalytics)
+                    .where(
+                        ContentAnalytics.contract_content_id == mapping.id,
+                        ContentAnalytics.metrics_collected_at >= start_date,
+                        ContentAnalytics.metrics_collected_at <= end_date,
+                    )
+                    .order_by(ContentAnalytics.metrics_collected_at.desc())
+                )
+            ).scalars().first()
@@
-async def get_contract_demographics(
+async def get_contract_demographics(
     contract_id: str,
-    days: int = 30,
+    days: int = 30,
     current_user: User = Depends(get_current_user),
-    db: Session = Depends(get_db)
+    db: AsyncSession = Depends(get_db),
 ):
@@
-        mappings = db.query(ContractContentMapping).filter(
-            ContractContentMapping.contract_id == contract_id,
-            ContractContentMapping.is_active == True
-        ).all()
+        result = await db.execute(
+            select(ContractContentMapping).where(
+                ContractContentMapping.contract_id == contract_id,
+                ContractContentMapping.is_active.is_(True),
+            )
+        )
+        mappings = result.scalars().all()
@@
-            latest_analytics = db.query(ContentAnalytics).filter(
-                ContentAnalytics.contract_content_id == mapping.id,
-                ContentAnalytics.metrics_collected_at >= start_date,
-                ContentAnalytics.metrics_collected_at <= end_date
-            ).order_by(ContentAnalytics.metrics_collected_at.desc()).first()
+            latest_analytics = (
+                await db.execute(
+                    select(ContentAnalytics)
+                    .where(
+                        ContentAnalytics.contract_content_id == mapping.id,
+                        ContentAnalytics.metrics_collected_at >= start_date,
+                        ContentAnalytics.metrics_collected_at <= end_date,
+                    )
+                    .order_by(ContentAnalytics.metrics_collected_at.desc())
+                )
+            ).scalars().first()

Also applies to: 298-306, 341-347, 404-410, 535-541, 671-677

🧰 Tools
🪛 Ruff (0.14.1)

264-264: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


265-265: Undefined name Session

(F821)


265-265: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


271-274: Abstract raise to an inner function

(TRY301)

🤖 Prompt for AI Agents
In Backend/app/routes/content_analytics_backup.py around lines 261-276 (and
likewise at 298-306, 341-347, 404-410, 535-541, 671-677), the endpoint is
annotated with db: Session but get_db yields an AsyncSession and
verify_contract_access is called without await, causing RBAC checks to be
skipped and queries to fail; change the db dependency typing to AsyncSession,
make any called helper functions (e.g., verify_contract_access) async if they
are not already, await their calls (await verify_contract_access(...)), and
replace sync Session calls (db.query/db.execute without await) with async
patterns (await db.execute(select(...)) and use result.scalars().first()/all()
or await db.get(...)) to perform queries; apply the same async fixes to the
other listed line ranges so all access checks and DB operations use await and
AsyncSession APIs.

Comment on lines +16 to +23
def __init__(self):
self.base_url = "http://localhost:8000/api"
self.test_user_id = None
self.test_contract_id = None
self.test_content_mapping_id = None
self.headers = {"Content-Type": "application/json"}

def setup_test_data(self):

⚠️ Potential issue | 🔴 Critical

Pytest will not collect classes with __init__ — convert to fixtures/class attrs.

Remove __init__ and use class attributes + a module-level skip marker/env-driven base URL to make tests discoverable.

Apply this minimal diff:

@@
-import pytest
+import pytest
+import os
@@
-class TestAnalyticsAPI:
+RUN_INTEGRATION = os.getenv("RUN_API_INTEGRATION_TESTS", "0") == "1"
+pytestmark = pytest.mark.skipif(not RUN_INTEGRATION, reason="Set RUN_API_INTEGRATION_TESTS=1 to run API integration tests")
+
+class TestAnalyticsAPI:
@@
-    def __init__(self):
-        self.base_url = "http://localhost:8000/api"
-        self.test_user_id = None
-        self.test_contract_id = None
-        self.test_content_mapping_id = None
-        self.headers = {"Content-Type": "application/json"}
+    base_url = os.getenv("API_BASE_URL", "http://localhost:8000/api")
+    headers = {"Content-Type": "application/json"}
+    DEFAULT_TIMEOUT = float(os.getenv("HTTP_TIMEOUT", "10"))
+    test_user_id = None
+    test_contract_id = None
+    test_content_mapping_id = None

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In Backend/test_analytics_api.py around lines 16 to 23, the test class defines
an __init__ which prevents pytest from collecting it; remove the __init__ method
and instead declare base_url, test_user_id, test_contract_id,
test_content_mapping_id, and headers as class attributes (or set them via a
pytest fixture). Also add a module-level skip marker or check that reads the
base URL from an environment variable (with a sensible default) so tests are
discoverable and can be skipped when the service is unavailable, and convert
setup_test_data to either a classmethod or fixture that initializes instance
state without relying on __init__.

Comment on lines +31 to +57
def test_link_content_to_contract(self):
"""Test POST /api/contracts/:id/content endpoint"""
print("Testing content linking endpoint...")

# Test data
test_content_url = "https://www.instagram.com/p/ABC123DEF456/"

# Test request
response = requests.post(
f"{self.base_url}/contracts/{self.test_contract_id}/content",
json={"content_url": test_content_url},
headers=self.headers
)

print(f"Status Code: {response.status_code}")
print(f"Response: {response.json()}")

# Assertions for successful linking
if response.status_code == 200:
data = response.json()
assert "success" in data
assert "message" in data
if data["success"]:
assert "content_mapping_id" in data
self.test_content_mapping_id = data["content_mapping_id"]

return response.status_code == 200

⚠️ Potential issue | 🟠 Major

Tests should assert rather than print or return booleans.

Use pytest assertions and fail on unexpected codes; returning booleans is ignored by pytest.

Example for this test:

@@ def test_link_content_to_contract(self):
-        # Assertions for successful linking
-        if response.status_code == 200:
-            data = response.json()
-            assert "success" in data
-            assert "message" in data
-            if data["success"]:
-                assert "content_mapping_id" in data
-                self.test_content_mapping_id = data["content_mapping_id"]
-
-        return response.status_code == 200
+        assert response.status_code in (200, 201)
+        data = _safe_json(response)
+        assert "success" in data and "message" in data
+        if data.get("success"):
+            assert "content_mapping_id" in data
+            self.test_content_mapping_id = data["content_mapping_id"]

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Ruff (0.14.1)

39-39: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
In Backend/test_analytics_api.py around lines 31 to 57, the test currently
prints status and returns a boolean instead of using pytest assertions; update
the test to assert response.status_code == 200 with a descriptive message on
failure, remove prints and the return, parse response.json() once into a
variable, assert required keys ("success", "message") exist and that
data["success"] is True, then assert "content_mapping_id" exists when success is
True and assign it to self.test_content_mapping_id; ensure any unexpected status
or missing keys raises an assertion so pytest fails the test.

Comment on lines +39 to +43
response = requests.post(
    f"{self.base_url}/contracts/{self.test_contract_id}/content",
    json={"content_url": test_content_url},
    headers=self.headers
)

🛠️ Refactor suggestion | 🟠 Major

All HTTP calls lack timeouts — add a sane default.

Prevent hangs/flaky CI by passing a timeout to every requests.* call.

Apply the pattern below (example shown for the first few; replicate for all):

@@
-        response = requests.post(
+        response = requests.post(
             f"{self.base_url}/contracts/{self.test_contract_id}/content",
             json={"content_url": test_content_url},
-            headers=self.headers
+            headers=self.headers,
+            timeout=self.DEFAULT_TIMEOUT,
         )
@@
-        response = requests.get(
+        response = requests.get(
             f"{self.base_url}/contracts/{self.test_contract_id}/content",
-            headers=self.headers
+            headers=self.headers,
+            timeout=self.DEFAULT_TIMEOUT,
         )

Also applies to: 63-66, 96-99, 124-128, 178-182, 230-234, 285-289, 306-309, 337-341, 370-374, 406-409, 424-428, 434-437, 442-446

🧰 Tools
🪛 Ruff (0.14.1)

39-39: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
In Backend/test_analytics_api.py around lines 39-43 (and similarly at 63-66,
96-99, 124-128, 178-182, 230-234, 285-289, 306-309, 337-341, 370-374, 406-409,
424-428, 434-437, 442-446), all requests.* calls lack a timeout causing
potential hangs; update each requests.get/post/put/delete call to include a sane
timeout (e.g., timeout=10) or use a module-level constant (DEFAULT_TIMEOUT = 10)
and pass timeout=DEFAULT_TIMEOUT to every call so tests fail fast instead of
hanging.
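
Taken together (class attributes, real assertions, and a default timeout), the linking test could end up roughly like the sketch below; _safe_json and DEFAULT_TIMEOUT are the helper and attribute proposed above, not code that exists in the PR:

def test_link_content_to_contract(self):
    """Link a piece of content to a contract and remember the mapping id."""
    test_content_url = "https://www.instagram.com/p/ABC123DEF456/"

    response = requests.post(
        f"{self.base_url}/contracts/{self.test_contract_id}/content",
        json={"content_url": test_content_url},
        headers=self.headers,
        timeout=self.DEFAULT_TIMEOUT,  # fail fast instead of hanging CI
    )

    assert response.status_code in (200, 201), f"Unexpected status: {response.status_code}"
    data = _safe_json(response)
    assert "success" in data and "message" in data
    if data.get("success"):
        assert "content_mapping_id" in data
        self.test_content_mapping_id = data["content_mapping_id"]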

Comment on lines +1 to +2
import { describe, it, expect, vi } from 'vitest';
import { getMockAudienceData } from '../useAudienceAnalytics';

⚠️ Potential issue | 🟠 Major

Remove conflicting import.

The function getMockAudienceData is imported from useAudienceAnalytics on line 2, but then redeclared and exported on line 148. This creates a naming conflict flagged by static analysis.

Since this file appears to be the source of the mock data generator (based on the comment "Export the mock data generator for testing" at line 147), remove the import:

 import { describe, it, expect, vi } from 'vitest';
-import { getMockAudienceData } from '../useAudienceAnalytics';

If getMockAudienceData is meant to be imported from useAudienceAnalytics for testing, then remove the redeclaration at lines 148-222 instead.

🤖 Prompt for AI Agents
In Frontend/src/hooks/__tests__/useAudienceAnalytics.simple.test.ts around lines
1 to 2 and 147 to 222, there is a naming conflict: getMockAudienceData is
imported on line 2 but also redeclared/exported starting at line 148; remove the
conflicting import (delete the getMockAudienceData import on line 2) so the
local exported mock generator is the single source of truth, or alternatively if
the intention is to use the external implementation, delete the local
redeclaration block at lines 148–222—pick one approach and ensure only one
definition of getMockAudienceData remains.

Comment on lines +22 to +33
jest.mock('@/context/AuthContext', () => ({
  useAuth: () => ({ user: mockUser })
}));

// Mock sonner toast
jest.mock('sonner', () => ({
  toast: {
    success: jest.fn(),
    error: jest.fn()
  }
}));


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Convert Jest mocks/spies/globals to Vitest equivalents

Jest APIs won’t exist in Vitest. Switch to vi.* and use vi.stubGlobal for fetch.

-const mockUser = { id: '1', email: '[email protected]', role: 'brand' };
-jest.mock('@/context/AuthContext', () => ({
+const mockUser = { id: '1', email: '[email protected]', role: 'brand' }
+vi.mock('@/context/AuthContext', () => ({
   useAuth: () => ({ user: mockUser })
-}));
+}))
 
-// Mock sonner toast
-jest.mock('sonner', () => ({
+// Mock sonner toast
+vi.mock('sonner', () => ({
   toast: {
-    success: jest.fn(),
-    error: jest.fn()
+    success: vi.fn(),
+    error: vi.fn()
   }
-}));
+}))
 
 // Mock the analytics components
-jest.mock('@/components/analytics/performance-overview', () => {
+vi.mock('@/components/analytics/performance-overview', () => {
   return function MockPerformanceOverview({ loading }: { loading?: boolean }) {
     if (loading) return <div>Loading performance overview...</div>;
     return <div>Performance Overview Component</div>;
   };
-});
+});
 
-jest.mock('@/components/analytics/metrics-chart', () => {
+vi.mock('@/components/analytics/metrics-chart', () => {
   return function MockMetricsChart({ title }: { title?: string }) {
     return <div>Metrics Chart: {title}</div>;
   };
-});
+});
 
-jest.mock('@/components/analytics/contract-comparison', () => {
+vi.mock('@/components/analytics/contract-comparison', () => {
   return function MockContractComparison({ loading }: { loading?: boolean }) {
     if (loading) return <div>Loading contract comparison...</div>;
     return <div>Contract Comparison Component</div>;
   };
-});
+});
 
-// Mock fetch
-global.fetch = jest.fn();
+// Mock fetch
+vi.stubGlobal('fetch', vi.fn())
 
 describe('Analytics', () => {
   beforeEach(() => {
-    jest.clearAllMocks();
-    (fetch as jest.Mock).mockClear();
+    vi.clearAllMocks()
+    ;(global.fetch as any).mockClear()
   });

You’ll also need to replace the remaining jest.fn()/jest.spyOn/(fetch as jest.Mock) uses with vi.fn()/vi.spyOn/(global.fetch as any) throughout this file.

To find any lingering Jest usages across the repo:

Also applies to: 35-53, 56-56, 64-68


🏁 Script executed:

#!/bin/bash
rg -n --glob '!**/node_modules/**' -e '\bjest\.' -e "from 'node:test'" -e 'require\(.+node:test.+\)'

Length of output: 9410


Let me check the file directly:

Based on my verification, the review comment contains an important issue that needs to be addressed:

The diff contains a problematic pattern with redundant mock clearing

vi.clearAllMocks() calls .mockClear() on all spies, which clears mock history without affecting mock implementations. The provided diff shows:

vi.clearAllMocks()
;(global.fetch as any).mockClear()

This is redundant since vi.clearAllMocks() already clears all mocks, including the global fetch. The second line is unnecessary.

Additionally, the review comment should clarify that vi.stubGlobal will not automatically reset between tests unless you enable unstubGlobals config option or call vi.unstubAllGlobals().

For handling the global fetch mock in the beforeEach hook, the simpler and correct approach would be:

beforeEach(() => {
  vi.clearAllMocks()
})

Since vi.stubGlobal('fetch', vi.fn()) at the top level has already set up the mock, vi.clearAllMocks() will handle clearing its state between tests.

Convert Jest mocks/spies/globals to Vitest equivalents

Replace all Jest APIs with their Vitest counterparts:

  • jest.mock() → vi.mock()
  • jest.fn() → vi.fn()
  • jest.spyOn() → vi.spyOn()
  • jest.clearAllMocks() → vi.clearAllMocks()
  • global.fetch = jest.fn() → vi.stubGlobal('fetch', vi.fn())
  • (fetch as jest.Mock) → (global.fetch as any) or use vi.mocked(fetch) for type safety

Remove the redundant (global.fetch as any).mockClear() call in the beforeEach since vi.clearAllMocks() already handles it.

Also verify that the file doesn't have the duplicate node:test imports shown in the script output (lines 5-18), as those would indicate a file corruption issue requiring cleanup.


🏁 Script executed:

#!/bin/bash
# View the Analytics.test.tsx file, especially the imports and mock setup around lines 1-80
head -90 Frontend/src/pages/__tests__/Analytics.test.tsx

Length of output: 106


🏁 Script executed:

#!/bin/bash
# Find the Analytics test file
fd -type f -name "Analytics.test.tsx"

Length of output: 170


🏁 Script executed:

#!/bin/bash
# Also check if the Frontend directory structure exists
ls -la Frontend/src/pages/__tests__/ 2>/dev/null || echo "Path does not exist, searching for test files..."
fd -type f "Analytics.*test" 

Length of output: 349


🏁 Script executed:

#!/bin/bash
# Read the actual file content
cat Frontend/src/pages/__tests__/Analytics.test.tsx

Length of output: 8094


Clean up duplicate imports and convert all Jest APIs to Vitest equivalents

The file has a critical issue: lines 5-16 contain 11 duplicate import { it } from 'node:test' statements. This must be consolidated to a single import statement with all needed functions.

Additionally, the file still uses Jest APIs despite importing from node:test, which is an invalid mixed state. The review comment's suggestions are correct—convert:

  • jest.mock() → vi.mock()
  • jest.fn() → vi.fn()
  • jest.spyOn() → vi.spyOn()
  • jest.clearAllMocks() → vi.clearAllMocks()
  • global.fetch = jest.fn() → vi.stubGlobal('fetch', vi.fn())
  • (fetch as jest.Mock) → (global.fetch as any)

First fix the imports (lines 5-18):

import { it, describe, beforeEach } from 'node:test';

Then apply the remaining Jest→Vitest conversions shown in the review comment diff. The diff provided is appropriate, though note that vi.clearAllMocks() already clears the stubbed fetch, so the separate (global.fetch as any).mockClear() call in beforeEach is redundant and should be removed.

🤖 Prompt for AI Agents
In Frontend/src/pages/__tests__/Analytics.test.tsx around lines 22 to 33,
consolidate the many duplicate "import { it } from 'node:test'" statements into
a single import that includes all needed functions (e.g., import { it, describe,
beforeEach } from 'node:test'), then convert all Jest APIs to Vitest
equivalents: replace jest.mock() → vi.mock(), jest.fn() → vi.fn(), jest.spyOn()
→ vi.spyOn(), jest.clearAllMocks() → vi.clearAllMocks(), replace global.fetch =
jest.fn() with vi.stubGlobal('fetch', vi.fn()), and cast mocks using
(global.fetch as any) instead of (fetch as jest.Mock); finally remove the
redundant (global.fetch as any).mockClear() in beforeEach since
vi.clearAllMocks() already resets the stubbed fetch.

Comment on lines +1 to +155
/**
 * Tests for Email Preferences Service - Simple YES/NO toggle functionality
 */

import { emailPreferencesService } from '../emailPreferencesService';

// Mock fetch globally
global.fetch = jest.fn();

describe('EmailPreferencesService', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('getEmailPreference', () => {
    it('should fetch email preference successfully', async () => {
      const mockResponse = {
        email_notifications_enabled: true
      };

      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: true,
        json: async () => mockResponse,
      });

      const result = await emailPreferencesService.getEmailPreference();

      expect(fetch).toHaveBeenCalledWith(
        'http://localhost:8000/api/email-preferences/',
        {
          headers: {
            'Content-Type': 'application/json',
          },
        }
      );
      expect(result).toEqual(mockResponse);
    });

    it('should handle fetch error', async () => {
      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: false,
        status: 500,
        statusText: 'Internal Server Error',
        json: async () => ({ detail: 'Server error' }),
      });

      await expect(emailPreferencesService.getEmailPreference()).rejects.toThrow(
        'Server error'
      );
    });

    it('should handle network error', async () => {
      (fetch as jest.Mock).mockRejectedValueOnce(new Error('Network error'));

      await expect(emailPreferencesService.getEmailPreference()).rejects.toThrow(
        'Network error'
      );
    });
  });

  describe('updateEmailPreference', () => {
    it('should update email preference successfully', async () => {
      const mockResponse = {
        email_notifications_enabled: false
      };

      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: true,
        json: async () => mockResponse,
      });

      const result = await emailPreferencesService.updateEmailPreference(false);

      expect(fetch).toHaveBeenCalledWith(
        'http://localhost:8000/api/email-preferences/',
        {
          method: 'PUT',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ email_notifications_enabled: false }),
        }
      );
      expect(result).toEqual(mockResponse);
    });

    it('should handle update error', async () => {
      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: false,
        status: 400,
        statusText: 'Bad Request',
        json: async () => ({ detail: 'Invalid request' }),
      });

      await expect(emailPreferencesService.updateEmailPreference(true)).rejects.toThrow(
        'Invalid request'
      );
    });

    it('should handle update with enabled preference', async () => {
      const mockResponse = {
        email_notifications_enabled: true
      };

      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: true,
        json: async () => mockResponse,
      });

      const result = await emailPreferencesService.updateEmailPreference(true);

      expect(fetch).toHaveBeenCalledWith(
        'http://localhost:8000/api/email-preferences/',
        {
          method: 'PUT',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ email_notifications_enabled: true }),
        }
      );
      expect(result.email_notifications_enabled).toBe(true);
    });
  });

  describe('error handling', () => {
    it('should handle malformed JSON response', async () => {
      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: false,
        status: 500,
        statusText: 'Internal Server Error',
        json: async () => {
          throw new Error('Invalid JSON');
        },
      });

      await expect(emailPreferencesService.getEmailPreference()).rejects.toThrow(
        'HTTP 500: Internal Server Error'
      );
    });

    it('should use default error message when detail is not provided', async () => {
      (fetch as jest.Mock).mockResolvedValueOnce({
        ok: false,
        status: 404,
        statusText: 'Not Found',
        json: async () => ({}),
      });

      await expect(emailPreferencesService.getEmailPreference()).rejects.toThrow(
        'HTTP 404: Not Found'
      );
    });
  });
});

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify remaining Jest usage across the repo and type support

Run to find lingering jest. usage and check tsconfig types:


🏁 Script executed:

#!/bin/bash
# Find Jest API usage in frontend tests
rg -n -C2 -g "Frontend/**" -e '\bjest\.' -e 'as jest\.' --type ts --type tsx

# Ensure Vitest types are included for TS
fd -a 'tsconfig*.json' Frontend | xargs -I {} sh -c 'echo "## {}"; cat "{}" | sed -n "1,120p" | rg -n "\btypes\b|\bjsx\b" -n -C2'

Length of output: 72


🏁 Script executed:

#!/bin/bash
# Search for Jest usage in Frontend test files (without problematic type flags)
rg -n -C2 'jest\.' Frontend/src --type ts -g "*.test.ts" -g "*.spec.ts" | head -50

# Check package.json for test framework setup
echo "## Frontend package.json (test-related)"
cat Frontend/package.json | jq '.devDependencies | to_entries[] | select(.key | contains("jest") or contains("vitest") or contains("test"))' 2>/dev/null || grep -A5 -B5 -i "jest\|vitest" Frontend/package.json | head -40

# Read full tsconfig.json
echo "## Full tsconfig.json"
cat Frontend/tsconfig.json

Length of output: 4355


🏁 Script executed:

#!/bin/bash
# Read tsconfig.app.json
echo "## Frontend/tsconfig.app.json"
cat Frontend/tsconfig.app.json 2>/dev/null || echo "File not found"

# Read raw package.json to see test dependencies
echo ""
echo "## Frontend/package.json (raw, all content)"
cat Frontend/package.json 2>/dev/null | head -100

# Find all test files in Frontend
echo ""
echo "## All test files in Frontend"
find Frontend/src -type f \( -name "*.test.ts" -o -name "*.spec.ts" \) 2>/dev/null

Length of output: 3221


Install Jest and configure TypeScript types for test support

The test file uses Jest API (jest.fn(), jest.clearAllMocks(), jest.Mock) but Jest is not installed and types are not configured:

  • Frontend/package.json has no jest or @types/jest in devDependencies
  • Frontend/tsconfig.app.json has no types compiler option for Jest globals
  • This will cause TypeScript compilation errors on lines using jest.* and the jest.Mock type cast

Add jest and @types/jest to devDependencies, configure TypeScript types, and add a test script to package.json.

🤖 Prompt for AI Agents
In Frontend/src/services/__tests__/emailPreferencesService.test.ts (lines
1-155): the test uses Jest globals and types (jest.fn, jest.Mock,
jest.clearAllMocks) but the project lacks Jest and @types/jest and TypeScript
doesn't include Jest types, causing compile errors; to fix, add jest and
@types/jest to Frontend/package.json devDependencies and add an npm script
("test": "jest" or your preferred runner), then update the frontend TypeScript
config (e.g., Frontend/tsconfig.app.json or tsconfig.json used for tests) to
include "types": ["jest"] under the "compilerOptions" (or add a separate
tsconfig.spec.json that references jest types), and ensure Jest is configured
(basic jest.config.js/ts or package.json jest field) so tests run correctly.

Comment on lines +5 to +13
import { emailPreferencesService } from '../emailPreferencesService';

// Mock fetch globally
global.fetch = jest.fn();

describe('EmailPreferencesService', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

⚠️ Potential issue | 🔴 Critical

Tests use Jest API/types under Vitest; migrate to vi and stub fetch correctly

Current usage of jest.fn/clearAllMocks and jest.Mock will fail without a shim. Prefer native Vitest APIs and stub globals safely.

-import { emailPreferencesService } from '../emailPreferencesService';
+import { emailPreferencesService } from '../emailPreferencesService';
+import { vi, Mock, beforeEach, afterEach } from 'vitest'
@@
-// Mock fetch globally
-global.fetch = jest.fn();
+// Mock fetch globally with Vitest
+const fetchMock = vi.fn()
+vi.stubGlobal('fetch', fetchMock)
@@
-  beforeEach(() => {
-    jest.clearAllMocks();
-  });
+  beforeEach(() => {
+    vi.clearAllMocks();
+  });
+  afterEach(() => {
+    // restore stubbed globals to avoid cross‑test bleed
+    // @ts-ignore
+    vi.unstubAllGlobals?.()
+  })

And replace all occurrences of:

  • (fetch as jest.Mock) → fetchMock
  • jest.fn() → vi.fn()

Example updates:

-      (fetch as jest.Mock).mockResolvedValueOnce({
+      fetchMock.mockResolvedValueOnce({
         ok: true,
         json: async () => mockResponse,
       });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-import { emailPreferencesService } from '../emailPreferencesService';
-// Mock fetch globally
-global.fetch = jest.fn();
-describe('EmailPreferencesService', () => {
-  beforeEach(() => {
-    jest.clearAllMocks();
-  });
+import { emailPreferencesService } from '../emailPreferencesService';
+import { vi, Mock, beforeEach, afterEach } from 'vitest'
+// Mock fetch globally with Vitest
+const fetchMock = vi.fn()
+vi.stubGlobal('fetch', fetchMock)
+describe('EmailPreferencesService', () => {
+  beforeEach(() => {
+    vi.clearAllMocks();
+  });
+  afterEach(() => {
+    // restore stubbed globals to avoid cross‑test bleed
+    // @ts-ignore
+    vi.unstubAllGlobals?.()
+  })
🤖 Prompt for AI Agents
In Frontend/src/services/__tests__/emailPreferencesService.test.ts around lines
5-13, the test file uses Jest globals (jest.fn, jest.clearAllMocks and jest.Mock
casts) which will fail under Vitest; replace jest.fn() with vi.fn(), replace
jest.clearAllMocks() with vi.clearAllMocks() (or vi.restoreAllMocks() if you
prefer restoring spies), and stub fetch by creating a typed fetchMock (e.g.
const fetchMock = vi.fn()) and assign it to global.fetch with a safe cast
(global.fetch = fetchMock as unknown as typeof fetch); also update all
occurrences of (fetch as jest.Mock) to use the new fetchMock variable instead.

Comment on lines +1 to +3
import '@testing-library/jest-dom'
import { vi } from 'vitest'


🛠️ Refactor suggestion | 🟠 Major

Add Jest→Vitest compatibility shim (short‑term) to unblock current tests using jest.*

Several tests call jest.fn/clearAllMocks/mock. Provide an alias now; migrate to vi.* next.

Apply this at the top:

 import '@testing-library/jest-dom'
 import { vi } from 'vitest'
 
+// Temporary Jest compatibility for Vitest
+// Remove once all tests use `vi.*`
+// eslint-disable-next-line @typescript-eslint/no-explicit-any
+;(globalThis as any).jest = vi
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 import '@testing-library/jest-dom'
 import { vi } from 'vitest'
 
+// Temporary Jest compatibility for Vitest
+// Remove once all tests use `vi.*`
+// eslint-disable-next-line @typescript-eslint/no-explicit-any
+;(globalThis as any).jest = vi
🤖 Prompt for AI Agents
In Frontend/src/test-setup.ts around lines 1 to 3, tests still call jest.* but
the project uses Vitest; add a short-term compatibility shim at the top that
assigns a global jest object delegating to vi (e.g., globalThis.jest = { fn:
vi.fn, spyOn: vi.spyOn, clearAllMocks: vi.clearAllMocks, resetAllMocks:
vi.resetAllMocks, restoreAllMocks: vi.restoreAllMocks, mock: vi.mock,
fnMockImplementation: vi.fn } or similar mappings) so existing tests using
jest.fn/clearAllMocks/mock continue to work; place this shim before any imports
that run tests and keep it temporary while migrating tests to use vi.*.

Comment on lines +5 to +12
const localStorageMock = {
  getItem: vi.fn(),
  setItem: vi.fn(),
  removeItem: vi.fn(),
  clear: vi.fn(),
};
global.localStorage = localStorageMock;


⚠️ Potential issue | 🔴 Critical

localStorage mock is not Storage‑shape; will fail type checks and return undefined from getItem

Implement full Storage interface (length/key) and return null when missing. Also avoid direct assignment to global.localStorage.

Apply this diff:

-import { vi } from 'vitest'
+import { vi } from 'vitest'

 // Mock localStorage
-const localStorageMock = {
-  getItem: vi.fn(),
-  setItem: vi.fn(),
-  removeItem: vi.fn(),
-  clear: vi.fn(),
-};
-global.localStorage = localStorageMock;
+const store = new Map<string, string>()
+const localStorageMock: Storage = {
+  get length() { return store.size },
+  clear: vi.fn(() => { store.clear() }),
+  getItem: vi.fn((key: string) => (store.has(key) ? store.get(key)! : null)),
+  key: vi.fn((index: number) => Array.from(store.keys())[index] ?? null),
+  removeItem: vi.fn((key: string) => { store.delete(key) }),
+  setItem: vi.fn((key: string, value: string) => { store.set(key, String(value)) }),
+}
+Object.defineProperty(window, 'localStorage', {
+  configurable: true,
+  writable: true,
+  value: localStorageMock,
+})
🤖 Prompt for AI Agents
In Frontend/src/test-setup.ts around lines 5 to 12, the localStorage mock
currently lacks the full Storage shape (missing length and key) and getItem
returns undefined which will fail type checks and runtime code expecting
string|null; also avoid direct assignment to global.localStorage. Replace the
mock with an object that implements the Storage interface: maintain an internal
Map, implement getItem(key): string|null, setItem(key, value): void (store
strings), removeItem(key): void, clear(): void, key(index): string|null, and a
length getter that returns the current size; then attach it to the global object
using Object.defineProperty(global, 'localStorage', { value: mock, configurable:
true }) (or globalThis) instead of direct assignment.

@Saahi30
Collaborator Author

Saahi30 commented Oct 30, 2025

not relevant for the moment... closing

@Saahi30 Saahi30 closed this Oct 30, 2025