feat(auth): implement core authentication infrastructure#36

Merged
Patrick-Ehimen merged 1 commit into main from feat/auth-infrastructure
Nov 15, 2025
Conversation

@Patrick-Ehimen
Owner

@Patrick-Ehimen Patrick-Ehimen commented Nov 15, 2025

This PR implements the foundational authentication infrastructure to enable multi-client API key support in the Lighthouse MCP server.

New Authentication Components

1. SecureKeyHandler (src/auth/SecureKeyHandler.ts)

  • Secure API key hashing for logging and caching
  • Key sanitization (shows only first/last 4 characters)
  • Timing-safe comparison to prevent timing attacks
  • Memory cleanup utilities
  • Format validation
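The key-handling behaviours above can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: method names follow the PR description, but the internals (SHA-256 hashing, the exact sanitization format) are assumptions.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hypothetical sketch of the SecureKeyHandler described above.
class SecureKeyHandler {
  /** Hash the key so the raw value never appears in logs or cache keys. */
  static hashKey(apiKey: string): string {
    return createHash("sha256").update(apiKey).digest("hex");
  }

  /** Show only the first and last 4 characters, e.g. "abcd...wxyz". */
  static sanitizeForLogs(apiKey: string): string {
    if (apiKey.length <= 8) return "****";
    return `${apiKey.slice(0, 4)}...${apiKey.slice(-4)}`;
  }

  /** Constant-time comparison to avoid leaking key contents via timing.
      Hashing first gives equal-length buffers, as timingSafeEqual requires. */
  static secureCompare(a: string, b: string): boolean {
    const ha = createHash("sha256").update(a).digest();
    const hb = createHash("sha256").update(b).digest();
    return timingSafeEqual(ha, hb);
  }
}
```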

2. KeyValidationCache (src/auth/KeyValidationCache.ts)

  • LRU cache with configurable size and TTL
  • Automatic cleanup of expired entries
  • LRU eviction when cache is full
  • Cache statistics tracking
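A minimal LRU-with-TTL cache matching the behaviour listed above might look like this. The entry shape and constructor parameters are assumptions for illustration, not the PR's exact code; it relies on `Map` preserving insertion order to track recency.

```typescript
// Hypothetical sketch of the KeyValidationCache described above.
interface CacheEntry<V> { value: V; expiresAt: number; }

class KeyValidationCache<V> {
  private cache = new Map<string, CacheEntry<V>>();
  constructor(private maxSize: number, private ttlMs: number) {}

  get(keyHash: string): V | null {
    const entry = this.cache.get(keyHash);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {   // expired: drop the entry
      this.cache.delete(keyHash);
      return null;
    }
    // Re-insert to mark as most recently used (Map preserves insertion order).
    this.cache.delete(keyHash);
    this.cache.set(keyHash, entry);
    return entry.value;
  }

  set(keyHash: string, value: V): void {
    if (this.cache.size >= this.maxSize && !this.cache.has(keyHash)) {
      // Evict the least recently used entry: the first key in insertion order.
      const oldest = this.cache.keys().next().value;
      if (oldest !== undefined) this.cache.delete(oldest);
    }
    this.cache.set(keyHash, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```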

3. RateLimiter (src/auth/RateLimiter.ts)

  • Sliding window rate limiting algorithm
  • Per-key rate limiting
  • Configurable requests per minute
  • Retry-after information for rate-limited requests
  • Automatic cleanup of old entries
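The sliding-window idea can be sketched as below. The `RateLimitResult` shape is an assumption based on the review discussion further down; the real implementation keys on a key hash and adds burst handling.

```typescript
// Hypothetical sliding-window limiter along the lines described above.
interface RateLimitResult { allowed: boolean; remaining: number; retryAfterMs?: number; }

class RateLimiter {
  private timestamps = new Map<string, number[]>();
  constructor(private requestsPerMinute: number) {}

  isAllowed(keyHash: string, now = Date.now()): RateLimitResult {
    const windowStart = now - 60_000;
    // Keep only requests inside the last minute (the sliding window).
    const recent = (this.timestamps.get(keyHash) ?? []).filter(t => t > windowStart);
    if (recent.length >= this.requestsPerMinute) {
      // The window frees up when the oldest request falls out of it.
      return { allowed: false, remaining: 0, retryAfterMs: recent[0] + 60_000 - now };
    }
    recent.push(now);
    this.timestamps.set(keyHash, recent);
    return { allowed: true, remaining: this.requestsPerMinute - recent.length };
  }
}
```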

4. RequestContext (src/auth/RequestContext.ts)

  • Request isolation and tracking
  • Unique request ID generation
  • API key and service isolation
  • Sanitized logging context
  • Request age tracking
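A sketch of the request-context idea: a per-request object whose log view carries only the key hash, never the raw key. Field and method names follow the PR; the bodies are assumptions.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch of the RequestContext described above.
class RequestContext {
  readonly requestId = randomUUID();
  readonly timestamp = new Date();
  constructor(readonly keyHash: string, readonly toolName: string) {}

  /** Safe to log: carries the key hash, never the raw API key. */
  toLogContext() {
    return { requestId: this.requestId, keyHash: this.keyHash, tool: this.toolName };
  }

  /** Milliseconds since the request was created. */
  getAge(now = Date.now()): number {
    return now - this.timestamp.getTime();
  }

  isExpired(timeoutMs: number, now = Date.now()): boolean {
    return this.getAge(now) > timeoutMs;
  }
}
```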

5. LighthouseServiceFactory (src/auth/LighthouseServiceFactory.ts)

  • Service instance pooling by API key
  • Configurable pool size
  • LRU eviction for pool management
  • Automatic cleanup of expired services
  • Service statistics
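The pooling behaviour can be illustrated with a bounded, LRU-evicting map keyed by API key. `LighthouseService` here is a stand-in for the real class, and the synchronous shape is a simplification of the factory's async `getService`.

```typescript
// Stand-in for the real LighthouseService.
class LighthouseService {
  constructor(readonly apiKey: string) {}
}

// Hypothetical sketch of the LighthouseServiceFactory pooling described above.
class LighthouseServiceFactory {
  private services = new Map<string, LighthouseService>();
  constructor(private poolSize: number) {}

  getService(apiKey: string): LighthouseService {
    let svc = this.services.get(apiKey);
    if (svc) {
      this.services.delete(apiKey);   // refresh LRU position
    } else {
      if (this.services.size >= this.poolSize) {
        // Evict the least recently used service instance.
        const oldest = this.services.keys().next().value;
        if (oldest !== undefined) this.services.delete(oldest);
      }
      svc = new LighthouseService(apiKey);
    }
    this.services.set(apiKey, svc);
    return svc;
  }
}
```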

6. AuthManager (src/auth/AuthManager.ts)

  • Main authentication coordinator
  • API key validation with caching
  • Fallback to default API key (backward compatibility)
  • Rate limiting enforcement
  • Authentication result tracking
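The fallback logic in particular is easy to illustrate: a per-request key wins, otherwise the configured default keeps existing single-key setups working. The config shape here is inferred from the diagrams below; this is a sketch, not the PR's implementation.

```typescript
// Hypothetical config shape, inferred from the AUTHCONFIG diagram.
interface AuthConfig { defaultApiKey?: string; requireAuthentication: boolean; }

// Sketch of AuthManager.getEffectiveApiKey's default/fallback behaviour.
function getEffectiveApiKey(config: AuthConfig, requestKey?: string): string {
  if (requestKey && requestKey.trim().length > 0) return requestKey;
  if (config.defaultApiKey) return config.defaultApiKey;  // backward compatibility
  if (config.requireAuthentication) {
    throw new Error("No API key provided and no default configured");
  }
  return "";
}
```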

Summary by Sourcery

Implement foundational authentication infrastructure with multi-client API key support by introducing secure key utilities, caching, rate limiting, service pooling, and request context management, and integrate authentication and performance configurations into the server and tools.

New Features:

  • Add SecureKeyHandler, KeyValidationCache, RateLimiter, RequestContext, LighthouseServiceFactory, and AuthManager modules to handle API key hashing, validation, caching, rate-limiting, request isolation, and service pooling

Enhancements:

  • Extend server configuration with authentication and performance settings including defaults
  • Update all Lighthouse MCP tools to accept an optional apiKey parameter for per-request authentication

@sourcery-ai

sourcery-ai bot commented Nov 15, 2025

Reviewer's Guide

This PR introduces a full authentication infrastructure for multi-client API key support by adding new auth modules (key handling, caching, rate limiting, request context, service pooling, and manager), extending server configuration with auth and performance defaults, and enabling optional per-request API keys in all MCP tools.

ER diagram for new authentication configuration types

erDiagram
  AUTHCONFIG {
    string defaultApiKey
    boolean enablePerRequestAuth
    boolean requireAuthentication
    CACHECONFIG keyValidationCache
    RATELIMITCONFIG rateLimiting
  }
  CACHECONFIG {
    boolean enabled
    int maxSize
    int ttlSeconds
    int cleanupIntervalSeconds
  }
  RATELIMITCONFIG {
    boolean enabled
    int requestsPerMinute
    int burstLimit
    boolean keyBasedLimiting
  }
  PERFORMANCECONFIG {
    int servicePoolSize
    int serviceTimeoutMinutes
    int concurrentRequestLimit
  }
  AUTHCONFIG ||--|{ CACHECONFIG : contains
  AUTHCONFIG ||--|{ RATELIMITCONFIG : contains

Class diagram for new authentication infrastructure

classDiagram
  class AuthManager {
    - config: AuthConfig
    - cache: KeyValidationCache
    - rateLimiter: RateLimiter
    + validateApiKey(apiKey: string): Promise<ValidationResult>
    + getEffectiveApiKey(requestKey?: string): Promise<string>
    + authenticate(requestKey?: string): Promise<AuthenticationResult>
    + sanitizeApiKey(apiKey: string): string
    + isRateLimited(apiKey: string): boolean
    + invalidateKey(apiKey: string): void
    + getCacheStats()
    + getRateLimitStatus(apiKey: string)
    + destroy(): void
  }
  class KeyValidationCache {
    - cache: Map<string, CacheEntry>
    - config: CacheConfig
    + get(keyHash: string): ValidationResult | null
    + set(keyHash: string, result: ValidationResult): void
    + invalidate(keyHash: string): void
    + clear(): void
    + getStats()
    + destroy(): void
  }
  class RateLimiter {
    - limits: Map<string, RateLimitEntry>
    - config: RateLimitConfig
    + isAllowed(keyHash: string): RateLimitResult
    + recordRequest(keyHash: string): void
    + getStatus(keyHash: string): RateLimitResult
    + reset(keyHash: string): void
    + clear(): void
    + destroy(): void
  }
  class SecureKeyHandler {
    + hashKey(apiKey: string): string
    + sanitizeForLogs(apiKey: string): string
    + secureCompare(a: string, b: string): boolean
    + clearFromMemory(obj: any, keys: string[]): void
    + isValidFormat(apiKey: string): boolean
  }
  class RequestContext {
    + apiKey: string
    + keyHash: string
    + service: ILighthouseService
    + toolName: string
    + requestId: string
    + timestamp: Date
    + toLogContext(): LogContext
    + getAge(): number
    + isExpired(timeoutMs: number): boolean
  }
  class LighthouseServiceFactory {
    - services: Map<string, ServiceEntry>
    - config: PerformanceConfig
    + createService(apiKey: string): Promise<ILighthouseService>
    + getService(apiKey: string): Promise<ILighthouseService>
    + removeService(apiKey: string): void
    + clear(): void
    + getStats()
    + destroy(): void
  }
  AuthManager --> KeyValidationCache
  AuthManager --> RateLimiter
  AuthManager --> SecureKeyHandler
  KeyValidationCache --> CacheEntry
  RateLimiter --> RateLimitEntry
  RequestContext --> ILighthouseService
  RequestContext --> LogContext
  LighthouseServiceFactory --> ServiceEntry
  LighthouseServiceFactory --> ILighthouseService

File-Level Changes

Change Details Files
Introduce core authentication components
  • Implement SecureKeyHandler for key hashing, sanitization, secure compare, memory cleanup, and format validation
  • Add KeyValidationCache with LRU eviction, TTL cleanup, and stats tracking
  • Create RateLimiter using a sliding-window algorithm with per-key limits and retry-after info
  • Build RequestContext for request isolation, ID generation, sanitized logging, and age tracking
  • Develop LighthouseServiceFactory for API key–isolated service pooling, LRU eviction, and cleanup
  • Implement AuthManager to coordinate key validation, caching, rate limiting, default/fallback logic, and authentication results
apps/mcp-server/src/auth/SecureKeyHandler.ts
apps/mcp-server/src/auth/KeyValidationCache.ts
apps/mcp-server/src/auth/RateLimiter.ts
apps/mcp-server/src/auth/RequestContext.ts
apps/mcp-server/src/auth/LighthouseServiceFactory.ts
apps/mcp-server/src/auth/AuthManager.ts
apps/mcp-server/src/auth/types.ts
apps/mcp-server/src/auth/index.ts
Extend server configuration with authentication and performance defaults
  • Import AuthConfig and PerformanceConfig into server-config
  • Define DEFAULT_AUTH_CONFIG and DEFAULT_PERFORMANCE_CONFIG with sensible defaults
  • Add authentication and performance sections to DEFAULT_SERVER_CONFIG
apps/mcp-server/src/config/server-config.ts
Enable per-request API key support in MCP tools
  • Add optional apiKey to BaseToolParams
  • Update each Lighthouse*Tool input schema and params interface to include apiKey
apps/mcp-server/src/tools/types.ts
apps/mcp-server/src/tools/LighthouseCreateDatasetTool.ts
apps/mcp-server/src/tools/LighthouseFetchFileTool.ts
apps/mcp-server/src/tools/LighthouseGenerateKeyTool.ts
apps/mcp-server/src/tools/LighthouseGetDatasetTool.ts
apps/mcp-server/src/tools/LighthouseListDatasetsTool.ts
apps/mcp-server/src/tools/LighthouseSetupAccessControlTool.ts
apps/mcp-server/src/tools/LighthouseUpdateDatasetTool.ts
apps/mcp-server/src/tools/LighthouseUploadFileTool.ts


@Patrick-Ehimen Patrick-Ehimen marked this pull request as ready for review November 15, 2025 13:18

@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • Extend all tool input parameter interfaces from BaseToolParams instead of manually adding apiKey to each tool to reduce duplication.
  • Respect the enablePerRequestAuth flag in AuthManager to conditionally disable per-request key validation when configured.
  • Ensure RateLimiter and KeyValidationCache fully honor their configuration options (e.g. burstLimit, keyBasedLimiting) and track cache hit/miss counts so that stats like hitRate are meaningful.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Extend all tool input parameter interfaces from BaseToolParams instead of manually adding apiKey to each tool to reduce duplication.
- Respect the enablePerRequestAuth flag in AuthManager to conditionally disable per-request key validation when configured.
- Ensure RateLimiter and KeyValidationCache fully honor their configuration options (e.g. burstLimit, keyBasedLimiting) and track cache hit/miss counts so that stats like hitRate are meaningful.

## Individual Comments

### Comment 1
<location> `apps/mcp-server/src/auth/RateLimiter.ts:23` </location>
<code_context>
+   * Check if a request is allowed for the given key
+   */
+  isAllowed(keyHash: string): RateLimitResult {
+    if (!this.config.enabled) {
+      return {
+        allowed: true,
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Rate limiting logic does not enforce burstLimit.

BurstLimit from RateLimitConfig should be enforced to prevent short-term request spikes, not just requestsPerMinute.

Suggested implementation:

```typescript
  isAllowed(keyHash: string): RateLimitResult {
    if (!this.config.enabled) {
      return {
        allowed: true,
        remaining: Infinity,
        resetTime: new Date(Date.now() + 60000),
      };
    }

    const now = Date.now();
    const windowStart = now - 60 * 1000; // 1 minute window

    // Burst limit enforcement
    // Use a 1 second window for burst limit, or make it configurable
    const burstWindowMs = this.config.burstLimitWindowMs || 1000;
    const burstWindowStart = now - burstWindowMs;

    let entry = this.limits.get(keyHash);
    if (!entry) {

```

```typescript
    let entry = this.limits.get(keyHash);
    if (!entry) {
      entry = {
        count: 0,
        timestamps: [],
        lastRequest: now,
      };
      this.limits.set(keyHash, entry);
    }

    // Remove old timestamps outside burst window
    entry.timestamps = entry.timestamps.filter(ts => ts > burstWindowStart);

    // Enforce burst limit
    if (entry.timestamps.length >= this.config.burstLimit) {
      return {
        allowed: false,
        remaining: 0,
        resetTime: new Date(entry.timestamps[0] + burstWindowMs),
        reason: 'Burst limit exceeded',
      };
    }

    // Add current timestamp for burst tracking
    entry.timestamps.push(now);

```

1. Ensure that `RateLimitConfig` includes `burstLimit` and optionally `burstLimitWindowMs` (default to 1000ms if not set).
2. Make sure that the `entry` object for each key includes a `timestamps: number[]` property for tracking request times.
3. You may need to update the cleanup logic to remove old timestamps from entries.
4. If you want to provide a more specific error message, update the `RateLimitResult` type to include a `reason` field.
</issue_to_address>

### Comment 2
<location> `apps/mcp-server/src/auth/LighthouseServiceFactory.ts:29` </location>
<code_context>
+    const service = new LighthouseService(apiKey);
+
+    // Initialize if the service has an initialize method
+    if (service.initialize) {
+      await service.initialize();
+    }
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Service initialization is conditionally awaited, but errors are not handled.

Handle errors from LighthouseService.initialize to prevent unhandled promise rejections in this factory method.
</issue_to_address>

### Comment 3
<location> `apps/mcp-server/src/auth/KeyValidationCache.ts:81` </location>
<code_context>
+  /**
+   * Get cache statistics
+   */
+  getStats(): {
+    size: number;
+    maxSize: number;
</code_context>

<issue_to_address>
**suggestion:** Cache hit rate is always zero in getStats.

The hitRate property is currently hardcoded. To accurately reflect cache performance, implement tracking for cache hits and misses.

Suggested implementation:

```typescript
  private cacheHits: number = 0;
  private cacheMisses: number = 0;

  /**
   * Get cache statistics
   */
  getStats(): {
    size: number;
    maxSize: number;
    hitRate: number;
  } {
    const total = this.cacheHits + this.cacheMisses;
    const hitRate = total === 0 ? 0 : this.cacheHits / total;
    return {
      size: this.cache.size,
      maxSize: this.config.maxSize,
      hitRate,
    };
  }

```

You must increment `this.cacheHits` and `this.cacheMisses` in the cache access method (e.g., in your `get(key)` method):
- If the cache contains the key, increment `this.cacheHits`.
- If the cache does not contain the key, increment `this.cacheMisses`.

Example:
```ts
get(key: string): ValueType | undefined {
  if (this.cache.has(key)) {
    this.cacheHits++;
    return this.cache.get(key);
  } else {
    this.cacheMisses++;
    return undefined;
  }
}
```
Make sure to add this logic wherever cache accesses occur.
</issue_to_address>

### Comment 4
<location> `apps/mcp-server/src/auth/SecureKeyHandler.ts:33-43` </location>
<code_context>
+  /**
+   * Clear sensitive data from memory
+   */
+  static clearFromMemory(obj: any, keys: string[]): void {
+    keys.forEach((key) => {
+      if (obj[key]) {
</code_context>

<issue_to_address>
**🚨 suggestion (security):** clearFromMemory may not reliably clear sensitive data from memory.

JavaScript's garbage collection may retain data even after properties are set to null or deleted. For sensitive information, consider alternative methods or clearly document this limitation.

```suggestion
  /**
   * Attempt to clear sensitive data from memory.
   *
   * WARNING: JavaScript's garbage collection may retain data in memory
   * even after properties are set to null or deleted. This method does not guarantee
   * immediate or reliable removal of sensitive information from memory.
   * For highly sensitive data, consider using secure storage mechanisms outside of JavaScript.
   */
  static clearFromMemory(obj: any, keys: string[]): void {
    if (process.env.NODE_ENV === "development") {
      console.warn(
        "[SecureKeyHandler.clearFromMemory] WARNING: JavaScript's garbage collection may retain sensitive data in memory even after clearing. This method does not guarantee secure removal."
      );
    }
    keys.forEach((key) => {
      if (obj[key]) {
        obj[key] = null;
        delete obj[key];
      }
    });
  }
```
</issue_to_address>

@Patrick-Ehimen Patrick-Ehimen merged commit b372f4a into main Nov 15, 2025
1 check passed
