chore: refactor live server into a scalable structure #7483
Conversation
Walkthrough

Refactors the Live app to a modular, controller-driven architecture: introduces a singleton HocusPocus server manager and a Redis manager, updates the runtime entrypoint to start.js, adds controllers (health, collaboration, document conversion), centralizes startup/shutdown in start.ts, replaces manual logger usages with @plane/logger, and removes the legacy HocusPocus/extension/error/redis-url modules.

Note: other AI code review bot(s) were detected in this pull request; CodeRabbit will avoid duplicating their findings, which may lead to a less comprehensive review. Review skipped: draft detected.
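The singleton-manager shape the walkthrough describes can be sketched as follows. This is an illustrative reconstruction, not the PR's exact code: the class name `RedisManager` mirrors the PR's module, but the method bodies are assumptions.

```typescript
// Illustrative sketch of the singleton-manager pattern: one lazily created
// instance shared across the process. Method bodies are stand-ins.
class RedisManager {
  private static instance: RedisManager | null = null;
  private connected = false;

  private constructor() {}

  public static getInstance(): RedisManager {
    if (!RedisManager.instance) {
      RedisManager.instance = new RedisManager();
    }
    return RedisManager.instance;
  }

  public async initialize(): Promise<void> {
    // Real code would connect to Redis here, or no-op when Redis is disabled.
    this.connected = true;
  }

  public isClientConnected(): boolean {
    return this.connected;
  }
}
```

Every caller that does `RedisManager.getInstance()` gets the same object, which is what lets the server share one Redis connection across controllers.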
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor OS as Process
  participant Start as start.ts
  participant Srv as Server
  participant Redis as RedisManager
  participant HP as HocusPocusServerManager
  participant Exp as Express App
  OS->>Start: bootstrap()
  Start->>Srv: new Server()
  Start->>Srv: initialize()
  Srv->>Redis: initialize()
  Redis-->>Srv: ready / or no-op if disabled
  Srv->>HP: initialize()
  HP-->>Srv: HocusPocus instance
  Srv->>Exp: mount CONTROLLERS under LIVE_BASE_PATH
  Start->>Srv: listen()
  Srv-->>Start: listening
```

```mermaid
sequenceDiagram
  autonumber
  actor C as Client (WS)
  participant Exp as Express WS route /collaboration
  participant Ctrl as CollaborationController
  participant HP as HocusPocusServerManager
  participant HPS as HocusPocus Server
  C->>Exp: WS upgrade
  Exp->>Ctrl: handleConnection(ws, req)
  Ctrl->>HP: getServer()
  HP-->>Ctrl: HocusPocus instance
  Ctrl->>HPS: handleConnection(ws, req)
  HPS-->>C: Collaboration session established
  note over Ctrl,HPS: On error: log and close socket
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
- add registerControllers function abstracting both REST and WS controllers
- update logger to a simple JSON-based logger
@coderabbitai review

✅ Actions performed: Review triggered.
Pull Request Overview
This PR refactors the live server into a more scalable and modular architecture by separating concerns and introducing proper initialization patterns. The refactoring moves from a single monolithic server file to a structured approach with dedicated managers, controllers, and initialization sequences.
- Introduced separate initialization phases for Redis and HocusPocus server management
- Restructured server startup with proper dependency injection and graceful shutdown handling
- Migrated from inline route handlers to dedicated controller classes using decorators
Reviewed Changes
Copilot reviewed 17 out of 18 changed files in this pull request and generated 4 comments.
File | Description |
---|---|
apps/live/src/start.ts | New entry point that handles server initialization and graceful shutdown |
apps/live/src/server.ts | Refactored server class with cleaner separation of concerns and async initialization |
apps/live/src/redis.ts | New Redis manager singleton with connection pooling and error handling |
apps/live/src/hocuspocus.ts | New HocusPocus server manager replacing the previous extensions approach |
apps/live/src/middlewares/logger.ts | Removed deprecated manual logger export |
apps/live/src/controllers/*.ts | New controller classes replacing inline route handlers |
apps/live/src/core/lib/*.ts | Updated logger imports to use centralized logger |
apps/live/Dockerfile.live | Updated entry point to use new start.js file |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```typescript
constructor(private readonly hocusPocusServer: Hocuspocus) {}

@WebSocket("/")
handleConnection(ws: any, req: Request) {
  try {
    // Initialize the connection with Hocuspocus
```
The CollaborationController constructor expects a HocusPocus server instance, but controllers are instantiated without parameters in the CONTROLLERS array. This will cause runtime errors when the decorator system tries to instantiate the controller.
Suggested change:

```typescript
private hocusPocusServer?: Hocuspocus;

// Setter to inject Hocuspocus server instance after instantiation
setHocuspocusServer(server: Hocuspocus) {
  this.hocusPocusServer = server;
}

@WebSocket("/")
handleConnection(ws: any, req: Request) {
  try {
    // Initialize the connection with Hocuspocus
    if (!this.hocusPocusServer) {
      logger.error("Hocuspocus server instance not set in CollaborationController.");
      ws.close();
      return;
    }
```
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Actionable comments posted: 10
♻️ Duplicate comments (5)
apps/live/src/controllers/health.controller.ts (1)
Lines 6-13: Augment health with dependency checks (Redis, DB, etc.). Echoing prior feedback: a health endpoint used by orchestrators should reflect dependency readiness so unhealthy instances don't receive traffic. If feasible, include quick probes (with timeouts) to Redis/HocusPocus/etc., or split into /health/live and /health/ready.
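One way to implement the suggested quick probes is to race each dependency check against a timeout, so a readiness endpoint never hangs on a stuck dependency. This is a sketch under assumptions: `probe` stands in for a real check such as a Redis PING, and the helper name is hypothetical.

```typescript
// Race a dependency probe against a timeout so readiness checks stay fast.
// `probe` is a stand-in for e.g. a Redis PING issued through a real client.
async function probeWithTimeout(probe: () => Promise<void>, ms: number): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("probe timed out")), ms);
  });
  try {
    await Promise.race([probe(), timeout]);
    return true; // dependency answered in time
  } catch {
    return false; // failed or timed out: report not ready
  } finally {
    clearTimeout(timer);
  }
}
```

A `/health/ready` handler would then aggregate several such probes and return 503 when any reports false.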
apps/live/src/redis.ts (2)
Lines 119-121: Prefer driver status for readiness. Use the client's status rather than a parallel boolean to avoid drift. Apply this diff:

```diff
 public isClientConnected(): boolean {
-  return this.isConnected && this.redisClient !== null;
+  return this.redisClient?.status === "ready";
 }
```
Line 1: Follow-up on ioredis vs node-redis. Upstream ioredis docs steer users toward redis/node-redis. If a switch is planned, consider tracking it as a separate PR to reduce blast radius here.

Do you want me to open a follow-up issue to evaluate the migration (perf, feature parity like auto-reconnect, cluster/sharding, and TLS/auth support)?
apps/live/src/server.ts (1)
Lines 34-45: Ack: switched to async/await in initialize(), per prior feedback. This addresses the earlier concern about Promise chains and unhandled rejections. Nice.
apps/live/src/hocuspocus.ts (1)
Line 16: Type-only import for TUserDetails looks good. Matches prior feedback; keeps the runtime bundle lean.
🧹 Nitpick comments (28)
apps/live/Dockerfile.live (1)
Lines 62-64: Harden the runner-stage Dockerfile for a safer, production-ready runtime. Verified that the app defaults to port 3000 (`process.env.PORT || 3000` in apps/live/src/server.ts:28), so the exposed port is correct. Please consider these optional refactors in apps/live/Dockerfile.live (runner stage, lines 62-64) to reduce blast radius and improve signal handling:

- Install and use tini to reap zombies and forward signals
- Set NODE_ENV=production for optimized behavior
- Drop root privileges by switching to the node user

Suggested diff:

```diff
 FROM base AS runner
 WORKDIR /app
-RUN apk add --no-cache curl
 # existing setup steps...
+RUN apk add --no-cache tini
+ENV NODE_ENV=production
 EXPOSE 3000
-USER root
+USER node
-CMD ["node", "apps/live/dist/start.js"]
+ENTRYPOINT ["/sbin/tini", "--"]
+CMD ["node", "apps/live/dist/start.js"]
```

apps/live/package.json (1)
Lines 22-49: Remove unused "morgan" dependency. It looks like only pino-http is actually imported in apps/live/src/middlewares/logger.ts (and there are no references to morgan elsewhere), so the morgan package isn't used at runtime and can be removed to keep dependencies lean.

- Remove the morgan entry from apps/live/package.json
- Run npm uninstall morgan (or the equivalent) in the apps/live directory
- Confirm there are no stray import or require calls for morgan in the codebase

Suggested diff:

```diff
--- a/apps/live/package.json
+++ b/apps/live/package.json
@@ dependencies
-    "morgan": "1.10.1",
```

apps/live/src/core/lib/page.ts (2)
Line 1: Prefer structured error logging; narrow the catch variable type. Assuming @plane/logger is pino-based, pass errors as objects for proper serialization and stack capture. Also, in TS with useUnknownInCatchVariables, error is unknown, so narrow it.

Apply:

```diff
-import { logger } from "@plane/logger";
+import { logger } from "@plane/logger";
@@
-    logger.error("Update error:", error);
+    logger.error({ err: error instanceof Error ? error : new Error(String(error)) }, "Update error");
@@
-    logger.error("Error while transforming from HTML to Uint8Array", error);
+    logger.error({ err: error instanceof Error ? error : new Error(String(error)) }, "Error while transforming from HTML to Uint8Array");
@@
-    logger.error("Fetch error:", error);
+    logger.error({ err: error instanceof Error ? error : new Error(String(error)) }, "Fetch error");
```

Also applies to: 33-33, 51-51, 78-78
Line 21: Be consistent on fallback return values (undefined vs null). updatePageDescription returns undefined on missing params, while fetchPageDescriptionBinary returns null. Consider standardizing on one sentinel for "no data" across these APIs.
Also applies to: 63-63
apps/live/src/controllers/convert-document.controller.ts (2)
Lines 12-23: Validate input more strictly and return actionable 400 errors. The current check only guards against undefined. Recommend validating types and constraining variant to known values to prevent downstream errors.

Example (if using zod):

```typescript
const Body = z.object({
  description_html: z.string().min(1),
  variant: z.enum(["page", "issue", "comment"]),
});
const parsed = Body.safeParse(req.body);
if (!parsed.success) {
  return res.status(400).json({ message: "Invalid body", issues: parsed.error.issues });
}
```
Lines 29-33: Use structured error logging; avoid the string + error param pattern. Align with centralized logger patterns so errors are captured with their stack.

```diff
-    logger.error("Error in /convert-document endpoint:", error);
+    logger.error({ err: error instanceof Error ? error : new Error(String(error)) }, "Error in /convert-document endpoint");
```

apps/live/src/controllers/health.controller.ts (1)
Lines 7-13: Nit: remove the unnecessary async or await a readiness check. The function is async but doesn't await anything. Either remove async or add the dependency checks mentioned above.

Apply:

```diff
-  async healthCheck(_req: Request, res: Response) {
+  healthCheck(_req: Request, res: Response) {
     res.status(200).json({
       status: "OK",
       timestamp: new Date().toISOString(),
       version: process.env.APP_VERSION || "1.0.0",
     });
   }
```

apps/live/src/redis.ts (6)
Lines 35-49: Support auth/TLS and stricter validation in Redis URL resolution. Right now only REDIS_URL or host:port are considered. Add username/password and TLS toggles; also reject non-numeric or non-positive ports.

Apply this diff:

```diff
 private getRedisUrl(): string {
   const redisUrl = process.env.REDIS_URL?.trim();
   const redisHost = process.env.REDIS_HOST?.trim();
   const redisPort = process.env.REDIS_PORT?.trim();
+  const redisUsername = process.env.REDIS_USERNAME?.trim();
+  const redisPassword = process.env.REDIS_PASSWORD?.trim();
+  const redisTls = (process.env.REDIS_TLS ?? process.env.REDIS_SSL)?.toLowerCase() === "true";
   if (redisUrl) {
     return redisUrl;
   }
-  if (redisHost && redisPort && !Number.isNaN(Number(redisPort))) {
-    return `redis://${redisHost}:${redisPort}`;
+  const port = Number(redisPort);
+  if (redisHost && Number.isFinite(port) && port > 0) {
+    const proto = redisTls ? "rediss" : "redis";
+    const auth = redisPassword
+      ? `${encodeURIComponent(redisUsername ?? "")}:${encodeURIComponent(redisPassword)}@`
+      : "";
+    return `${proto}://${auth}${redisHost}:${port}`;
   }
   return "";
 }
```
Lines 111-117: Reduce the log level to avoid noise when the client is intentionally disabled. Repeated warns in hot paths can spam logs (e.g., when REDIS_URL is intentionally omitted). Consider debug here.

Apply this diff:

```diff
 public getClient(): Redis | null {
   if (!this.redisClient || !this.isConnected) {
-    logger.warn("Redis client not available or not connected");
+    logger.debug("Redis client not available or not connected");
     return null;
   }
   return this.redisClient;
 }
```
Lines 123-137: Harden disconnect to avoid dangling listeners on forced close. On forced disconnect, remove listeners; also guard against rejected promises.

Apply this diff:

```diff
 public async disconnect(): Promise<void> {
   if (this.redisClient) {
     try {
       await this.redisClient.quit();
       logger.info("Redis client disconnected gracefully");
     } catch (error) {
       logger.error("Error disconnecting Redis client:", error);
       // Force disconnect if quit fails
       this.redisClient.disconnect();
+    } finally {
+      // Ensure no listeners linger on re-init
+      this.redisClient.removeAllListeners();
-    } finally {
       this.redisClient = null;
       this.isConnected = false;
     }
   }
 }
```
Lines 139-156: Validate TTL and prefer a single SET with EX when possible.

- Guard against non-positive TTL.
- Using SET with EX keeps options open for NX/XX in the future and avoids API branching.

Apply this diff:

```diff
 public async set(key: string, value: string, ttl?: number): Promise<boolean> {
   const client = this.getClient();
   if (!client) return false;
   try {
-    if (ttl) {
-      await client.setex(key, ttl, value);
+    if (typeof ttl === "number") {
+      if (ttl <= 0) {
+        logger.warn(`Skipping SET for ${key} due to non-positive ttl=${ttl}`);
+        return false;
+      }
+      await client.set(key, value, "EX", ttl);
     } else {
       await client.set(key, value);
     }
     return true;
```
Lines 169-180: Return deletion success based on affected keys. Return true only if a key was actually removed, to better signal effect.

Apply this diff:

```diff
 public async del(key: string): Promise<boolean> {
   const client = this.getClient();
   if (!client) return false;
   try {
-    await client.del(key);
-    return true;
+    const n = await client.del(key);
+    return n > 0;
   } catch (error) {
     logger.error(`Error deleting Redis key ${key}:`, error);
     return false;
   }
 }
```
Lines 182-193: exists: generalize the boolean check. For future multi-key usage, prefer result > 0.

Apply this diff:

```diff
 public async exists(key: string): Promise<boolean> {
 @@
   try {
-    const result = await client.exists(key);
-    return result === 1;
+    const result = await client.exists(key);
+    return result > 0;
```

apps/live/src/core/lib/authentication.ts (1)
Lines 20-21: Use structured logging with context and avoid double string interpolation. Include userId and let the logger format the error. This helps searchability and redaction.

Apply this diff:

```diff
-    logger.error("Failed to fetch current user:", error);
+    logger.error({ err: error as Error, userId }, "Failed to fetch current user");
```
Lines 1-5: Make the controller registry read-only and typed to avoid accidental mutation. Freezing/typing the array prevents runtime edits and helps DI tools with inference.

Apply this diff:

```diff
+type ControllerCtor = new (...args: any[]) => unknown;
-export const CONTROLLERS = [CollaborationController, ConvertDocumentController, HealthController];
+export const CONTROLLERS: readonly ControllerCtor[] = [
+  CollaborationController,
+  ConvertDocumentController,
+  HealthController,
+] as const;
```

apps/live/src/start.ts (2)
Lines 10-16: Consider awaiting listen if it's async. If Server.listen returns a Promise (e.g., after binding), await it so startup failures surface in this try/catch.

If listen() is async, update line 11 accordingly:

```diff
-  server.listen();
+  await server.listen();
```

If not, ignore.
Lines 18-19: Add SIGTERM/SIGINT handlers for graceful shutdowns (Kubernetes, systemd). This ensures clean teardown on pod stop or Ctrl+C.

Proposed addition (outside the selected lines, for clarity):

```typescript
process.once("SIGTERM", async () => {
  logger.info("SIGTERM received. Shutting down gracefully...");
  try {
    if (server) await server.destroy();
  } finally {
    process.exit(0);
  }
});
process.once("SIGINT", async () => {
  logger.info("SIGINT received. Shutting down gracefully...");
  try {
    if (server) await server.destroy();
  } finally {
    process.exit(0);
  }
});
```

apps/live/src/controllers/collaboration.controller.ts (3)
Lines 8-11: Remove or wire the metrics field. It's unused; either integrate it with your metrics system or drop it to keep the class lean.

Apply this diff:

```diff
 export class CollaborationController {
-  private metrics = {
-    errors: 0,
-  };
```
Lines 1-5: Type the WebSocket to avoid any and improve DX. Use ws' WebSocket type (or the one your WS lib exposes) to get autocompletion and compile-time checks.

Apply this diff:

```diff
-import type { Request } from "express";
+import type { Request } from "express";
+import type WebSocket from "ws";
@@
-  handleConnection(ws: any, req: Request) {
+  handleConnection(ws: WebSocket, req: Request) {
```
Lines 15-29: Attach the error handler before delegating and use once() to avoid leaks; add close logging. This prevents missing early errors and ensures the handler runs once.

Apply this diff:

```diff
 @WebSocket("/")
 handleConnection(ws: any, req: Request) {
   try {
-    // Initialize the connection with Hocuspocus
-    this.hocusPocusServer.handleConnection(ws, req);
-
-    // Set up error handling for the connection
-    ws.on("error", (error: any) => {
+    // Attach minimal handlers up front to avoid missing early errors
+    ws.once("error", (error: any) => {
       logger.error("WebSocket connection error:", error);
       ws.close();
     });
+    ws.once("close", (code: number, reason: Buffer) => {
+      logger.debug({ code, reason: reason?.toString() }, "WebSocket connection closed");
+    });
+
+    // Initialize the connection with Hocuspocus
+    this.hocusPocusServer.handleConnection(ws, req);
   } catch (error) {
     logger.error("WebSocket connection error:", error);
     ws.close();
   }
 }
```

apps/live/src/server.ts (4)
Lines 42-43: The logging message is misleading. The catch covers both Redis and HocusPocus init, so "Failed to setup Redis" is inaccurate.

```diff
-    logger.error("Failed to setup Redis:", error);
+    logger.error("Failed to initialize Live server components:", error);
```
Lines 71-75: Add a listener for HTTP server 'error' events. Capture listen errors (port in use, permission issues) and surface them via the logger.

```diff
 public listen() {
   this.serverInstance = this.app.listen(this.app.get("port"), () => {
     logger.info(`Plane Live server has started at port ${this.app.get("port")}`);
   });
+  this.serverInstance.on("error", (err: unknown) => {
+    logger.error("Express listen error:", err);
+  });
 }
```
Lines 18-22: Tighten types for the app/router/server fields. Using any here weakens safety. Prefer explicit types from express, http, and @hocuspocus/server.

```diff
-export class Server {
-  private app: any;
-  private router: any;
-  private hocuspocusServer: any;
-  private serverInstance: any;
+export class Server {
+  private app: express.Application;
+  private router: express.Router;
+  private hocuspocusServer: import("@hocuspocus/server").Hocuspocus | null = null;
+  private serverInstance: import("http").Server | null = null;
```
Line 58: CORS defaults may be too permissive for cookies. If WS/HTTP auth relies on cookies, consider restricting origin and enabling credentials based on env (e.g., CORS_ORIGIN).

```diff
-    this.app.use(cors());
+    this.app.use(
+      cors({
+        origin: process.env.CORS_ORIGIN?.split(",") ?? true, // array of origins or all
+        credentials: true,
+      }),
+    );
```

Would you like a small helper that parses CORS_ORIGIN into a validated array?
apps/live/src/hocuspocus.ts (4)
Lines 61-69: Avoid console.error; use the structured logger and don't log token-related errors verbatim. Use the shared logger and keep the message generic to avoid leaking authentication context. The fallback behavior remains unchanged.

```diff
-  } catch (error) {
-    // If token parsing fails, fallback to request headers
-    console.error("Token parsing failed, using request headers:", error);
+  } catch (_error) {
+    // If token parsing fails, fallback to request headers
+    logger.warn("Token parsing failed; falling back to request headers");
   } finally {
```
Lines 70-85: Clarify credential requirements when token parsing fails. If token parsing fails, userId stays undefined even if a cookie exists, leading to "Credentials not provided". Either accept a header-based userId (e.g., x-user-id) or fail early with a clearer message.

Proposed minimal tweak:

```diff
-  if (!cookie || !userId) {
-    throw new Error("Credentials not provided");
-  }
+  if (!cookie || !userId) {
+    throw new Error("Credentials not provided: expected JSON token with { id, cookie }");
+  }
```

If you intend to support header-only auth, I can wire a configurable header name here.
Lines 114-116: Augment error logs with sanitized context. Including pageId and documentType (not cookie/state) helps triage without leaking PII.

```diff
-    logger.error("Error in fetching document", error);
+    logger.error("Error in fetching document", {
+      error,
+      pageId,
+      documentType,
+    });
```

```diff
-    logger.error("Error in updating document:", error);
+    logger.error("Error in updating document:", {
+      error,
+      pageId,
+      documentType,
+    });
```

Also applies to: 137-139
Lines 189-195: Consider adding a destroy() on the manager for symmetric lifecycle control. This encapsulates teardown in the same place as setup and simplifies callers.

```diff
 export class HocusPocusServerManager {
 @@
   public static resetInstance(): void {
     HocusPocusServerManager.instance = null;
   }
+
+  public async destroy(): Promise<void> {
+    if (this.server) {
+      await this.server.destroy();
+      this.server = null;
+      this.isInitialized = false;
+    }
+  }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)

- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)

- apps/live/Dockerfile.live (1 hunks)
- apps/live/package.json (1 hunks)
- apps/live/src/controllers/collaboration.controller.ts (1 hunks)
- apps/live/src/controllers/convert-document.controller.ts (1 hunks)
- apps/live/src/controllers/health.controller.ts (1 hunks)
- apps/live/src/controllers/index.ts (1 hunks)
- apps/live/src/core/extensions/index.ts (0 hunks)
- apps/live/src/core/helpers/error-handler.ts (0 hunks)
- apps/live/src/core/hocuspocus-server.ts (0 hunks)
- apps/live/src/core/lib/authentication.ts (2 hunks)
- apps/live/src/core/lib/page.ts (4 hunks)
- apps/live/src/core/lib/utils/redis-url.ts (0 hunks)
- apps/live/src/hocuspocus.ts (1 hunks)
- apps/live/src/middlewares/logger.ts (0 hunks)
- apps/live/src/redis.ts (1 hunks)
- apps/live/src/server.ts (2 hunks)
- apps/live/src/start.ts (1 hunks)
💤 Files with no reviewable changes (5)
- apps/live/src/middlewares/logger.ts
- apps/live/src/core/lib/utils/redis-url.ts
- apps/live/src/core/hocuspocus-server.ts
- apps/live/src/core/helpers/error-handler.ts
- apps/live/src/core/extensions/index.ts
🔇 Additional comments (6)
apps/live/package.json (2)
Line 27: LGTM: shared logger/decorators adoption is consistent with the new architecture. Bringing in @plane/decorators and @plane/logger as workspace deps aligns with the controller-based routing and centralized logging. No concerns here.
Also applies to: 29-29
Lines 50-66: DevDependency versions validated. All pinned devDependencies have been confirmed to exist in the npm registry and resolve correctly under pnpm:
- [email protected] exists and installs successfully.
- @types/[email protected] exists and installs successfully.
- [email protected] exists and installs successfully.
No invalid semver strings were detected, and these packages are compatible with Node.js 22.
apps/live/src/redis.ts (1)
Lines 25-33: Good single in-flight connection guard.
The connectionPromise pattern here prevents thundering-herd connects and is clear/readable.apps/live/src/core/lib/authentication.ts (2)
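The guard being praised can be sketched in isolation: concurrent `initialize()` callers await one shared pending promise instead of each opening a connection. This is an illustrative reduction of the pattern, not the PR's exact code; the class and counter are hypothetical.

```typescript
// Single in-flight connection guard: concurrent initialize() calls share one
// pending connect, so only one handshake is ever in progress at a time.
class Connector {
  private connectionPromise: Promise<void> | null = null;
  public connectCalls = 0; // exposed only to demonstrate the guard

  public async initialize(): Promise<void> {
    if (this.connectionPromise) return this.connectionPromise;
    this.connectionPromise = this.connect().finally(() => {
      this.connectionPromise = null; // allow a fresh attempt later
    });
    return this.connectionPromise;
  }

  private async connect(): Promise<void> {
    this.connectCalls++;
    await new Promise((resolve) => setTimeout(resolve, 10)); // simulate I/O
  }
}
```

Three simultaneous `initialize()` calls resolve together but trigger only one `connect()`, which is exactly the thundering-herd protection the comment refers to.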
Lines 2-4: LGTM: import path and logger migration are consistent with the refactor.
No functional changes introduced; this aligns with the repo-wide logging standard.
Line 6: Confirm the UserService instance is stateless/thread-safe. A module-scoped singleton is fine if the service has no per-request state or cached headers. If it does, instantiate per request.

Would you like me to scan for mutable fields/side effects in UserService to confirm it's safe at module scope?
apps/live/src/hocuspocus.ts (1)
Lines 168-169: Verify that the 10s debounce suits UX and data-loss tolerance. A 10,000ms store debounce reduces write load but increases the risk of data loss on abrupt disconnects and may feel laggy for persistence. Confirm product requirements.
If you need, I can make debounce configurable via env (e.g., LIVE_STORE_DEBOUNCE_MS) with a safe default of 1000–2000ms.
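If the debounce is made configurable as suggested, a defensive env parse keeps a bad value from silently disabling persistence. A minimal sketch, assuming the proposed (not yet existing) `LIVE_STORE_DEBOUNCE_MS` variable:

```typescript
// Read the proposed LIVE_STORE_DEBOUNCE_MS env var, falling back to a safe
// default when the value is missing, non-numeric, or non-positive.
function getStoreDebounceMs(defaultMs: number = 2000): number {
  const raw = process.env.LIVE_STORE_DEBOUNCE_MS;
  if (raw === undefined) return defaultMs;
  const parsed = Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : defaultMs;
}
```

The returned value would then be passed as the store debounce when constructing the HocusPocus server.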
```typescript
private async connect(): Promise<void> {
  try {
    const redisUrl = this.getRedisUrl();

    if (!redisUrl) {
      logger.warn("No Redis URL provided, Redis functionality will be disabled");
      this.isConnected = false;
      return;
    }

    this.redisClient = new Redis(redisUrl, {
      lazyConnect: true,
      keepAlive: 30000,
      connectTimeout: 10000,
      commandTimeout: 5000,
      enableOfflineQueue: false,
      maxRetriesPerRequest: 3,
    });

    // Set up event listeners
    this.redisClient.on("connect", () => {
      logger.info("Redis client connected");
      this.isConnected = true;
    });

    this.redisClient.on("ready", () => {
      logger.info("Redis client ready");
      this.isConnected = true;
    });

    this.redisClient.on("error", (error) => {
      logger.error("Redis client error:", error);
      this.isConnected = false;
    });

    this.redisClient.on("close", () => {
      logger.warn("Redis client connection closed");
      this.isConnected = false;
    });

    this.redisClient.on("reconnecting", () => {
      logger.info("Redis client reconnecting...");
      this.isConnected = false;
    });

    // Connect to Redis
    await this.redisClient.connect();

    // Test the connection
    await this.redisClient.ping();
    logger.info("Redis connection test successful");
  } catch (error) {
    logger.error("Failed to initialize Redis client:", error);
    this.isConnected = false;
    throw error;
  } finally {
    this.connectionPromise = null;
  }
}
```
🛠️ Refactor suggestion
Tighten readiness semantics and reconnection policy.
- Only mark connected on "ready" (not "connect"); optionally wait for "ready" before ping.
- Add reconnectOnError to gracefully handle MOVED/READONLY type errors.
- If re-initializing after a failure, dispose any existing client before creating a new one to avoid leaked sockets/listeners.
Apply this diff:
```diff
 private async connect(): Promise<void> {
   try {
     const redisUrl = this.getRedisUrl();
 @@
-    this.redisClient = new Redis(redisUrl, {
+    if (this.redisClient) {
+      try {
+        await this.redisClient.quit();
+      } catch {
+        this.redisClient.disconnect();
+      }
+      this.redisClient.removeAllListeners();
+    }
+
+    this.redisClient = new Redis(redisUrl, {
       lazyConnect: true,
-      keepAlive: 30000,
       connectTimeout: 10000,
       commandTimeout: 5000,
       enableOfflineQueue: false,
       maxRetriesPerRequest: 3,
+      reconnectOnError: (err) => {
+        const msg = err.message || "";
+        // Retry on common transient/cluster errors
+        if (msg.includes("READONLY") || msg.includes("MOVED") || msg.includes("ASK")) return true;
+        return false;
+      },
     });
 @@
-    this.redisClient.on("connect", () => {
-      logger.info("Redis client connected");
-      this.isConnected = true;
-    });
+    this.redisClient.on("connect", () => {
+      logger.info("Redis client connected");
+    });
     this.redisClient.on("ready", () => {
       logger.info("Redis client ready");
       this.isConnected = true;
     });
 @@
-    // Connect to Redis
+    // Connect to Redis
     await this.redisClient.connect();
-    // Test the connection
-    await this.redisClient.ping();
+    // Test the connection after ready to ensure command flow is OK
+    if (this.redisClient.status !== "ready") {
+      await new Promise<void>((resolve) =>
+        this.redisClient!.once("ready", () => resolve())
+      );
+    }
+    await this.redisClient.ping();
     logger.info("Redis connection test successful");
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
private async connect(): Promise<void> {
  try {
    const redisUrl = this.getRedisUrl();

    if (!redisUrl) {
      logger.warn("No Redis URL provided, Redis functionality will be disabled");
      this.isConnected = false;
      return;
    }

    // Dispose any prior client to avoid leaking sockets/listeners
    if (this.redisClient) {
      try {
        await this.redisClient.quit();
      } catch {
        this.redisClient.disconnect();
      }
      this.redisClient.removeAllListeners();
    }

    this.redisClient = new Redis(redisUrl, {
      lazyConnect: true,
      connectTimeout: 10000,
      commandTimeout: 5000,
      enableOfflineQueue: false,
      maxRetriesPerRequest: 3,
      reconnectOnError: (err) => {
        const msg = err.message || "";
        // Retry on common transient/cluster errors
        if (msg.includes("READONLY") || msg.includes("MOVED") || msg.includes("ASK")) return true;
        return false;
      },
    });

    // Set up event listeners
    this.redisClient.on("connect", () => {
      logger.info("Redis client connected");
    });

    this.redisClient.on("ready", () => {
      logger.info("Redis client ready");
      this.isConnected = true;
    });

    this.redisClient.on("error", (error) => {
      logger.error("Redis client error:", error);
      this.isConnected = false;
    });

    this.redisClient.on("close", () => {
      logger.warn("Redis client connection closed");
      this.isConnected = false;
    });

    this.redisClient.on("reconnecting", () => {
      logger.info("Redis client reconnecting...");
      this.isConnected = false;
    });

    // Connect to Redis
    await this.redisClient.connect();

    // Test the connection after ready to ensure command flow is OK
    if (this.redisClient.status !== "ready") {
      await new Promise<void>((resolve) =>
        this.redisClient.once("ready", () => resolve())
      );
    }
    await this.redisClient.ping();
    logger.info("Redis connection test successful");
  } catch (error) {
    logger.error("Failed to initialize Redis client:", error);
    this.isConnected = false;
    throw error;
  } finally {
    this.connectionPromise = null;
  }
}
```
…tor-decorators-pkg
Description
Type of Change
Screenshots and Media (if applicable)
Test Scenarios
References
Summary by CodeRabbit
New Features
Improvements
Chores