feat(serve): add cemdev build tag for live-reload of chrome UI #269
bennypowers wants to merge 1 commit into main from
Conversation
Enables development mode for iterating on chrome UI components without rebuilding the binary. When built with `-tags cemdev`, the server:

- Reads internal modules from disk instead of embed.FS
- Watches serve/middleware/routes/templates/elements/ for changes
- Auto-transpiles TypeScript via esbuild
- Broadcasts reload to WebSocket clients

Usage: `make dev-serve PROJECT=../path/to/project`

Implementation uses build tags to conditionally compile:

- internal_modules_prod.go (default): reads from embed.FS
- internal_modules_dev.go (cemdev): reads from disk + file watcher

Assisted-By: Claude Sonnet 4.5 <noreply@anthropic.com>
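The prod/dev split described above can be sketched as a pair of files with complementary build constraints. This is an illustrative sketch only: the function body, the `embeddedFS` variable, and the exact signature are assumptions, not the PR's actual code.

```go
// internal_modules_prod.go — compiled by default (no tag)
//go:build !cemdev

package serve

// ReadInternalModule returns a UI module's bytes from the
// filesystem embedded into the binary at build time.
func ReadInternalModule(path string) ([]byte, error) {
	return embeddedFS.ReadFile(path) // embeddedFS: hypothetical embed.FS variable
}
```

```go
// internal_modules_dev.go — compiled only with `go build -tags cemdev`
//go:build cemdev

package serve

import "os"

// ReadInternalModule reads the module from disk, so edits are
// picked up without rebuilding the binary.
func ReadInternalModule(path string) ([]byte, error) {
	return os.ReadFile(path)
}
```

Exactly one of the two files is compiled into any given binary, so callers use the same `ReadInternalModule` name in both modes.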
📝 Walkthrough

Introduces a development mode workflow using the `cemdev` build tag.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Browser Client
    participant HTTP as HTTP Server
    participant Routes as routes Handler
    participant DevWatch as Dev Watcher (cemdev)
    participant Transpile as esbuild Transpiler
    participant FS as File System
    participant WS as WebSocket Manager

    Client->>HTTP: Request internal module<br/>(templates/js/cem-element.js)
    HTTP->>Routes: serveInternalModules()
    Routes->>Routes: ReadInternalModule(path)<br/>[overridden in dev mode]
    alt Production Build (!cemdev)
        Routes->>Routes: Read from embedded FS
    else Dev Build (cemdev)
        Routes->>FS: Read from disk via<br/>runtime.Caller()
    end
    Routes-->>HTTP: Module bytes
    HTTP-->>Client: 200 OK + module

    FS->>DevWatch: File change event<br/>(.ts files in elements/)
    DevWatch->>DevWatch: Debounce (300ms)
    DevWatch->>Transpile: TranspileElements(elementsDir)
    Transpile->>FS: Read .ts files<br/>(exclude .test.ts)
    Transpile->>Transpile: esbuild process<br/>(ES2020, inline sourcemaps)
    Transpile->>FS: Write .js outputs
    alt Transpile Success
        DevWatch->>WS: Broadcast reload message
        WS-->>Client: WebSocket reload signal
        Client->>Client: Refresh browser
    else Transpile Error
        DevWatch->>WS: Broadcast error message
        WS-->>Client: WebSocket error signal
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
LSP Benchmark Results

Perf/kb delta ratio: 1.00x 👍

View this benchmark run in GitHub Actions. 💡 Tip: Raw JSON results are available in workflow artifacts if needed.
Actionable comments posted: 3
🧹 Nitpick comments (3)
serve/internal/elements/transpile.go (1)

51-54: Unnecessary slice copy.

The `copy(entryPoints, tsFiles)` is redundant since `tsFiles` can be used directly as `EntryPoints`. This eliminates an allocation.

♻️ Simplify by using tsFiles directly

```diff
-	// Build entry points for esbuild
-	entryPoints := make([]string, len(tsFiles))
-	copy(entryPoints, tsFiles)
-
 	// Run esbuild to transpile all files
 	result := api.Build(api.BuildOptions{
-		EntryPoints: entryPoints,
+		EntryPoints: tsFiles,
 		Outdir:      sourceDir,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `serve/internal/elements/transpile.go` around lines 51-54: the allocation and copy into the local variable `entryPoints` is unnecessary; pass the existing `tsFiles` slice directly as `EntryPoints` (remove the make/copy lines) and update any references to `entryPoints` so no redundant allocation occurs.

serve/DEV_MODE.md (1)

36-41: Add language specifier to fenced code block.

The log output code block is missing a language specifier. Use `text` or `plaintext` for log output to satisfy markdown linters and improve rendering consistency.

📝 Add language specifier

````diff
 The server will log:

-```
+```text
 [INFO] Dev mode: reading internal modules from disk
 [INFO] Dev mode: watching elements directory for changes
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `serve/DEV_MODE.md` around lines 36-41: the fenced code block showing the server log output is missing a language specifier; change the opening fence from ``` to ```text (or ```plaintext) so markdown linters and renderers treat it as plain text and the logs render consistently.

serve/internal_modules_dev_test.go (1)

91-95: Consider using `ServerConfig.Reload = true` instead of manual initialization.

Based on context snippet 1, `wsManager` is only initialized when `config.Reload` is true. Rather than manually initializing `wsManager` after construction, consider using `NewServerWithConfig` with `Reload: true` to get the same behavior as production.

♻️ Suggested alternative

```go
// Instead of NewServer(0) + manual wsManager init:
server, err := NewServerWithConfig(ServerConfig{
    Port:   0,
    Reload: true,
})
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `serve/internal_modules_dev_test.go` around lines 91-95: the test manually initializes `server.wsManager` after calling `NewServer(0)`; instead call `NewServerWithConfig(ServerConfig{Port: 0, Reload: true})` so the server is constructed with reload behavior and `wsManager` set up consistently with production, and remove the manual initialization code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 78b0c678-5bf6-4ac5-a801-249f4d60b891

📒 Files selected for processing (11)

- Makefile
- serve/DEV_MODE.md
- serve/internal/elements/transpile.go
- serve/internal/elements/transpile_test.go
- serve/internal_modules_dev.go
- serve/internal_modules_dev_test.go
- serve/internal_modules_prod.go
- serve/internal_modules_test.go
- serve/middleware/routes/internal_modules.go
- serve/middleware/routes/routes.go
- serve/server.go
```go
go func() {
	defer watcher.Close()

	// Debounce timer to avoid rebuilding on every file save
	var debounceTimer *time.Timer
	const debounceDelay = 300 * time.Millisecond

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}

			// Only react to TypeScript file changes
			if !strings.HasSuffix(event.Name, ".ts") {
				continue
			}

			// Ignore test files
			if strings.HasSuffix(event.Name, ".test.ts") {
				continue
			}

			// Only rebuild on Write events
			if event.Op&fsnotify.Write != fsnotify.Write {
				continue
			}

			server.logger.Debug("Element file changed: %s", event.Name)

			// Reset debounce timer
			if debounceTimer != nil {
				debounceTimer.Stop()
			}

			debounceTimer = time.AfterFunc(debounceDelay, func() {
				server.logger.Info("Transpiling elements...")
				if err := elements.TranspileElements(elementsDir); err != nil {
					server.logger.Error("Failed to transpile elements: %v", err)
					// Broadcast error to clients
					if server.wsManager != nil {
						errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
						_ = server.wsManager.Broadcast([]byte(errMsg))
					}
					return
				}

				server.logger.Info("Elements transpiled successfully, reloading clients...")

				// Trigger reload
				if server.wsManager != nil {
					reloadMsg := []byte(`{"type":"reload"}`)
					_ = server.wsManager.Broadcast(reloadMsg)
				}
			})

		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			server.logger.Error("Element watcher error: %v", err)
		}
	}
}()
```
Goroutine does not respond to server shutdown signal.
The watcher goroutine only exits when watcher.Events or watcher.Errors channels close, but these channels are only closed when watcher.Close() is called—which is deferred until the goroutine exits. This creates a circular dependency causing the goroutine to leak on server shutdown.
Per context snippet 2, Server.Close() signals shutdown via s.shutdown channel. Add a select case to monitor this.
🔧 Proposed fix to handle shutdown

```diff
 // Start the watcher goroutine
 go func() {
 	defer watcher.Close()

 	// Debounce timer to avoid rebuilding on every file save
 	var debounceTimer *time.Timer
 	const debounceDelay = 300 * time.Millisecond

 	for {
 		select {
+		case <-server.Done():
+			server.logger.Debug("Dev watcher shutting down")
+			if debounceTimer != nil {
+				debounceTimer.Stop()
+			}
+			return
+
 		case event, ok := <-watcher.Events:
 			if !ok {
 				return
 			}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
go func() {
	defer watcher.Close()
	// Debounce timer to avoid rebuilding on every file save
	var debounceTimer *time.Timer
	const debounceDelay = 300 * time.Millisecond
	for {
		select {
		case <-server.Done():
			server.logger.Debug("Dev watcher shutting down")
			if debounceTimer != nil {
				debounceTimer.Stop()
			}
			return
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// Only react to TypeScript file changes
			if !strings.HasSuffix(event.Name, ".ts") {
				continue
			}
			// Ignore test files
			if strings.HasSuffix(event.Name, ".test.ts") {
				continue
			}
			// Only rebuild on Write events
			if event.Op&fsnotify.Write != fsnotify.Write {
				continue
			}
			server.logger.Debug("Element file changed: %s", event.Name)
			// Reset debounce timer
			if debounceTimer != nil {
				debounceTimer.Stop()
			}
			debounceTimer = time.AfterFunc(debounceDelay, func() {
				server.logger.Info("Transpiling elements...")
				if err := elements.TranspileElements(elementsDir); err != nil {
					server.logger.Error("Failed to transpile elements: %v", err)
					// Broadcast error to clients
					if server.wsManager != nil {
						errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
						_ = server.wsManager.Broadcast([]byte(errMsg))
					}
					return
				}
				server.logger.Info("Elements transpiled successfully, reloading clients...")
				// Trigger reload
				if server.wsManager != nil {
					reloadMsg := []byte(`{"type":"reload"}`)
					_ = server.wsManager.Broadcast(reloadMsg)
				}
			})
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			server.logger.Error("Element watcher error: %v", err)
		}
	}
}()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@serve/internal_modules_dev.go` around lines 99 - 163, The watcher goroutine
currently only exits when watcher.Events/Errors close, creating a leak because
watcher.Close() is deferred until goroutine exit; modify the goroutine loop to
also select on the server shutdown signal (s.shutdown or server.shutdown) and
handle shutdown by stopping any active debounceTimer (call Stop()), calling
watcher.Close() (once) and returning; add a new case `case <-server.shutdown:`
(or `case <-s.shutdown:`) alongside the existing select so the goroutine
terminates when the server is closing, ensuring you clean up debounceTimer and
avoid double-closing the watcher.
```go
if debounceTimer != nil {
	debounceTimer.Stop()
}
```
Potential race: in-flight debounce callback may still execute after Stop().
`time.Timer.Stop()` returns false if the timer already expired and the callback is queued or running. While unlikely with a 300ms debounce, rapid successive changes could lead to overlapping transpile calls.
Consider using a sync.Mutex or a dedicated debounce channel pattern if concurrent transpilation becomes problematic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@serve/internal_modules_dev.go` around lines 131 - 133, The debounce Stop()
call on debounceTimer can return false while the timer's callback is already
queued/running, allowing an in-flight transpile to run concurrently; protect the
transpile path by introducing synchronization (e.g., a mutex or an in-flight
flag) around the callback and any shared state so callbacks that start after
Stop() complete cannot run concurrently with a new transpile; specifically,
guard the code invoked by the timer callback (the transpile invocation and any
shared variables accessed there) with a sync.Mutex or atomic boolean and
check/lock in both the timer callback and the code path that resets or restarts
debounceTimer to ensure only one transpile runs at a time.
```go
errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
_ = server.wsManager.Broadcast([]byte(errMsg))
```
JSON string is vulnerable to injection if error message contains special characters.

The error message is interpolated directly into a JSON string. If `err` contains quotes or backslashes, the resulting JSON will be malformed.

🛡️ Proposed fix using json.Marshal

```diff
+import "encoding/json"

 // Inside the debounce callback:
-errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
-_ = server.wsManager.Broadcast([]byte(errMsg))
+type errorPayload struct {
+	Type    string `json:"type"`
+	Title   string `json:"title"`
+	Message string `json:"message"`
+}
+payload := errorPayload{
+	Type:    "error",
+	Title:   "Element Transpilation Error",
+	Message: fmt.Sprintf("Failed to transpile elements: %v", err),
+}
+if msgBytes, err := json.Marshal(payload); err == nil {
+	_ = server.wsManager.Broadcast(msgBytes)
+}
```

🤖 Prompt for AI Agents
+}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@serve/internal_modules_dev.go` around lines 141 - 142, The current use of
fmt.Sprintf to build errMsg interpolates err directly into a JSON string (the
fmt.Sprintf(...) that is then passed to server.wsManager.Broadcast), which can
produce malformed or unsafe JSON if err contains quotes or backslashes; fix it
by constructing a struct or map (e.g., fields "type","title","message") and
marshal it with json.Marshal to produce a safe []byte, then pass that byte slice
to server.wsManager.Broadcast instead of the fmt.Sprintf-produced string so the
error text in err is properly escaped.
Summary

- `cemdev` build tag for live-reload of chrome UI assets during development
- Watches serve/middleware/routes/templates/elements/ and auto-transpiles on change
- Transpilation via esbuild in serve/internal/elements/transpile.go

Usage

Test plan

- `go test ./serve/ -run TestReadInternalModule_Production` - production mode reads from embed.FS
- `go test -tags cemdev ./serve/ -run TestReadInternalModule_Dev` - dev mode reads from disk
- `go test -tags cemdev ./serve/ -run TestSetupDevWatcher_Dev` - dev watcher sets up correctly
- `go test ./serve/internal/elements/` - transpilation tests

Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Documentation