
feat(serve): add cemdev build tag for live-reload of chrome UI #269

Open
bennypowers wants to merge 1 commit into main from feat/cemdev-live-reload

Conversation

@bennypowers
Owner

@bennypowers bennypowers commented Mar 19, 2026

Summary

  • Adds cemdev build tag for live-reload of chrome UI assets during development
  • Watches TypeScript files in serve/middleware/routes/templates/elements/ and auto-transpiles on change
  • Broadcasts reload signal to WebSocket clients when files change
  • Reusable esbuild transpilation logic in serve/internal/elements/transpile.go

Usage

```shell
make dev-serve PROJECT=../path/to/project
```

Test plan

  • `go test ./serve/ -run TestReadInternalModule_Production` - production mode reads from embed.FS
  • `go test -tags cemdev ./serve/ -run TestReadInternalModule_Dev` - dev mode reads from disk
  • `go test -tags cemdev ./serve/ -run TestSetupDevWatcher_Dev` - dev watcher sets up correctly
  • `go test ./serve/internal/elements/` - transpilation tests

Generated with Claude Code

Summary by CodeRabbit

Release Notes

  • New Features

    • Added development mode with live-reload capability for UI asset development
    • TypeScript files automatically compile when modified during development sessions
    • Browser automatically refreshes when UI changes are detected
  • Documentation

    • Added development workflow documentation covering setup, usage, and troubleshooting

Enables development mode for iterating on chrome UI components without
rebuilding the binary. When built with -tags cemdev, the server:

- Reads internal modules from disk instead of embed.FS
- Watches serve/middleware/routes/templates/elements/ for changes
- Auto-transpiles TypeScript via esbuild
- Broadcasts reload to WebSocket clients

Usage:
  make dev-serve PROJECT=../path/to/project

Implementation uses build tags to conditionally compile:
- internal_modules_prod.go (default): reads from embed.FS
- internal_modules_dev.go (cemdev): reads from disk + file watcher

Assisted-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@coderabbitai

coderabbitai Bot commented Mar 19, 2026

📝 Walkthrough

Introduces a development mode workflow using the cemdev build tag that enables live-reloading of embedded Chrome UI assets. The implementation includes TypeScript transpilation via esbuild, file watching with WebSocket notifications, and a module-reading abstraction layer that switches between disk-based (dev) and embedded (production) asset sources.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Build Configuration**<br>`Makefile` | Added `dev-serve` phony target that builds with the `cemdev` tag, depends on `generate`, and ensures the `dist/` directory exists. |
| **Development Documentation**<br>`serve/DEV_MODE.md` | New guide describing the `cemdev` build-tag workflow for live-reloading: file watching over TypeScript templates, transpilation via esbuild, and WebSocket-based browser reload signaling. |
| **TypeScript Transpilation**<br>`serve/internal/elements/transpile.go`, `serve/internal/elements/transpile_test.go` | Added `TranspileElements(sourceDir string) error`, which recursively processes `.ts` files (excluding `.test.ts`) using esbuild, with comprehensive tests covering single/multiple files, test-file filtering, and subdirectory transpilation. |
| **Dev Mode Runtime**<br>`serve/internal_modules_dev.go`, `serve/internal_modules_dev_test.go` | New `cemdev`-tagged file implementing `readInternalModule()` to read assets from disk and `setupDevWatcher()` to monitor `templates/elements/**/*.ts` for changes, transpile on write, debounce (300ms), and broadcast reload/error signals via WebSocket. |
| **Production Mode Runtime**<br>`serve/internal_modules_prod.go` | New `!cemdev`-tagged file providing production implementations: `readInternalModule()` reads from the embedded filesystem; `setupDevWatcher()` is a no-op. |
| **Shared Module Tests**<br>`serve/internal_modules_test.go` | New production-mode tests verifying that embedded internal modules load correctly and `setupDevWatcher` succeeds without side effects in production builds. |
| **Module Reading Abstraction**<br>`serve/middleware/routes/internal_modules.go` | Introduced a `ReadInternalModule` function variable (defaulting to reading from the `InternalModules` embedded FS) allowing runtime override for dev/test modes. |
| **Route Integration**<br>`serve/middleware/routes/routes.go` | Updated `serveInternalModules()` to call `ReadInternalModule(path)` instead of directly accessing `InternalModules.ReadFile(path)`. |
| **Server Integration**<br>`serve/server.go` | Modified `Server.Start()` to invoke `setupDevWatcher(s)` early in startup (after cache stats, before the HTTP server begins); logs any watcher setup errors without failing server start. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Browser Client
    participant HTTP as HTTP Server
    participant Routes as routes Handler
    participant DevWatch as Dev Watcher (cemdev)
    participant Transpile as esbuild Transpiler
    participant FS as File System
    participant WS as WebSocket Manager

    Client->>HTTP: Request internal module<br/>(templates/js/cem-element.js)
    HTTP->>Routes: serveInternalModules()
    Routes->>Routes: ReadInternalModule(path)<br/>[overridden in dev mode]
    alt Production Build (!cemdev)
        Routes->>Routes: Read from embedded FS
    else Dev Build (cemdev)
        Routes->>FS: Read from disk via<br/>runtime.Caller()
    end
    Routes-->>HTTP: Module bytes
    HTTP-->>Client: 200 OK + module

    FS->>DevWatch: File change event<br/>(.ts files in elements/)
    DevWatch->>DevWatch: Debounce (300ms)
    DevWatch->>Transpile: TranspileElements(elementsDir)
    Transpile->>FS: Read .ts files<br/>(exclude .test.ts)
    Transpile->>Transpile: esbuild process<br/>(ES2020, inline sourcemaps)
    Transpile->>FS: Write .js outputs
    alt Transpile Success
        DevWatch->>WS: Broadcast reload message
        WS-->>Client: WebSocket reload signal
        Client->>Client: Refresh browser
    else Transpile Error
        DevWatch->>WS: Broadcast error message
        WS-->>Client: WebSocket error signal
    end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related PRs

  • PR #132: Extends the existing dev server functionality that this PR builds upon with the new cemdev build-tag, file watcher, and transpilation pipeline.
  • PR #186: Modifies file-watcher and watcher-based server lifecycle logic that overlaps with the dev watcher setup and shutdown/cleanup behavior introduced here.
  • PR #135: Changes to routes/templates and internal module handling directly relate to the new ReadInternalModule abstraction and asset-reading override mechanism.

Poem

🐰 A dev mode hops along the path,
TypeScript transpiles, avoiding wrath,
File changes trigger esbuild's dance,
WebSockets whisper reload's chance,
From disk or embed, modules stay bright,
Live-reloading everything just right! ✨

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: adding a cemdev build tag for live-reload functionality of the Chrome UI, which is the primary purpose of this pull request. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 83.33%, which is sufficient. The required threshold is 80.00%. |


@github-actions

LSP Benchmark Results

| Benchmark | PR Mean (ms) | Base Mean (ms) | Delta | Success Rate | Status |
|---|---|---|---|---|---|
| Startup | 3.817527 | 3.653194 | 0.16 (4.5%) | 100% | ➖ |
| Hover | 0.632563 | 0.557862 | 0.07 (13.4%) | 100% | 🐢 |
| Completion | 2.560421 | 2.413456 | 0.15 (6.1%) | 100% | ⚠️ |
| Diagnostics | 2039.881795 | 2039.597761 | 0.28 (0.0%) | 100% | ➖ |
| Attribute Hover | 0 | 0 | 0.00 (0.0%) | 100% | ➖ |
| References | 33.2644 | 32.731 | 0.53 (1.6%) | 100% | ➖ |

View this benchmark run in GitHub Actions

💡 Tip: Raw JSON results are available in workflow artifacts if needed.


Generate Benchmarks

| Branch | Total Time (s) | # Runs | Avg Time/run (s) | Output Size (kb) | Perf/kb (s/kb) |
|---|---|---|---|---|---|
| Base `main` | 4.27014 | 6 | 0.71169 | 156 | 0.0273727 |
| PR `feat/cemdev-live-reload` | 4.26072 | 6 | 0.71012 | 156 | 0.0273123 |
| Δ | -0.0094 | 0 | -0.0016 | 0 | -0.0001 👍 |

Perf/kb delta ratio: 1.00x 👍

View this benchmark run in GitHub Actions

💡 Tip: Raw JSON outputs are available in workflow artifacts if needed.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (3)
serve/internal/elements/transpile.go (1)

51-54: Unnecessary slice copy.

The copy(entryPoints, tsFiles) is redundant since tsFiles can be used directly as EntryPoints. This eliminates an allocation.

♻️ Simplify by using `tsFiles` directly

```diff
-	// Build entry points for esbuild
-	entryPoints := make([]string, len(tsFiles))
-	copy(entryPoints, tsFiles)
-
 	// Run esbuild to transpile all files
 	result := api.Build(api.BuildOptions{
-		EntryPoints: entryPoints,
+		EntryPoints: tsFiles,
 		Outdir:      sourceDir,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@serve/internal/elements/transpile.go` around lines 51 - 54, The allocation
and copy into the local variable entryPoints is unnecessary; instead of creating
entryPoints and calling copy(entryPoints, tsFiles), use the existing tsFiles
slice directly where EntryPoints is required (remove the make/copy lines and
pass tsFiles as the EntryPoints value). Update references that use entryPoints
(the variable in transpile.go) to use tsFiles so no redundant allocation occurs.
serve/DEV_MODE.md (1)

36-41: Add language specifier to fenced code block.

The log output code block is missing a language specifier. Use text or plaintext for log output to satisfy markdown linters and improve rendering consistency.

📝 Add language specifier

~~~diff
 The server will log:
-```
+```text
 [INFO] Dev mode: reading internal modules from disk
 [INFO] Dev mode: watching elements directory for changes
~~~

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@serve/DEV_MODE.md` around lines 36 - 41, The fenced code block that shows the server log output (the block containing "[INFO] Dev mode: reading internal modules from disk" and "[INFO] Dev mode: watching elements directory for changes") is missing a language specifier; update the opening fence from `` ``` `` to `` ```text `` (or `` ```plaintext ``) so markdown linters and renderers treat it as plain text and the logs render consistently.
serve/internal_modules_dev_test.go (1)

91-95: **Consider using `ServerConfig.Reload = true` instead of manual initialization.**

Based on context snippet 1, `wsManager` is only initialized when `config.Reload` is true. Rather than manually initializing `wsManager` after construction, consider using `NewServerWithConfig` with `Reload: true` to get the same behavior as production.

♻️ Suggested alternative

```go
// Instead of NewServer(0) + manual wsManager init:
server, err := NewServerWithConfig(ServerConfig{
    Port:   0,
    Reload: true,
})
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@serve/internal_modules_dev_test.go` around lines 91 - 95, The test manually initializes server.wsManager after calling NewServer(0); instead call NewServerWithConfig with ServerConfig{Port: 0, Reload: true} so the server is constructed with reload behavior and wsManager set up consistently with production; replace the NewServer(0) + manual server.wsManager = newWebSocketManager() / SetLogger(...) sequence with a single call to NewServerWithConfig(ServerConfig{Port:0, Reload:true}) and remove the manual initialization code, keeping references to server, NewServerWithConfig, ServerConfig, and wsManager to locate the change.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @serve/internal_modules_dev.go:

  • Around line 131-133: The debounce Stop() call on debounceTimer can return
    false while the timer's callback is already queued/running, allowing an
    in-flight transpile to run concurrently; protect the transpile path by
    introducing synchronization (e.g., a mutex or an in-flight flag) around the
    callback and any shared state so callbacks that start after Stop() complete
    cannot run concurrently with a new transpile; specifically, guard the code
    invoked by the timer callback (the transpile invocation and any shared variables
    accessed there) with a sync.Mutex or atomic boolean and check/lock in both the
    timer callback and the code path that resets or restarts debounceTimer to ensure
    only one transpile runs at a time.
  • Around line 99-163: The watcher goroutine currently only exits when
    watcher.Events/Errors close, creating a leak because watcher.Close() is deferred
    until goroutine exit; modify the goroutine loop to also select on the server
    shutdown signal (s.shutdown or server.shutdown) and handle shutdown by stopping
    any active debounceTimer (call Stop()), calling watcher.Close() (once) and
    returning; add a new case case <-server.shutdown: (or case <-s.shutdown:)
    alongside the existing select so the goroutine terminates when the server is
    closing, ensuring you clean up debounceTimer and avoid double-closing the
    watcher.
  • Around line 141-142: The current use of fmt.Sprintf to build errMsg
    interpolates err directly into a JSON string (the fmt.Sprintf(...) that is then
    passed to server.wsManager.Broadcast), which can produce malformed or unsafe
    JSON if err contains quotes or backslashes; fix it by constructing a struct or
    map (e.g., fields "type","title","message") and marshal it with json.Marshal to
    produce a safe []byte, then pass that byte slice to server.wsManager.Broadcast
    instead of the fmt.Sprintf-produced string so the error text in err is properly
    escaped.

Nitpick comments:
In @serve/DEV_MODE.md:

  • Around line 36-41: The fenced code block that shows the server log output (the
    block containing "[INFO] Dev mode: reading internal modules from disk" and
    "[INFO] Dev mode: watching elements directory for changes") is missing a
    language specifier; update the opening fence from `` ``` `` to `` ```text ``
    (or `` ```plaintext ``) so markdown linters and renderers treat it as plain
    text and the logs render consistently.

In `@serve/internal_modules_dev_test.go`:
- Around line 91-95: The test manually initializes server.wsManager after
calling NewServer(0); instead call NewServerWithConfig with ServerConfig{Port:
0, Reload: true} so the server is constructed with reload behavior and wsManager
set up consistently with production; replace the NewServer(0) + manual
server.wsManager = newWebSocketManager() / SetLogger(...) sequence with a single
call to NewServerWithConfig(ServerConfig{Port:0, Reload:true}) and remove the
manual initialization code, keeping references to server, NewServerWithConfig,
ServerConfig, and wsManager to locate the change.

In `@serve/internal/elements/transpile.go`:
- Around line 51-54: The allocation and copy into the local variable entryPoints
is unnecessary; instead of creating entryPoints and calling copy(entryPoints,
tsFiles), use the existing tsFiles slice directly where EntryPoints is required
(remove the make/copy lines and pass tsFiles as the EntryPoints value). Update
references that use entryPoints (the variable in transpile.go) to use tsFiles so
no redundant allocation occurs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 78b0c678-5bf6-4ac5-a801-249f4d60b891

📥 Commits

Reviewing files that changed from the base of the PR and between 49db066 and 876d498.

📒 Files selected for processing (11)
  • Makefile
  • serve/DEV_MODE.md
  • serve/internal/elements/transpile.go
  • serve/internal/elements/transpile_test.go
  • serve/internal_modules_dev.go
  • serve/internal_modules_dev_test.go
  • serve/internal_modules_prod.go
  • serve/internal_modules_test.go
  • serve/middleware/routes/internal_modules.go
  • serve/middleware/routes/routes.go
  • serve/server.go

Comment on lines +99 to +163
```go
go func() {
	defer watcher.Close()

	// Debounce timer to avoid rebuilding on every file save
	var debounceTimer *time.Timer
	const debounceDelay = 300 * time.Millisecond

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}

			// Only react to TypeScript file changes
			if !strings.HasSuffix(event.Name, ".ts") {
				continue
			}

			// Ignore test files
			if strings.HasSuffix(event.Name, ".test.ts") {
				continue
			}

			// Only rebuild on Write events
			if event.Op&fsnotify.Write != fsnotify.Write {
				continue
			}

			server.logger.Debug("Element file changed: %s", event.Name)

			// Reset debounce timer
			if debounceTimer != nil {
				debounceTimer.Stop()
			}

			debounceTimer = time.AfterFunc(debounceDelay, func() {
				server.logger.Info("Transpiling elements...")
				if err := elements.TranspileElements(elementsDir); err != nil {
					server.logger.Error("Failed to transpile elements: %v", err)
					// Broadcast error to clients
					if server.wsManager != nil {
						errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
						_ = server.wsManager.Broadcast([]byte(errMsg))
					}
					return
				}

				server.logger.Info("Elements transpiled successfully, reloading clients...")

				// Trigger reload
				if server.wsManager != nil {
					reloadMsg := []byte(`{"type":"reload"}`)
					_ = server.wsManager.Broadcast(reloadMsg)
				}
			})

		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			server.logger.Error("Element watcher error: %v", err)
		}
	}
}()
```

⚠️ Potential issue | 🟠 Major

Goroutine does not respond to server shutdown signal.

The watcher goroutine only exits when watcher.Events or watcher.Errors channels close, but these channels are only closed when watcher.Close() is called—which is deferred until the goroutine exits. This creates a circular dependency causing the goroutine to leak on server shutdown.

Per context snippet 2, Server.Close() signals shutdown via s.shutdown channel. Add a select case to monitor this.

🔧 Proposed fix to handle shutdown

```diff
 	// Start the watcher goroutine
 	go func() {
 		defer watcher.Close()

 		// Debounce timer to avoid rebuilding on every file save
 		var debounceTimer *time.Timer
 		const debounceDelay = 300 * time.Millisecond

 		for {
 			select {
+			case <-server.Done():
+				server.logger.Debug("Dev watcher shutting down")
+				if debounceTimer != nil {
+					debounceTimer.Stop()
+				}
+				return
+
 			case event, ok := <-watcher.Events:
 				if !ok {
 					return
 				}
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```go
go func() {
	defer watcher.Close()

	// Debounce timer to avoid rebuilding on every file save
	var debounceTimer *time.Timer
	const debounceDelay = 300 * time.Millisecond

	for {
		select {
		case <-server.Done():
			server.logger.Debug("Dev watcher shutting down")
			if debounceTimer != nil {
				debounceTimer.Stop()
			}
			return

		case event, ok := <-watcher.Events:
			if !ok {
				return
			}

			// Only react to TypeScript file changes
			if !strings.HasSuffix(event.Name, ".ts") {
				continue
			}

			// Ignore test files
			if strings.HasSuffix(event.Name, ".test.ts") {
				continue
			}

			// Only rebuild on Write events
			if event.Op&fsnotify.Write != fsnotify.Write {
				continue
			}

			server.logger.Debug("Element file changed: %s", event.Name)

			// Reset debounce timer
			if debounceTimer != nil {
				debounceTimer.Stop()
			}

			debounceTimer = time.AfterFunc(debounceDelay, func() {
				server.logger.Info("Transpiling elements...")
				if err := elements.TranspileElements(elementsDir); err != nil {
					server.logger.Error("Failed to transpile elements: %v", err)
					// Broadcast error to clients
					if server.wsManager != nil {
						errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
						_ = server.wsManager.Broadcast([]byte(errMsg))
					}
					return
				}

				server.logger.Info("Elements transpiled successfully, reloading clients...")

				// Trigger reload
				if server.wsManager != nil {
					reloadMsg := []byte(`{"type":"reload"}`)
					_ = server.wsManager.Broadcast(reloadMsg)
				}
			})

		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			server.logger.Error("Element watcher error: %v", err)
		}
	}
}()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@serve/internal_modules_dev.go` around lines 99 - 163, The watcher goroutine
currently only exits when watcher.Events/Errors close, creating a leak because
watcher.Close() is deferred until goroutine exit; modify the goroutine loop to
also select on the server shutdown signal (s.shutdown or server.shutdown) and
handle shutdown by stopping any active debounceTimer (call Stop()), calling
watcher.Close() (once) and returning; add a new case `case <-server.shutdown:`
(or `case <-s.shutdown:`) alongside the existing select so the goroutine
terminates when the server is closing, ensuring you clean up debounceTimer and
avoid double-closing the watcher.

Comment on lines +131 to +133

```go
if debounceTimer != nil {
	debounceTimer.Stop()
}
```

⚠️ Potential issue | 🟡 Minor

Potential race: in-flight debounce callback may still execute after Stop().

time.Timer.Stop() returns false if the timer already expired and the callback is queued or running. While unlikely with a 300ms debounce, rapid successive changes could lead to overlapping transpile calls.

Consider using a sync.Mutex or a dedicated debounce channel pattern if concurrent transpilation becomes problematic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@serve/internal_modules_dev.go` around lines 131 - 133, The debounce Stop()
call on debounceTimer can return false while the timer's callback is already
queued/running, allowing an in-flight transpile to run concurrently; protect the
transpile path by introducing synchronization (e.g., a mutex or an in-flight
flag) around the callback and any shared state so callbacks that start after
Stop() complete cannot run concurrently with a new transpile; specifically,
guard the code invoked by the timer callback (the transpile invocation and any
shared variables accessed there) with a sync.Mutex or atomic boolean and
check/lock in both the timer callback and the code path that resets or restarts
debounceTimer to ensure only one transpile runs at a time.

Comment on lines +141 to +142

```go
errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
_ = server.wsManager.Broadcast([]byte(errMsg))
```

⚠️ Potential issue | 🟡 Minor

JSON string is vulnerable to injection if error message contains special characters.

The error message is interpolated directly into a JSON string. If err contains quotes or backslashes, the resulting JSON will be malformed.

🛡️ Proposed fix using `json.Marshal`

```diff
+import "encoding/json"

 // Inside the debounce callback:
-errMsg := fmt.Sprintf(`{"type":"error","title":"Element Transpilation Error","message":"Failed to transpile elements: %v"}`, err)
-_ = server.wsManager.Broadcast([]byte(errMsg))
+type errorPayload struct {
+	Type    string `json:"type"`
+	Title   string `json:"title"`
+	Message string `json:"message"`
+}
+payload := errorPayload{
+	Type:    "error",
+	Title:   "Element Transpilation Error",
+	Message: fmt.Sprintf("Failed to transpile elements: %v", err),
+}
+if msgBytes, err := json.Marshal(payload); err == nil {
+	_ = server.wsManager.Broadcast(msgBytes)
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@serve/internal_modules_dev.go` around lines 141 - 142, The current use of
fmt.Sprintf to build errMsg interpolates err directly into a JSON string (the
fmt.Sprintf(...) that is then passed to server.wsManager.Broadcast), which can
produce malformed or unsafe JSON if err contains quotes or backslashes; fix it
by constructing a struct or map (e.g., fields "type","title","message") and
marshal it with json.Marshal to produce a safe []byte, then pass that byte slice
to server.wsManager.Broadcast instead of the fmt.Sprintf-produced string so the
error text in err is properly escaped.
