Merged
16 changes: 14 additions & 2 deletions internal/logger/logger.go
@@ -211,11 +211,21 @@ func (l *FunLogger) Loading(format string, a ...any) context.CancelCauseFunc {
 		return cancel
 	}
 	l.Wg.Add(1)
+	idx := len(l.activeCancels)
 	l.activeCancels = append(l.activeCancels, cancel)
 	l.mu.Unlock()
 
 	go l.runLoading(ctx, fmt.Sprintf(format, a...))
-	return cancel
+
+	// Return a wrapper that cancels and marks the slot as nil
+	return func(cause error) {
+		cancel(cause)
+		l.mu.Lock()
+		if idx < len(l.activeCancels) {
Copilot AI commented on Feb 12, 2026:
There is a potential race condition if the wrapper cancel function is called after Exit() has set l.activeCancels = nil. The bounds check if idx < len(l.activeCancels) will correctly prevent a panic (since len(nil) is 0), but this is a subtle correctness property that may not be immediately obvious to future maintainers.

Consider adding a comment explaining this safety property, or explicitly check for nil:

	l.mu.Lock()
	if l.activeCancels != nil && idx < len(l.activeCancels) {
		l.activeCancels[idx] = nil
	}
	l.mu.Unlock()

This makes the intention clearer and is more defensive against future refactoring.

Suggested change:
-	if idx < len(l.activeCancels) {
+	if l.activeCancels != nil && idx < len(l.activeCancels) {

+			l.activeCancels[idx] = nil
Comment on lines +224 to +225
Copilot AI commented on Feb 12, 2026:

While this change successfully prevents the unbounded growth of cancel function references (which hold closures and contexts), the activeCancels slice capacity itself will still grow with each Loading() call. The memory overhead of a slice of nil pointers is minimal compared to the closures, so this effectively addresses the main memory leak issue.

However, for long-running applications with many transient loading operations, you might consider adding a compaction mechanism. For example, periodically removing nil entries when they exceed a certain threshold, or truncating the slice in specific scenarios. Given that holodeck appears to be primarily a CLI tool that exits after operations complete, this may not be necessary.

Suggested change:
-	if idx < len(l.activeCancels) {
-		l.activeCancels[idx] = nil
+	if idx < len(l.activeCancels) {
+		// Mark this slot as inactive
+		l.activeCancels[idx] = nil
+		// Compact trailing nil entries so the slice length (and eventually
+		// its backing array) does not grow without bound across many
+		// transient loading operations.
+		for len(l.activeCancels) > 0 {
+			lastIdx := len(l.activeCancels) - 1
+			if l.activeCancels[lastIdx] != nil {
+				break
+			}
+			l.activeCancels = l.activeCancels[:lastIdx]
+		}
+		// When all entries are inactive, reset the slice so the backing
+		// array can be garbage-collected and capacity reset.
+		if len(l.activeCancels) == 0 {
+			l.activeCancels = nil
+		}

+		}
+		l.mu.Unlock()
+	}
Comment on lines +220 to +228
Copilot AI commented on Feb 12, 2026:
The new behavior of nil-ing out cancel function entries lacks test coverage. While existing tests verify that Loading works correctly, there are no tests that verify:

  1. The cancel function slot is actually set to nil after calling the returned wrapper
  2. Multiple sequential Loading calls don't leak memory via the slice
  3. Calling the wrapper after Exit() doesn't cause issues

Consider adding a test that verifies the nil-ing behavior and slice state after cancel is called.

 }
 
 func (l *FunLogger) runLoading(ctx context.Context, message string) {
@@ -277,7 +287,9 @@ func (l *FunLogger) Exit(code int) {
 	l.mu.Lock()
 	l.exited = true
 	for _, cancel := range l.activeCancels {
-		cancel(nil)
+		if cancel != nil {
+			cancel(nil)
+		}
 	}
 	l.activeCancels = nil
 	l.mu.Unlock()