feat(cli): serve databases during suga dev#139
Conversation
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds a Docker-backed PostgreSQL lifecycle manager (public `DatabaseManager` and `DatabaseInfo`; `NewDatabaseManager`, `Start`, `Stop`, `RemoveVolume`, `CreateDatabase`, `GetConnectionString`, `GetPort`, `GetEnvVarKey`, `GetDatabases`). Updates `cli/go.mod` with docker, lib/pq, and numerous dependency version bumps. Simulation now creates and starts databases before services, injects DB connection env vars into service intents, tracks service goroutines with a wait group, and exposes `Stop` to shut down services then databases. Service start/signal handling now propagates `intent.Env` and avoids signaling nil processes. Dev gains SIGINT/SIGTERM graceful shutdown. Port allocation now treats `maxPort` as inclusive.

Suggested reviewers
Pre-merge checks: ✅ Passed checks (2 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Force-pushed 21c33fe to ebfe725
Force-pushed ebfe725 to 48fb8e8
Actionable comments posted: 2
🧹 Nitpick comments (2)
cli/go.mod (1)
36-169: Transitive dependency cascade is expected but warrants a spot-check. The large number of indirect updates (container tooling, OpenTelemetry, moby/docker internals, OpenContainers, and protobuf-related packages) is typical when pulling in docker/docker. Confirm that no breaking changes occur in existing integration points (e.g., services/engines using gRPC or observability stacks).
Consider running dependency audits and integration tests locally to catch any version conflicts before merge. If you haven't already, use `go mod tidy` and `go mod verify` to ensure consistency.
cli/internal/simulation/simulation.go (1)
286-303: Clone the intent env map before mutating it
`intentCopy.Env` still points at the original service intent map, so any injected DB vars leak back into the shared app spec. Please allocate a new map and copy the existing entries before adding DB-specific values. Apply this diff:
```diff
-	intentCopy := *serviceIntent
-	if intentCopy.Env == nil {
-		intentCopy.Env = make(map[string]string)
-	}
+	intentCopy := *serviceIntent
+	intentCopy.Env = make(map[string]string, len(serviceIntent.Env))
+	for k, v := range serviceIntent.Env {
+		intentCopy.Env[k] = v
+	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (6 hunks)
- cli/pkg/app/suga.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, there are two different contexts for SugaIntro Dashboard output: 1) Simulation server (simulation.go) where Dashboard was correctly removed because it was non-functional, and 2) Development WebSocket server (suga.go) where Dashboard URL is valid and needed for syncing with the dashboard using team slug and port parameters.
Applied to files:
cli/pkg/app/suga.go
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, the Dashboard output should be removed from simulation context (simulation.go) but kept for the edit command context (suga.go). The edit command has a valid use case for showing the Dashboard URL with /dev.
Applied to files:
cli/pkg/app/suga.go
🧬 Code graph analysis (2)
cli/internal/simulation/database/database.go (3)
cli/internal/netx/net.go (3)
  ReservedPort (52-52), ReservePort (68-70), GetNextPort (72-93)
cli/pkg/schema/database.go (1)
  DatabaseIntent (3-9)
cli/internal/style/style.go (1)
  Cyan (19-19)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
  ReservedPort (52-52)
cli/internal/simulation/service/service.go (2)
  ServiceSimulation (20-37), NewServiceSimulation (272-292)
cli/internal/simulation/database/database.go (2)
  DatabaseManager (33-41), NewDatabaseManager (50-89)
🔇 Additional comments (3)
cli/go.mod (3)
14-14: lib/pq version is appropriate for PostgreSQL support. Version v1.10.9 is a stable, well-maintained PostgreSQL driver release. No concerns here.
26-26: Updated direct dependencies look reasonable. Testify (v1.11.1), golang.org/x/net (v0.43.0), and google.golang.org/grpc (v1.75.0) are all reasonable updates. Confirm compatibility in adjacent service/engine code if changes affect gRPC or networking.
Also applies to: 29-29, 30-30
8-8: No issues found: docker/docker v28.5.1 is secure and compatible. The security advisory check confirms v28.5.1 has no applicable CVEs. The `+incompatible` suffix is standard for this package and poses no compatibility risk.
Force-pushed 48fb8e8 to 1db3e74
Actionable comments posted: 2
🧹 Nitpick comments (3)
cli/go.mod (1)
8-8: Current choice is sound; consider `go-dockerclient` for a lighter footprint. Docker v28.5.1 was released 2025-10-08, so this is current. The `+incompatible` marker is standard; the docker/docker library doesn't follow semantic versioning by design. The direct usage in `cli/internal/simulation/database/database.go` (container, image, mount, volume, client APIs) justifies the dependency. That said, fsouza/go-dockerclient is a community Go client for the Docker Engine API and is widely used as a lighter alternative if the team wants to minimize transitive dependencies or avoid the `+incompatible` marker.
cli/internal/simulation/database/database.go (2)
171-195: Consider exponential backoff for retries. Fixed-interval polling works, but exponential backoff would be more robust for slower environments.
Apply this diff:
```diff
 func (m *DatabaseManager) waitForPostgres() error {
 	connStr := fmt.Sprintf("host=localhost port=%d user=%s password=%s dbname=%s sslmode=disable",
 		m.port, postgresUser, postgresPassword, postgresDB)
 	maxRetries := 30
+	backoff := 100 * time.Millisecond
 	for i := 0; i < maxRetries; i++ {
 		db, err := sql.Open("postgres", connStr)
 		if err != nil {
-			time.Sleep(500 * time.Millisecond)
+			time.Sleep(backoff)
+			backoff = min(backoff*2, 2*time.Second)
 			continue
 		}
 		err = db.Ping()
 		db.Close()
 		if err == nil {
 			return nil
 		}
-		time.Sleep(500 * time.Millisecond)
+		time.Sleep(backoff)
+		backoff = min(backoff*2, 2*time.Second)
 	}
 	return fmt.Errorf("PostgreSQL did not become ready within timeout")
 }
```
279-286: Consider returning databases in deterministic order. Map iteration is non-deterministic. Sort the result if consistent ordering matters for logs, tests, or env var injection.
Apply this diff:
```diff
+import (
+	"sort"
+	// ... other imports
+)
 func (m *DatabaseManager) GetDatabases() []string {
 	names := make([]string, 0, len(m.databases))
 	for name := range m.databases {
 		names = append(names, name)
 	}
+	sort.Strings(names)
 	return names
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (6 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- cli/pkg/app/suga.go
- cli/internal/netx/net.go
- cli/internal/simulation/simulation.go
🧰 Additional context used
🧬 Code graph analysis (1)
cli/internal/simulation/database/database.go (2)
cli/internal/netx/net.go (3)
  ReservedPort (52-52), ReservePort (68-70), GetNextPort (72-93)
cli/pkg/schema/database.go (1)
  DatabaseIntent (3-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build (windows-latest, amd64)
- GitHub Check: Build (macos-latest, arm64)
- GitHub Check: Security Scan
- GitHub Check: Test
🔇 Additional comments (9)
cli/go.mod (1)
29-30: Verify significant version bumps don't introduce breaking changes. The updates to `golang.org/x/net v0.43.0`, `google.golang.org/grpc v1.75.0`, and `testify v1.11.1` are sizable bumps. Confirm that the gRPC upgrade especially hasn't broken existing proto-based service communication or transport logic in the codebase.
Also applies to: 26-26
cli/internal/simulation/database/database.go (8)
24-29: Constants look good. Hardcoded credentials are acceptable for local dev, and the pinned image tag ensures reproducibility.
31-45: Struct design is sound. Proper encapsulation with exported types and unexported fields.
47-79: Constructor logic is correct. Port reservation with fallback is well-handled. The background context has no cancellation, but that's acceptable for this lifecycle manager.
81-169: Start method looks solid. Volume creation, image pull, and container lifecycle are properly managed. AutoRemove is enabled, which keeps the environment clean.
197-229: Database creation is properly secured. Good fix using `pq.QuoteIdentifier` on line 212 to safely handle special characters in database names. This addresses the previous review concern about kebab-case names.
242-258: Cleanup methods are correct. Stop handles missing containerID gracefully, and RemoveVolume's force flag is appropriate given the documented data loss warning.
260-277: Accessors look good. Connection string formatting and lookups are straightforward.
288-309: Volume name sanitization is thorough. Handles edge cases correctly and follows Docker volume naming constraints.
Force-pushed 1db3e74 to 1750255
Actionable comments posted: 0
♻️ Duplicate comments (1)
cli/internal/simulation/database/database.go (1)
231-240: Use type assertion for error code checking. String matching is fragile. The past review comment already suggested checking the PostgreSQL error code `42P04` via type assertion against `*pq.Error`. This is more reliable than string matching. Apply the fix from the past review:
```diff
 func isDatabaseExistsError(err error) bool {
-	// PostgreSQL error code 42P04 is "duplicate_database"
 	if err == nil {
 		return false
 	}
-	errMsg := err.Error()
-	return strings.Contains(errMsg, "already exists") ||
-		strings.Contains(errMsg, "duplicate database")
+	// PostgreSQL error code 42P04 is "duplicate_database"
+	if pqErr, ok := err.(*pq.Error); ok {
+		return pqErr.Code == "42P04"
+	}
+	return false
 }
```
🧹 Nitpick comments (3)
cli/internal/simulation/database/database.go (3)
31-39: Consider context cancellation for cleanup. The `ctx` field stores a background context without a cancel function. This prevents graceful cancellation of long-running Docker operations. Consider storing both context and cancel function, or deriving child contexts with timeouts for individual operations.

```diff
 type DatabaseManager struct {
 	dockerClient *client.Client
 	containerID  string
 	volumeName   string
 	port         netx.ReservedPort
 	databases    map[string]*DatabaseInfo
 	ctx          context.Context
+	cancel       context.CancelFunc
 }
```

Then in the constructor and Stop method:

```go
// In NewDatabaseManager:
ctx, cancel := context.WithCancel(context.Background())
return &DatabaseManager{
	// ...
	ctx:    ctx,
	cancel: cancel,
}

// In Stop:
if m.cancel != nil {
	m.cancel()
}
```
100-109: Consider showing image pull progress. The message warns users this may take a moment, but the pull progress is discarded. For a better developer experience, consider writing the progress to stdout or showing a simple progress indicator.

```diff
-	// Wait for image pull to complete
-	_, _ = io.Copy(io.Discard, reader)
+	// Show image pull progress
+	_, _ = io.Copy(os.Stdout, reader)
```

Or use a custom progress writer if you want formatted output.
171-195: Extract retry configuration to constants. The retry count and sleep duration are magic numbers. Define them as package constants for clarity and easier tuning.

```diff
+const (
+	postgresReadyMaxRetries = 30
+	postgresReadyRetryDelay = 500 * time.Millisecond
+)
+
 func (m *DatabaseManager) waitForPostgres() error {
 	connStr := fmt.Sprintf("host=localhost port=%d user=%s password=%s dbname=%s sslmode=disable",
 		m.port, postgresUser, postgresPassword, postgresDB)
-	maxRetries := 30
-	for i := 0; i < maxRetries; i++ {
+	for i := 0; i < postgresReadyMaxRetries; i++ {
 		db, err := sql.Open("postgres", connStr)
 		if err != nil {
-			time.Sleep(500 * time.Millisecond)
+			time.Sleep(postgresReadyRetryDelay)
 			continue
 		}
 		err = db.Ping()
 		db.Close()
 		if err == nil {
 			return nil
 		}
-		time.Sleep(500 * time.Millisecond)
+		time.Sleep(postgresReadyRetryDelay)
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (6 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- cli/internal/simulation/simulation.go
- cli/internal/netx/net.go
- cli/pkg/app/suga.go
- cli/internal/simulation/service/service.go
🧰 Additional context used
🧬 Code graph analysis (1)
cli/internal/simulation/database/database.go (3)
cli/internal/netx/net.go (5)
  ReservedPort (52-52), ReservePort (68-70), GetNextPort (72-93), MinPort (46-50), MaxPort (40-44)
cli/pkg/schema/database.go (1)
  DatabaseIntent (3-9)
cli/internal/style/style.go (1)
  Cyan (19-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build (windows-latest, amd64)
- GitHub Check: Build (macos-latest, arm64)
- GitHub Check: Security Scan
- GitHub Check: Test
🔇 Additional comments (10)
cli/go.mod (3)
8-8: Verify the `+incompatible` suffix is intentional. The `docker/docker v28.5.1+incompatible` suffix indicates a non-semantically versioned module. This is a known quirk of the Docker SDK, but confirm this is the expected/endorsed approach for your project's dependency strategy.
36-167: Verify OpenTelemetry prometheus exporter breaking change. The indirect dependency updates are confirmed as side effects of `docker/docker v28.5.1+incompatible`. Research shows OpenTelemetry v1.38.0 (the last release supporting Go 1.23) removes the `otel_scope_info` metric from the prometheus exporter, a breaking change. Protobuf v1.36.8 and Docker/containerd transitive dependencies are stable with no known breaking changes. Verify your service doesn't depend on the removed `otel_scope_info` metric; otherwise, this update is safe.
8-8: New dependencies are secure. Docker Engine v28.5.1 (released 2025-10-08) has no known unpatched CVEs, and lib/pq v1.10.9 has no published security advisories. Both versions are safe to use.
cli/internal/simulation/database/database.go (7)
1-22: Imports look good. All imports are properly used throughout the file.
24-29: Constants are appropriate for local development. Hardcoded credentials are acceptable for local dev environments. The custom PostgreSQL image includes UUID v7 support.
47-79: Constructor initialization is solid. Port allocation with fallback and volume name sanitization are well-handled.
197-229: Database creation logic is correct. Proper use of `pq.QuoteIdentifier` prevents SQL injection and handles special characters in database names. Error handling appropriately treats "already exists" as a non-error.
242-258: Container lifecycle methods are correct. Stop with timeout and RemoveVolume with force flag are appropriately implemented for the local development use case.
260-286: Accessor methods are well-implemented. Clean getters with appropriate defaults and return types.
288-309: Volume name sanitization correctly handles Docker constraints. The implementation properly enforces Docker volume naming rules with appropriate fallbacks.
Force-pushed 01fcf38 to 942236e
Actionable comments posted: 1
🧹 Nitpick comments (1)
cli/internal/simulation/simulation.go (1)
285-307: Past issue addressed; consider documenting access semantics. The empty `envKey` validation at lines 298-300 correctly addresses the past review concern; the service now fails fast with a clear error. However, the access control logic (lines 295-296) could benefit from a brief comment explaining when `dbIntent.Access` might be nil versus an empty map, and what each means for service access.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (6 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- cli/internal/simulation/service/service.go
- cli/internal/netx/net.go
- cli/pkg/app/suga.go
- cli/internal/simulation/database/database.go
🧰 Additional context used
🧬 Code graph analysis (1)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
  ReservedPort (52-52)
cli/internal/simulation/service/service.go (2)
  ServiceSimulation (20-37), NewServiceSimulation (272-292)
cli/internal/simulation/database/database.go (2)
  DatabaseManager (32-39), NewDatabaseManager (48-79)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build (windows-latest, amd64)
🔇 Additional comments (5)
cli/internal/simulation/simulation.go (4)
16-19: LGTM - necessary imports. The `time` and `database` imports support the new database lifecycle features.
43-43: LGTM - field addition. The `databaseManager` field appropriately holds the database lifecycle manager.
100-135: LGTM - database initialization workflow. The `startDatabases` method properly creates and starts the PostgreSQL instance, then provisions each database. Error handling and status output are appropriate.
416-422: LGTM - correct startup ordering. Starting databases before services ensures connection strings are available when services initialize.
cli/go.mod (1)
8-14: LGTM - necessary dependencies for database support. The Docker client and PostgreSQL driver additions are appropriate for the new database management features.
Force-pushed 942236e to b2fb48c
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli/internal/simulation/simulation.go (1)
280-329: Stop races with startServices map writes. If SIGINT lands while `startServices` is still assigning into `s.services`, the new `Stop` will iterate the same map concurrently and trigger `fatal error: concurrent map iteration and map write`. I hit this by ctrl-C'ing immediately after launching `dev`. Please stage writes into a local map (or guard the field with a mutex) and only publish it once the map is fully built before Stop can see it.

```diff
-	eventChans := []<-chan service.ServiceEvent{}
-
-	for serviceName, serviceIntent := range serviceIntents {
+	eventChans := []<-chan service.ServiceEvent{}
+	services := make(map[string]*service.ServiceSimulation, len(serviceIntents))
+
+	for serviceName, serviceIntent := range serviceIntents {
 		…
-		s.services[serviceName] = simulatedService
+		services[serviceName] = simulatedService
 		fmt.Fprintf(output, "%s Starting %s\n", greenCheck, styledName(serviceName, style.Teal))
 	}

-	for _, simulatedService := range s.services {
+	s.services = services
+
+	for _, simulatedService := range services {
```
♻️ Duplicate comments (1)
cli/internal/simulation/simulation.go (1)
469-478: Sleeping isn't a shutdown strategy. Relying on a fixed 2 s nap still leaves child processes running if they ignore SIGINT, and we end up tearing down the database underneath them. Please wait for each `svc.GetCmd().Wait()` (with a timeout and SIGKILL fallback) so we know the processes are gone before stopping Postgres. The same point was raised earlier and still applies.
🧹 Nitpick comments (1)
cli/pkg/app/suga.go (1)
659-681: Stop delivering signals once we exit Dev. Add `defer signal.Stop(sigChan)` right after `signal.Notify` so the runtime stops queueing interrupts into a channel nobody reads after `Dev` returns.

```diff
 sigChan := make(chan os.Signal, 1)
-signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
+signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
+defer signal.Stop(sigChan)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (6 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- cli/internal/simulation/database/database.go
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, there are two different contexts for SugaIntro Dashboard output: 1) Simulation server (simulation.go) where Dashboard was correctly removed because it was non-functional, and 2) Development WebSocket server (suga.go) where Dashboard URL is valid and needed for syncing with the dashboard using team slug and port parameters.
Applied to files:
cli/pkg/app/suga.go
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, the Dashboard output should be removed from simulation context (simulation.go) but kept for the edit command context (suga.go). The edit command has a valid use case for showing the Dashboard URL with /dev.
Applied to files:
cli/pkg/app/suga.go
🧬 Code graph analysis (1)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
  ReservedPort (52-52)
cli/internal/simulation/service/service.go (2)
  ServiceSimulation (20-37), NewServiceSimulation (272-292)
cli/internal/simulation/database/database.go (2)
  DatabaseManager (32-39), NewDatabaseManager (48-79)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build (windows-latest, amd64)
- GitHub Check: Security Scan
- GitHub Check: Test
🔇 Additional comments (3)
cli/internal/netx/net.go (1)
133-150: Inclusive max range makes sense. Letting the search hit `options.maxPort` prevents us from skipping the upper bound when every lower port is exhausted. Looks good.
cli/go.mod (1)
5-31: Dependency set matches the new DB flow. Pulling in `docker/docker` and `lib/pq` here lines up with the DatabaseManager work. No issues spotted.
cli/internal/simulation/service/service.go (1)
214-221: Env ordering looks good. Adding user-provided entries before the framework-controlled `PORT`/`SUGA_SERVICE_ADDRESS` ensures our defaults still win when keys collide.
Force-pushed 4a75b5d to bdfba8a
Actionable comments posted: 0
🧹 Nitpick comments (1)
cli/internal/simulation/simulation.go (1)
462-484: Consider adding a timeout to `servicesWg.Wait()`. While the WaitGroup approach properly replaces the fixed sleep from the previous review, a hung service could block shutdown indefinitely. Consider wrapping the wait in a timeout (e.g., 30 seconds) and proceeding to database shutdown regardless.
Example pattern:
```go
done := make(chan struct{})
go func() {
	s.servicesWg.Wait()
	close(done)
}()

select {
case <-done:
	// All services stopped
case <-time.After(30 * time.Second):
	fmt.Println("Warning: Timed out waiting for services to stop")
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (7 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- cli/internal/simulation/service/service.go
- cli/pkg/app/suga.go
- cli/internal/netx/net.go
🧰 Additional context used
🧬 Code graph analysis (2)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
  ReservedPort (52-52)
cli/internal/simulation/service/service.go (2)
  ServiceSimulation (20-37), NewServiceSimulation (272-292)
cli/internal/simulation/database/database.go (2)
  DatabaseManager (32-39), NewDatabaseManager (48-79)
cli/internal/simulation/database/database.go (3)
cli/internal/netx/net.go (5)
  ReservedPort (52-52), ReservePort (68-70), GetNextPort (72-93), MinPort (46-50), MaxPort (40-44)
cli/pkg/schema/database.go (1)
  DatabaseIntent (3-9)
cli/internal/style/style.go (1)
  Cyan (19-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test
- GitHub Check: Build (macos-latest, arm64)
- GitHub Check: Security Scan
🔇 Additional comments (12)
cli/go.mod (1)
8-8: LGTM! Dependencies align with database feature. The Docker client and PostgreSQL driver additions support the new database management functionality.
Also applies to: 14-14
cli/internal/simulation/simulation.go (4)
40-44: LGTM! Fields support database lifecycle and shutdown coordination.
101-136: LGTM! Database initialization sequence is correct. Creates the manager, starts the container, and provisions each database before services initialize.
286-309: LGTM! Environment injection correctly validates and injects DB credentials. The empty `envKey` validation (lines 299-301) addresses the previous review concern.
320-329: LGTM! WaitGroup correctly tracks service goroutines. This enables graceful shutdown coordination in the Stop method.
cli/internal/simulation/database/database.go (7)
24-29: LGTM! Constants appropriate for local development. Hard-coded credentials are acceptable for a local simulation environment.
31-45: LGTM! Struct design is clean with appropriate encapsulation.
47-79: LGTM! Constructor properly initializes Docker client and reserves ports. The fallback port allocation (5432 → 5432-6000 range) is a good UX touch.
82-169: LGTM! Start sequence is robust with proper readiness checking. The `AutoRemove: true` setting ensures containers are cleaned up automatically.
171-229: LGTM! Readiness check and database creation are correctly implemented. The `pq.QuoteIdentifier` usage (line 212) properly handles special characters in database names, addressing the previous review.
231-238: LGTM! Error detection uses proper type assertion. Checking the PostgreSQL error code `42P04` is more reliable than string matching, addressing the previous review.
240-307: LGTM! Lifecycle and accessor methods are well-implemented. The `sanitizeVolumeName` regex properly handles Docker volume naming constraints.
Actionable comments posted: 0
🧹 Nitpick comments (3)
cli/internal/simulation/database/database.go (2)
47-79: Consider accepting context as a parameter. Line 49 creates a background context that cannot be cancelled. If the caller needs to cancel database operations during shutdown, there's no propagation path. Consider accepting a `context.Context` parameter so the lifecycle can be controlled externally.

```diff
-func NewDatabaseManager(projectName, dataDir string) (*DatabaseManager, error) {
-	ctx := context.Background()
+func NewDatabaseManager(ctx context.Context, projectName, dataDir string) (*DatabaseManager, error) {
```
100-109: Consider showing image pull progress. Line 109 discards all pull output, leaving users staring at a static message during potentially lengthy downloads. Consider streaming progress or showing periodic status updates for better UX.
cli/internal/simulation/simulation.go (1)
462-484: Add timeout to service shutdown. Line 473 waits indefinitely for services to exit. If a service hangs or ignores the interrupt signal, `Stop()` will hang forever. Consider adding a timeout:

```diff
 // Wait for all service goroutines to complete
-s.servicesWg.Wait()
+done := make(chan struct{})
+go func() {
+	s.servicesWg.Wait()
+	close(done)
+}()
+
+select {
+case <-done:
+	// Services stopped cleanly
+case <-time.After(30 * time.Second):
+	fmt.Println("Warning: Some services did not stop within timeout")
+}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- cli/go.sum is excluded by !**/*.sum
- go.work.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
- cli/go.mod (5 hunks)
- cli/internal/netx/net.go (1 hunks)
- cli/internal/simulation/database/database.go (1 hunks)
- cli/internal/simulation/service/service.go (1 hunks)
- cli/internal/simulation/simulation.go (7 hunks)
- cli/pkg/app/suga.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- cli/pkg/app/suga.go
- cli/internal/simulation/service/service.go
- cli/internal/netx/net.go
🧰 Additional context used
🧬 Code graph analysis (2)
cli/internal/simulation/database/database.go (3)
cli/internal/netx/net.go (5)
  ReservedPort (52-52), ReservePort (68-70), GetNextPort (72-93), MinPort (46-50), MaxPort (40-44)
cli/pkg/schema/database.go (1)
  DatabaseIntent (3-9)
cli/internal/style/style.go (1)
  Cyan (19-19)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
  ReservedPort (52-52)
cli/internal/simulation/service/service.go (2)
  ServiceSimulation (21-38), NewServiceSimulation (275-295)
cli/internal/simulation/database/database.go (2)
  DatabaseManager (32-39), NewDatabaseManager (48-79)
🔇 Additional comments (11)
cli/go.mod (1)
8-8: Dependency additions look good. The additions of `docker/docker` and `lib/pq` align with the PostgreSQL lifecycle management introduced in this PR. The `+incompatible` suffix on docker/docker is expected due to that library's versioning scheme.
Also applies to: 14-14
cli/internal/simulation/database/database.go (5)
171-195: Retry logic is sound. The 15-second timeout (30 retries × 500ms) is reasonable for local PostgreSQL startup, and connections are properly closed on each attempt.
197-229: Database creation logic is correct. The use of `pq.QuoteIdentifier` handles special characters safely, and the error checking properly distinguishes duplicate database errors from genuine failures.
240-256: Cleanup methods look good. The 10-second stop timeout is reasonable, and the early return when containerID is empty prevents unnecessary errors. Force removal in `RemoveVolume` is appropriate for cleanup operations.
258-284: Accessor methods are well-designed. Connection strings use hardcoded credentials, which is acceptable for local development. The empty string return from `GetEnvVarKey` is properly validated by callers in simulation.go.
286-307: Sanitization logic is thorough. Handles Docker volume naming constraints comprehensively, including edge cases like names starting with special characters or empty strings.
cli/internal/simulation/simulation.go (5)
40-44: Field additions support graceful shutdown.The
servicesWgWaitGroup enables proper coordination during shutdown, addressing the synchronization concerns from previous reviews.
101-136: Database initialization is well-structured. Proper error handling and user feedback throughout the database startup sequence.
286-307: Database access injection is correct. Lines 299-301 properly validate that `env_var_key` is configured before injecting connection strings, with a clear error message. This addresses the concern from previous reviews.
321-329: WaitGroup usage is correct. Proper Add/Done pairing ensures `Stop()` can reliably wait for all service goroutines to complete.
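The Add/Done pairing being approved can be sketched as below; `runServices` and the counter are illustrative stand-ins for the simulation's goroutine tracking, not its actual code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runServices sketches the pairing the review describes: Add is called
// before each goroutine starts and Done is deferred inside it, so a
// Stop()-style caller can Wait for every service goroutine to exit.
func runServices(n int) int32 {
	var servicesWg sync.WaitGroup
	var stopped atomic.Int32

	for i := 0; i < n; i++ {
		servicesWg.Add(1) // before the goroutine, so Wait cannot race a late Add
		go func() {
			defer servicesWg.Done()
			// ... run the simulated service until it is signalled to stop ...
			stopped.Add(1)
		}()
	}

	servicesWg.Wait() // Stop() blocks here until all services have exited
	return stopped.Load()
}

func main() {
	fmt.Println("services stopped:", runServices(3))
}
```

Calling `Add` inside the goroutine instead would be a bug: `Wait` could return before a slow goroutine had registered itself.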
419-425: Startup ordering is correct. Starting databases before services ensures connection strings are available when services initialize.
jyecusch
left a comment
No issues testing this locally, looks good. Mostly just a suggestion to clean up comments
Actionable comments posted: 0
🧹 Nitpick comments (1)
cli/internal/simulation/database/database.go (1)
159-183: Consider connection reuse in retry loop. Opening and closing a new connection on each retry is inefficient. Consider opening once and retrying just the `Ping()` call, closing only on the final failure or after success.
```diff
 func (m *DatabaseManager) waitForPostgres() error {
 	connStr := fmt.Sprintf("host=localhost port=%d user=%s password=%s dbname=%s sslmode=disable",
 		m.port, postgresUser, postgresPassword, postgresDB)

 	maxRetries := 30
+	db, err := sql.Open("postgres", connStr)
+	if err != nil {
+		return fmt.Errorf("failed to open database: %w", err)
+	}
+	defer db.Close()
+
 	for i := 0; i < maxRetries; i++ {
-		db, err := sql.Open("postgres", connStr)
-		if err != nil {
-			time.Sleep(500 * time.Millisecond)
-			continue
-		}
-
-		err = db.Ping()
-		db.Close()
-
-		if err == nil {
+		if err := db.Ping(); err == nil {
 			return nil
 		}
 		time.Sleep(500 * time.Millisecond)
 	}

 	return fmt.Errorf("PostgreSQL did not become ready within timeout")
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `cli/internal/simulation/database/database.go` (1 hunks)
- `cli/internal/simulation/simulation.go` (7 hunks)
- `cli/pkg/app/suga.go` (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, there are two different contexts for SugaIntro Dashboard output: 1) Simulation server (simulation.go) where Dashboard was correctly removed because it was non-functional, and 2) Development WebSocket server (suga.go) where Dashboard URL is valid and needed for syncing with the dashboard using team slug and port parameters.
Applied to files:
cli/pkg/app/suga.go
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, the Dashboard output should be removed from simulation context (simulation.go) but kept for the edit command context (suga.go). The edit command has a valid use case for showing the Dashboard URL with /dev.
Applied to files:
cli/pkg/app/suga.go
🧬 Code graph analysis (2)
cli/internal/simulation/simulation.go (3)
cli/internal/netx/net.go (1)
- `ReservedPort` (52-52)

cli/internal/simulation/service/service.go (2)

- `ServiceSimulation` (21-38), `NewServiceSimulation` (275-295)

cli/internal/simulation/database/database.go (2)

- `DatabaseManager` (32-39), `NewDatabaseManager` (48-74)
cli/internal/simulation/database/database.go (2)
cli/internal/netx/net.go (5)
- `ReservedPort` (52-52), `ReservePort` (68-70), `GetNextPort` (72-93), `MinPort` (46-50), `MaxPort` (40-44)

cli/pkg/schema/database.go (1)

- `DatabaseIntent` (3-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build (windows-latest, amd64)
- GitHub Check: Security Scan
- GitHub Check: Test
🔇 Additional comments (11)
cli/internal/simulation/database/database.go (5)
24-29: LGTM - Constants are well-defined. The hardcoded PostgreSQL image version is acceptable for now, though as noted in previous feedback, making this configurable could be valuable for testing against different versions.

48-74: LGTM - Constructor properly handles port allocation and setup. Port allocation with a fallback range and volume name sanitization are both handled correctly.

186-215: LGTM - Database creation properly handles identifiers and duplicates. Using `pq.QuoteIdentifier` prevents SQL injection and handles special characters like hyphens in resource names. Error detection via type assertion is robust.

217-224: LGTM - Type assertion correctly identifies duplicate database errors. Checking error code 42P04 via type assertion is more reliable than string matching.

272-289: LGTM - Volume name sanitization is thorough. It handles lowercase conversion, invalid characters, leading special characters, and empty results appropriately.
cli/pkg/app/suga.go (1)
657-681: LGTM - Graceful shutdown properly implemented. The signal handling with deferred cleanup ensures resources are released on both normal and interrupted termination. Starting the server in a goroutine with an error channel allows proper coordination between server lifecycle and signals.
cli/internal/simulation/simulation.go (5)
101-135: LGTM - Database initialization and status reporting. Creates the database manager, starts PostgreSQL, and creates individual databases with clear status output.

285-306: LGTM - Database environment injection with proper validation. Cloning the service intent and injecting connection strings for accessible databases is correct. The validation at lines 297-299 ensures we fail fast with a clear error if `env_var_key` is missing, preventing invalid environment variables.

319-327: LGTM - WaitGroup properly coordinates service lifecycle. Adding to the WaitGroup before starting the goroutine and deferring `Done()` ensures proper tracking of service shutdown.

417-423: LGTM - Databases started before services. Starting databases before services ensures connection strings are available when services initialize.

460-481: LGTM - Graceful shutdown with proper ordering. Signaling services first, waiting for them to exit via WaitGroup, then stopping the database manager ensures a clean shutdown without severing active database connections.
Co-authored-by: Jye Cusch <jye.cusch@gmail.com>
d47ebaa to
5bd2d84
Compare
Actionable comments posted: 0
🧹 Nitpick comments (1)
cli/internal/simulation/database/database.go (1)
92-101: Image pull feedback discarded. The `io.Copy(io.Discard, reader)` on line 101 means users won't see download progress. For large images or slow connections, this could feel like the command is hanging. Consider streaming progress to stdout for better UX.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `cli/go.sum` is excluded by `!**/*.sum`
- `go.work.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (6)
- `cli/go.mod` (5 hunks)
- `cli/internal/netx/net.go` (1 hunks)
- `cli/internal/simulation/database/database.go` (1 hunks)
- `cli/internal/simulation/service/service.go` (1 hunks)
- `cli/internal/simulation/simulation.go` (7 hunks)
- `cli/pkg/app/suga.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- cli/internal/netx/net.go
- cli/internal/simulation/service/service.go
- cli/internal/simulation/simulation.go
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, there are two different contexts for SugaIntro Dashboard output: 1) Simulation server (simulation.go) where Dashboard was correctly removed because it was non-functional, and 2) Development WebSocket server (suga.go) where Dashboard URL is valid and needed for syncing with the dashboard using team slug and port parameters.
Applied to files:
cli/pkg/app/suga.go
📚 Learning: 2025-09-23T23:36:19.429Z
Learnt from: davemooreuws
Repo: nitrictech/suga PR: 74
File: cli/internal/simulation/simulation.go:85-85
Timestamp: 2025-09-23T23:36:19.429Z
Learning: In the Suga CLI, the Dashboard output should be removed from simulation context (simulation.go) but kept for the edit command context (suga.go). The edit command has a valid use case for showing the Dashboard URL with /dev.
Applied to files:
cli/pkg/app/suga.go
🧬 Code graph analysis (1)
cli/internal/simulation/database/database.go (2)
cli/internal/netx/net.go (5)
- `ReservedPort` (52-52), `ReservePort` (68-70), `GetNextPort` (72-93), `MinPort` (46-50), `MaxPort` (40-44)

cli/pkg/schema/database.go (1)

- `DatabaseIntent` (3-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Test
- GitHub Check: Security Scan
🔇 Additional comments (8)
cli/pkg/app/suga.go (1)
658-681: Clean graceful shutdown implementation. Signal handling is properly set up, the deferred cleanup ensures `Stop()` runs on all exit paths, and the goroutine + select pattern correctly handles both server errors and interrupt signals.

cli/internal/simulation/database/database.go (5)

47-74: Solid constructor with good port allocation fallback. The Docker client setup is standard, and the port reservation strategy (default 5432 with a fallback range) handles port conflicts gracefully.

103-156: Container lifecycle properly orchestrated. Config is sensible (AutoRemove for cleanup, 127.0.0.1 binding for security, volume mount for persistence), and the readiness wait ensures the database is usable before returning.

185-224: Database creation is safe and idempotent. `pq.QuoteIdentifier` (line 199) correctly handles special characters in database names, and the type-assertion-based error detection (lines 220-221) is robust. Previous review concerns about SQL injection and fragile string matching have been properly addressed.

226-270: Helper methods are clean and correct. Stop uses an appropriate timeout, connection string formatting is correct for local dev (sslmode=disable is acceptable here), and all getters work as expected.

272-289: Volume name sanitization handles edge cases well. The regex replacement, first-character validation, and fallback ensure Docker-compatible volume names for any project name input.
cli/go.mod (2)
8-8: Dependencies verified as current and secure. lib/pq v1.10.9 is the latest stable release, and docker/docker v28.5.1 has no known security advisories. Both versions support the PR's objective without security concerns.

8-8: No issues found: docker/docker v28.5.1+incompatible is compatible with Go 1.24.3. Docker v28.5.1 was rebuilt with Go 1.24.8, which shares the same major.minor version as the project's Go 1.24.3, so no compatibility issues are expected. The `+incompatible` suffix is expected (as noted in the review). Code inspection shows standard docker client initialization with no transitive dependency conflicts.
No longer needed now that the CLI can serve local databases
🎉 This PR is included in version 0.5.0 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
Fixes: NIT-457