[Disk Manager] Refactor Filestore client logs#5330
Conversation
Copy-pasted config from MR #3139, so the commit message is kind of strange. It will be changed during squash.
Pull request overview
This PR refactors logging in the Filestore Go public SDK client by replacing the previous context-aware Log/Logger(level) design with a single Logger interface exposing level-specific *f methods, and updates Disk Manager’s NFS client wiring accordingly.
Changes:
- Replaced `Log`/per-level logger selection with a unified `Logger` interface (adds `LOG_TRACE`, removes syslog support).
- Updated Filestore SDK clients (grpc + durable) and constructors to accept/use the new `Logger`.
- Updated Disk Manager NFS client factory/endpoint health-check to pass the Disk Manager logger directly; removed the custom wrapper logger types.
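From the change summary above, the unified `Logger` interface presumably looks roughly like this; the exact method set, and the `stdLogger` example implementation, are assumptions for illustration, not the SDK's actual code:

```go
package main

import (
	"fmt"
	"os"
)

// Logger sketches the unified, level-specific interface the PR describes;
// the exact method set in the SDK is an assumption.
type Logger interface {
	Debugf(format string, args ...interface{})
	Infof(format string, args ...interface{})
	Warnf(format string, args ...interface{})
	Errorf(format string, args ...interface{})
	Fatalf(format string, args ...interface{})
}

// stdLogger is a trivial stderr-backed implementation used for illustration.
type stdLogger struct{}

func (stdLogger) Debugf(f string, a ...interface{}) { fmt.Fprintf(os.Stderr, "DEBUG "+f+"\n", a...) }
func (stdLogger) Infof(f string, a ...interface{})  { fmt.Fprintf(os.Stderr, "INFO "+f+"\n", a...) }
func (stdLogger) Warnf(f string, a ...interface{})  { fmt.Fprintf(os.Stderr, "WARN "+f+"\n", a...) }
func (stdLogger) Errorf(f string, a ...interface{}) { fmt.Fprintf(os.Stderr, "ERROR "+f+"\n", a...) }
func (stdLogger) Fatalf(f string, a ...interface{}) {
	fmt.Fprintf(os.Stderr, "FATAL "+f+"\n", a...)
	os.Exit(1)
}

func main() {
	var l Logger = stdLogger{}
	l.Infof("request #%d sent", 42)
}
```

Callers such as the grpc and durable clients then log through the single interface instead of selecting a per-level logger first.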
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| cloud/filestore/public/sdk/go/client/ya.make | Removes log_test.go from test sources. |
| cloud/filestore/public/sdk/go/client/log_test.go | Deletes tests for the previous logging API. |
| cloud/filestore/public/sdk/go/client/log.go | Introduces new Logger interface + default logger implementation; adds LOG_TRACE. |
| cloud/filestore/public/sdk/go/client/grpc.go | Switches request logging to the new Logger calls; updates ctor signature. |
| cloud/filestore/public/sdk/go/client/durable.go | Switches retry logging to the new Logger calls; updates ctor signatures. |
| cloud/filestore/public/sdk/go/client/client.go | Updates top-level SDK constructors to accept a Logger. |
| cloud/filestore/public/sdk/go/client/durable_test.go | Adjusts tests to pass the new Logger parameter name/type. |
| cloud/disk_manager/internal/pkg/clients/nfs/factory.go | Removes custom log wrapper and passes logging.GetLogger(ctx) to the SDK. |
| cloud/disk_manager/internal/pkg/clients/nfs/endpoint_picker.go | Passes logging.GetLogger(ctx) to the SDK for health checks. |
```diff
 			f.metrics.OnError(err)
 		},
 	},
-	NewNfsClientLog(nfs_client.LOG_DEBUG),
+	logging.GetLogger(ctx),
 )
```
logging.GetLogger(ctx) can return nil, and the updated filestore SDK logger interface no longer takes context, so any context-bound fields (request/operation IDs added via logging.WithCommonFields / ctxlog.WithFields) won’t be attached to logs emitted by the SDK. Consider either (1) binding ctx fields into the logger before passing it (e.g., via log.With(logger, ctxlog.ContextFields(ctx)...)), and/or (2) ensuring a non-nil fallback logger so health-check/client creation can’t panic when ctx lacks a logger.
```diff
 	if e.level < level {
 		return
 	}
-
-func (l *debugLogger) Print(ctx context.Context, v ...interface{}) {
-	_ = l.writer.Debug(fmt.Sprint(v...))
+	e.logger.Printf(format, args...)
```
defaultLogger.write dereferences e.logger without guarding against a nil *log.Logger. Since NewLog accepts a nil logger and the rest of the client now unconditionally calls logger.Debugf/Infof/etc, passing nil will cause a runtime panic. Consider either rejecting nil in NewLog/NewGrpcClient (return an error or panic early with a clear message) or substituting a no-op logger / io.Discard-backed logger when nil is provided.
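A sketch of the `io.Discard`-backed substitution suggested above, under the assumption that `NewLog` and `defaultLogger` have roughly this shape (field names and the constructor signature are illustrative):

```go
package main

import (
	"fmt"
	"io"
	"log"
)

type LogLevel int

const (
	LOG_ERROR LogLevel = iota
	LOG_WARN
	LOG_INFO
	LOG_DEBUG
	LOG_TRACE
)

type defaultLogger struct {
	logger *log.Logger
	level  LogLevel
}

func (e *defaultLogger) write(level LogLevel, format string, args ...interface{}) {
	if e.level < level {
		return
	}
	e.logger.Printf(format, args...)
}

// NewLog substitutes an io.Discard-backed *log.Logger when nil is passed,
// so later unconditional calls cannot dereference a nil logger.
func NewLog(logger *log.Logger, level LogLevel) *defaultLogger {
	if logger == nil {
		logger = log.New(io.Discard, "", 0)
	}
	return &defaultLogger{logger: logger, level: level}
}

func main() {
	l := NewLog(nil, LOG_DEBUG) // safe: nil replaced by a discard logger
	l.write(LOG_DEBUG, "ignored: %d", 1)
	fmt.Println("ok")
}
```

Rejecting nil with an early error from `NewLog`/`NewGrpcClient` is the stricter alternative; the discard logger keeps the API infallible.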
```diff
 	}
 	return nil
+
+func (e *defaultLogger) Fatalf(format string, args ...interface{}) {
+	log.Fatalf(format, args...)
```
defaultLogger.Fatalf uses the package-level log.Fatalf instead of the configured underlying logger. This bypasses the logger’s output/prefix/flags and makes Fatalf behave inconsistently with the other methods; it also ignores the configured log level filter in write(). Prefer delegating to e.logger (or to write + os.Exit) so Fatalf uses the same sink and formatting as the rest of this logger implementation.
Suggested change:

```diff
-	log.Fatalf(format, args...)
+	e.write(LOG_ERROR, format, args...)
+	os.Exit(1)
```
```diff
 	client.setupHeaders(ctx, req)
 
-	if logger := client.log.Logger(LOG_DEBUG); logger != nil {
-		logger.Printf(ctx, "%s #%d sending request", requestName(req), requestID)
-	}
+	client.logger.Debugf("%s #%d sending request", requestName(req), requestID)
 
 	ctx, cancel := context.WithTimeout(ctx, client.timeout)
 	defer cancel()
```
executeRequest now unconditionally calls client.logger.Debugf/Warnf. If NewGrpcClient is called with a nil Logger (e.g., a context without a logger where logging.GetLogger(ctx) returns nil), this will panic at runtime. Consider normalizing a nil logger to a no-op logger inside NewGrpcClient/NewGrpcEndpointClient (or returning an error) so the client is safe to use when logging is not configured.
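The nil-normalization suggested here could look like the following sketch; `NewGrpcClient`'s real signature takes more parameters, and `grpcClient`/`noopLogger` internals are assumptions:

```go
package main

import "fmt"

// Logger is a cut-down version of the SDK interface (an assumption).
type Logger interface {
	Debugf(format string, args ...interface{})
	Warnf(format string, args ...interface{})
}

// noopLogger makes logging calls safe when no logger is configured.
type noopLogger struct{}

func (noopLogger) Debugf(string, ...interface{}) {}
func (noopLogger) Warnf(string, ...interface{})  {}

type grpcClient struct {
	logger Logger
}

// NewGrpcClient normalizes a nil logger to a no-op implementation so
// executeRequest can call client.logger.Debugf unconditionally.
// (Illustrative signature; the real constructor takes more parameters.)
func NewGrpcClient(logger Logger) *grpcClient {
	if logger == nil {
		logger = noopLogger{}
	}
	return &grpcClient{logger: logger}
}

func (c *grpcClient) executeRequest(requestName string, requestID uint64) {
	c.logger.Debugf("%s #%d sending request", requestName, requestID)
}

func main() {
	c := NewGrpcClient(nil) // safe even when logging.GetLogger(ctx) returned nil
	c.executeRequest("CreateFileStore", 1)
	fmt.Println("no panic")
}
```

The same normalization in `NewDurableClient`/`NewDurableEndpointClient` would cover the retry-path logging flagged in the next comment.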
```diff
 	client.logger.Infof(
 		"%s%s completed (retries: %d, duration: %v)",
 		requestName(req),
 		requestDetails(req),
 		retryCount,
```
durableClient.executeRequest now unconditionally calls client.logger.Infof/Errorf/Warnf. If the durable client is constructed with a nil Logger, these calls will panic. Consider ensuring NewDurableClient/NewDurableEndpointClient replace a nil logger with a no-op logger (or otherwise enforce non-nil) to keep retries safe even when logging is disabled/unconfigured.
Note: This is an automated comment appended during the run.
🟢 linux-x86_64-relwithdebinfo target: cloud/tasks/,cloud/storage/ (test time: 366s): all tests PASSED for commit 59d0d29.
🟢 linux-x86_64-relwithdebinfo target: cloud/disk_manager/ (test time: 1855s): all tests PASSED for commit 59d0d29.
🟢 linux-x86_64-relwithdebinfo target: cloud/filestore/ (test time: 7052s): all tests PASSED for commit 59d0d29.
🟢 linux-x86_64-relwithdebinfo target: cloud/blockstore/ (test time: 11282s): all tests PASSED for commit 59d0d29.
No description provided.