
feat: implementing deadlines CRUD #559

Merged
arthurkz merged 21 commits into develop from feature/RPRTR-377
Mar 9, 2026

Conversation


arthurkz commented Mar 5, 2026

Pull Request Checklist

Pull Request Type

  • Manager
  • Worker
  • Frontend
  • Infrastructure
  • Packages
  • Pipeline
  • Tests
  • Documentation

Checklist

Please check each item after it's completed.

  • I have tested these changes locally.
  • I have updated the documentation accordingly.
  • I have added necessary comments to the code, especially in complex areas.
  • I have ensured that my changes adhere to the project's coding standards.
  • I have checked for any potential security issues.
  • I have ensured that all tests pass.
  • I have updated the version appropriately (if applicable).
  • I have confirmed this code is ready for review.

Additional Notes

Note: please always remember to target your PR at the develop branch instead of main.

arthurkz and others added 7 commits March 4, 2026 13:56
…e tests

Introduces the deadline entity with full CRUD operations (create, get all, update, delete) and a deliver workflow that marks deadlines as delivered with timestamp tracking. Includes MongoDB repository with filtering (active, type, date range), pagination with total count, soft-delete support, and proper indexes. Improves HTTP body parser to return 400 for all malformed JSON instead of 500. Adds unit, fuzz, property, integration, and chaos tests.

X-Lerian-Ref: 0x1
fix: reduce idempotency TTL from 24h to 30s
# Conflicts:
#	components/manager/internal/adapters/http/in/routes.go
arthurkz requested a review from a team as a code owner March 5, 2026 17:43

coderabbitai bot commented Mar 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 16214cef-7d3d-421e-b3bd-635ffdccb4a2

📥 Commits

Reviewing files that changed from the base of the PR and between d47f618 and 9fab850.

📒 Files selected for processing (1)
  • pkg/net/http/http-utils.go

Walkthrough

Adds a deadline management feature:
  • new deadline domain model with validation, recurrence, and status logic;
  • MongoDB persistence with a Deadline repository, soft-delete, index management, and multi-tenant collection resolution;
  • UseCase methods CreateDeadline, GetAllDeadlines, UpdateDeadlineByID, DeleteDeadlineByID, DeliverDeadline;
  • HTTP DeadlineHandler and routes under /v1/deadlines;
  • extended query parsing for deadline filters;
  • new constants and error mappings;
  • IdempotencyTTL reduced;
  • .trivyignore updated;
  • extensive unit, integration, fuzz, chaos, and property tests for the feature.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Handler as DeadlineHandler
    participant Service as UseCase
    participant TemplateRepo as Template Repository
    participant DeadlineRepo as Deadline Repository
    participant MongoDB

    Client->>Handler: POST /v1/deadlines (CreateDeadlineInput)
    Handler->>Handler: Validate payload
    Handler->>Service: CreateDeadline(ctx, input)
    alt TemplateID provided
        Service->>TemplateRepo: FindByID(templateID)
        TemplateRepo->>MongoDB: Query template
        MongoDB-->>TemplateRepo: Template doc
        TemplateRepo-->>Service: Template entity
        Service->>Service: Set TemplateName on entity
    end
    Service->>DeadlineRepo: Create(ctx, model)
    DeadlineRepo->>MongoDB: Insert document
    MongoDB-->>DeadlineRepo: Insert result
    DeadlineRepo-->>Service: Created deadline
    Service-->>Handler: Deadline entity
    Handler-->>Client: 201 Created (deadline)
sequenceDiagram
    participant Client
    participant Handler as DeadlineHandler
    participant Service as UseCase
    participant DeadlineRepo as Deadline Repository
    participant MongoDB

    Client->>Handler: GET /v1/deadlines?limit=10&page=1&active=true&type=task
    Handler->>Handler: Parse & validate query params
    Handler->>Service: GetAllDeadlines(ctx, filters)
    Service->>DeadlineRepo: FindList(ctx, filters)
    DeadlineRepo->>MongoDB: Find with filters & pagination
    MongoDB-->>DeadlineRepo: Documents
    DeadlineRepo-->>Service: []*Deadline
    Service->>DeadlineRepo: Count(ctx, filters)
    DeadlineRepo->>MongoDB: Count matching docs
    MongoDB-->>DeadlineRepo: Total count
    DeadlineRepo-->>Service: count
    Service-->>Handler: Deadlines + count
    Handler-->>Client: 200 OK (paginated list)
sequenceDiagram
    participant Client
    participant Handler as DeadlineHandler
    participant Service as UseCase
    participant DeadlineRepo as Deadline Repository
    participant MongoDB

    Client->>Handler: PATCH /v1/deadlines/{id}/deliver (DeliverDeadlineInput)
    Handler->>Handler: Validate ID & payload
    Handler->>Service: DeliverDeadline(ctx, id, input)
    Service->>Service: Build update patch (delivered_at set/cleared)
    Service->>DeadlineRepo: Update(ctx, id, patch)
    DeadlineRepo->>MongoDB: Update document
    MongoDB-->>DeadlineRepo: Update result
    DeadlineRepo-->>Service: success
    Service->>DeadlineRepo: FindByID(ctx, id)
    DeadlineRepo->>MongoDB: Query by ID
    MongoDB-->>DeadlineRepo: Updated document
    DeadlineRepo-->>Service: Deadline entity
    Service-->>Handler: Updated deadline
    Handler-->>Client: 200 OK (updated deadline)
🚥 Pre-merge checks | ✅ 1 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check (❓ Inconclusive): The PR description follows the required template structure with the PR Type section checked (Manager) and includes the full checklist. However, none of the checklist items are marked as completed, and the Additional Notes section lacks meaningful context about the implementation details, testing performed, or any specific concerns for reviewers. Resolution: complete the checklist items (mark them as done once verified) and add implementation details in the Additional Notes section, such as key features implemented, testing approach, and any known limitations or areas requiring attention.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title 'feat: implementing deadlines CRUD' clearly and concisely describes the main change: adding CRUD operations for deadlines in the Manager component. It follows conventional commit format and accurately reflects the primary scope of the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 28

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.trivyignore:
- Line 12: The .trivyignore entry for CVE-2026-25882 should not be permanently
suppressed without documentation; either remove the line or replace it with a
documented suppression: add a comment immediately above the CVE-2026-25882 entry
that records the affected package and version, the reason/justification, any
compensating controls, and a removal deadline or tracking ticket ID (e.g.,
"removal-by: YYYY-MM-DD" or "ticket: `#1234`") so scanners can re-evaluate later;
ensure the exact token "CVE-2026-25882" remains identifiable for Trivy when you
reintroduce the entry.
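
A documented suppression of the kind this comment asks for might look like the sketch below; the package, justification, date, and ticket values are placeholders, not details from this PR:

```
# package: example-pkg v1.2.3 (placeholder)
# reason: vulnerable code path not reachable in this build (placeholder justification)
# removal-by: 2026-06-30, ticket: #1234
CVE-2026-25882
```

Trivy only needs the bare CVE token on its own line, so the comment lines above it carry the audit trail without affecting scanning.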

In `@components/manager/internal/adapters/http/in/deadline_test.go`:
- Around line 44-57: The test middleware setupDeadlineContextMiddleware uses
plain string keys ("logger", "tracer", "requestId") when storing values in the
context; change these to unexported typed key types (e.g., define private types
like type ctxKey string and const keys such as loggerKey ctxKey = "logger",
tracerKey ctxKey = "tracer", requestIDKey ctxKey = "requestId") and use those
keys when calling context.WithValue and c.SetUserContext so keys cannot collide
at runtime; if the production helper commons.NewTrackingFromContext expects the
same string keys, either update the test to use the same typed keys as
production or align production to use the typed keys so both use the same unique
identifiers.
- Around line 619-628: The test expects a 404 for a repository "deadline not
found" error but the handler currently returns 500; update the delete flow to
map the repository's not-found result to HTTP 404. Locate the repository call
Delete(...) on the deadline.MockRepository and the HTTP delete handler (e.g.,
DeleteDeadline / DeleteDeadlineHandler or the method named Delete on the
deadline HTTP adapter) and change the error handling so that when the repo
returns the "deadline not found" error (or the repository's ErrNotFound
sentinel), the handler/service writes http.StatusNotFound instead of
http.StatusInternalServerError; keep all other errors returning 500. Ensure the
mapping is applied in the service layer if a service wrapper exists between the
handler and repository.
- Around line 379-384: The error mapping incorrectly maps ErrInvalidSortOrder
and ErrPaginationLimitExceeded to EntityNotFoundError causing 404s; update the
mapping in pkg/errors.go so both ErrInvalidSortOrder and
ErrPaginationLimitExceeded are mapped to ValidationError (same category used for
ErrInvalidDateRange and ErrMetadataKeyLengthExceeded) to ensure these validation
failures produce 400 Bad Request responses; locate the error-to-domain mapping
logic (references to ErrInvalidSortOrder, ErrPaginationLimitExceeded,
EntityNotFoundError, ValidationError) and replace the two EntityNotFoundError
mappings with ValidationError.

In `@components/manager/internal/adapters/http/in/routes.go`:
- Around line 92-98: The deadline routes are missing the multi-tenant middleware
application; update each deadline route declaration that registers handlers
(f.Post("/v1/deadlines", ... , deadlineHandler.CreateDeadline),
f.Get("/v1/deadlines", ... , deadlineHandler.GetAllDeadlines),
f.Put("/v1/deadlines/:id", ... , deadlineHandler.UpdateDeadlineByID),
f.Delete("/v1/deadlines/:id", ... , deadlineHandler.DeleteDeadlineByID), and
f.Patch("/v1/deadlines/:id/deliver", ... , deadlineHandler.DeliverDeadline)) to
include WhenEnabled(ttMiddleware) in the middleware chain so they match
template/report/data-source routes; also review UpdateDeadlineByID (currently
registered with PUT) and consider changing it to PATCH to align with other
partial-update endpoints if updates are partial.

In `@components/manager/internal/services/create-deadline.go`:
- Around line 67-69: The business error call when tmpl == nil passes "template"
as the format string itself, causing missing variadic args and malformed
messages; update the ValidateBusinessError invocation (used in
create-deadline.go) to supply a proper format string and the "template" value as
a formatting argument (i.e. use constant ErrEntityNotFound with a format like
"%s" or "%v" and pass "template" as the corresponding arg) so
ValidateBusinessError receives both format and args; target the
ValidateBusinessError(...) call and the ErrEntityNotFound usage near the tmpl ==
nil check.
- Around line 20-30: In CreateDeadline (UseCase.CreateDeadline) add a nil-input
guard before any access to input.Name/input.Type (i.e., before the tracer
span.SetAttributes call): if input == nil, record the condition on the
span/logger (e.g., set an attribute or log that input is nil) and return a clear
error (e.g., errors.New("deadline input is nil") or the existing domain
validation error) so the function does not panic when a nil payload is received.

In `@components/manager/internal/services/deadline-service_test.go`:
- Around line 430-747: Add explicit table-driven test cases where the input
payload is nil to ensure UpdateDeadlineByID and the Deliver method do not panic
and return the expected business error (e.g., constant.ErrBadRequest).
Specifically, in TestUseCase_UpdateDeadlineByID add a test entry with input:
nil, mockSetup returning a deadline.NewMockRepository(ctrl) (no repo calls
expected), expectErr: true and errContains: constant.ErrBadRequest.Error(), and
assert result is nil; do the same pattern for the Deliver test (the other test
block referenced around lines 836-1003) by adding a nil-input case that expects
the same error and no repo interactions. Ensure you reference and exercise the
UseCase.UpdateDeadlineByID and UseCase.Deliver (or DeliverDeadline) methods so
the nil-guard behavior is locked in.

In `@components/manager/internal/services/deliver-deadline.go`:
- Around line 26-36: The DeliverDeadline handler dereferences input and
input.Delivered without nil checks, which can panic; update
UseCase.DeliverDeadline to first validate that input != nil and input.Delivered
!= nil (return an appropriate error) before calling span.SetAttributes or any
other dereference, and apply the same guards before any further uses later in
the method (e.g., the code around the second dereference at the block covering
lines 44-49) so all accesses to input.Delivered are safe.

In `@components/manager/internal/services/update-deadline-by-id.go`:
- Around line 26-100: Refactor UpdateDeadlineByID by extracting the mapping and
validation logic into one or two helpers (e.g., buildDeadlineUpdateFields(ctx
context.Context, span trace.Span, input *deadline.UpdateDeadlineInput) (bson.M,
error) or split to validateDeadlineFields and mapDeadlineFields) so the main
function only handles tracing, calling the helper(s), applying
setFields["updated_at"] = time.Now(), and executing the DB update; move checks
that use deadline.ValidTypes, deadline.ValidFrequencies,
deadline.ValidColorRegex and calls to
libOpentelemetry.HandleSpanBusinessErrorEvent/pkg.ValidateBusinessError into the
helper(s) so behavior is unchanged but cyclomatic complexity in
UpdateDeadlineByID is reduced. Ensure helpers return the same error types and
preserve reqId/span attributes and log context.
- Around line 58-60: When input.TemplateID is changed the denormalized
template_name can become stale; in the update path that builds setFields (the
block checking input.TemplateID != nil where setFields["template_id"] =
*input.TemplateID) also update or clear setFields["template_name"] to keep data
consistent: fetch the new template's name (via the repository/service used
elsewhere in this file) and assign setFields["template_name"] = fetchedName, or
set it to nil/empty if the template no longer exists. Ensure this logic lives
alongside the input.TemplateID handling so template_id and template_name remain
in sync.
- Around line 26-46: The UpdateDeadlineByID function currently dereferences
fields on the pointer parameter input (e.g., input.Name, input.Description)
without a nil check; add an early guard at the start of UpdateDeadlineByID to
verify input != nil and return a clear error (or wrapped domain error) if nil to
avoid panics, then proceed to build setFields only after the nil check; update
any tests/handlers that expect a specific error type/message accordingly.
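
The extraction suggested above might be sketched like this; the struct and field set are simplified stand-ins for the real UpdateDeadlineInput, and the map plays the role of bson.M to keep the example dependency-free:

```go
package main

import (
	"fmt"
	"time"
)

// UpdateDeadlineInput is a trimmed stand-in for the real input type.
type UpdateDeadlineInput struct {
	Name        *string
	Description *string
}

// buildDeadlineUpdateFields pulls the mapping and nil-guard logic out of
// UpdateDeadlineByID so the main function only handles tracing and the
// database update, reducing its cyclomatic complexity.
func buildDeadlineUpdateFields(input *UpdateDeadlineInput) (map[string]any, error) {
	if input == nil {
		return nil, fmt.Errorf("deadline input is nil")
	}
	set := map[string]any{}
	if input.Name != nil {
		set["name"] = *input.Name
	}
	if input.Description != nil {
		set["description"] = *input.Description
	}
	set["updated_at"] = time.Now()
	return set, nil
}

func main() {
	name := "Q3 report"
	fields, _ := buildDeadlineUpdateFields(&UpdateDeadlineInput{Name: &name})
	fmt.Println(fields["name"]) // Q3 report
}
```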

In `@pkg/constant/redis.go`:
- Around line 18-19: The IdempotencyTTL constant (IdempotencyTTL) is too short
(30s) and can allow duplicates or premature lock expiry for create endpoints;
change it to a much longer default (e.g., hours) or make it configurable and use
a per-endpoint TTL: replace the hardcoded IdempotencyTTL with a configurable
value (env/config-backed) or increase the constant to a safe default (e.g.,
24*time.Hour) and update any places that rely on IdempotencyTTL to accept an
injected TTL (or call a getter like GetIdempotencyTTL) so callers (especially
create handlers) can override shorter/longer values as needed.

In `@pkg/mongodb/deadline/deadline_fuzz_test.go`:
- Around line 138-140: Update the comment for FuzzNewDeadline_Color to
accurately describe validation: state that empty color is rejected and only
colors matching the hex regex (ValidColorRegex) accepted; mention that the fuzz
test verifies NewDeadline enforces the ValidColorRegex hex format rather than
accepting any non-empty string. Ensure the comment references NewDeadline and
ValidColorRegex so it's clear what behavior is being tested.

In `@pkg/mongodb/deadline/deadline.mongodb.go`:
- Around line 84-102: Child spans (e.g., the span started as spanFindOne via
tracer.Start(ctx, "repository.deadline.find_by_id_exec")) are only ended on
success paths, leaving traces open on early returns; immediately call defer
spanFindOne.End() right after creating each child span so it always closes
regardless of return path, and apply the same pattern for the other child spans
referenced in this file (the spans started around lines noted in the review: the
spans at the blocks that use tracer.Start, the variables named like spanFindOne,
spanFindAll, etc.); keep existing libOpentelemetry.HandleSpanError calls but
ensure the defer End is placed immediately after each tracer.Start to guarantee
closure on errors and normal exits.
- Around line 38-43: The constructor NewDeadlineMongoDBRepository currently
dereferences mc immediately and will panic if mc is nil; update the function to
validate the incoming mc parameter (and optionally mc.Database) at the start and
return a clear error if mc is nil before using mc to build the
DeadlineMongoDBRepository or calling r.connection.GetDB; make sure all
references in this function (NewDeadlineMongoDBRepository,
DeadlineMongoDBRepository, mc, r.connection.GetDB) are guarded by that nil-check
so no dereference occurs when mc is nil.
- Around line 158-160: Clamp and sanitize pagination inputs before creating the
FindOptions: ensure filters.Limit is at least 1 (replace or default
negative/zero Limit with a safe minimum) and ensure filters.Page is at least 1,
then compute skip as int64((page-1)*limit) and set opts :=
options.FindOptions{Limit: &limit, Skip: &skip}; update the variables used in
the existing code (limit, skip, opts) so negative skip and unbounded queries
cannot occur.
- Around line 171-199: The cursor opened by coll.Find (cur) is not guaranteed to
be closed on early returns (e.g., if cur.Decode fails); after obtaining cur (the
result of coll.Find in the function that uses DeadlineMongoDBModel, spanFind and
span) add a defensive defer to close the cursor (e.g., if cur != nil { defer
func(){ if err := cur.Close(ctx); err != nil {
libOpentelemetry.HandleSpanError(&span, "Failed to close cursor", err) } }() })
so the cursor is always closed even on decode/iteration errors; you can keep or
remove the existing explicit close at the end to avoid double-logging but ensure
the deferred close is present and uses libOpentelemetry.HandleSpanError to
report close errors.
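
The pagination-clamping comment above can be sketched as a small pure function; the maxLimit cap of 100 is an assumed bound, not a value from the PR:

```go
package main

import "fmt"

// clampPagination sanitizes limit/page before building Mongo FindOptions so
// skip can never go negative and queries stay bounded.
func clampPagination(limit, page int) (int64, int64) {
	const maxLimit = 100 // assumed cap
	if limit < 1 {
		limit = 1
	}
	if limit > maxLimit {
		limit = maxLimit
	}
	if page < 1 {
		page = 1
	}
	skip := int64((page - 1) * limit)
	return int64(limit), skip
}

func main() {
	l, s := clampPagination(-5, 0)
	fmt.Println(l, s) // 1 0
}
```

The returned values plug directly into options.FindOptions{Limit: &limit, Skip: &skip} as the comment describes.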

In `@pkg/mongodb/deadline/indexes.go`:
- Around line 48-55: The compound index that includes "_id" and "deleted_at" is
redundant because MongoDB already indexes "_id"; update the index definition
that builds Keys: bson.D { {Key: "_id", ...}, {Key: "deleted_at", ...} } to
remove the "_id" entry so it only indexes "deleted_at", and update the index
Options.SetName (currently "idx_deadline_id_deleted") to a name reflecting only
deleted_at (e.g., "idx_deadline_deleted_at") to avoid unnecessary write overhead
from the redundant _id inclusion.
- Around line 103-108: Remove treating "IndexOptionsConflict" as a successful
creation path: in the error handling inside the index-creation block that
references constant.MongoCollectionDeadline, change the if-condition so it only
treats "already exists" as benign (strings.Contains(err.Error(), "already
exists")). For "IndexOptionsConflict" detect it explicitly, log an error (with
context) and return the error instead of returning nil so the caller can surface
drift; keep the existing info-log path for the benign "already exists" case.

In `@pkg/net/http/http-utils.go`:
- Around line 112-129: The parseFilterParams function currently swallows invalid
values; change parseFilterParams to return an error (e.g., func
parseFilterParams(qh *QueryHeader, key, value string) error) and validate
inputs: for "active" reject anything not "true" or "false" with a clear error;
for date keys ("start_date","startDate","end_date","endDate" and any
"created_at" handling in the same file) use time.Parse and return its error
instead of discarding it, and propagate that error to the HTTP handler so
malformed filter/date query parameters produce a 400 response; update callers of
parseFilterParams to handle the returned error and convert it to a client 400.
- Around line 145-147: The metadata case currently overwrites qh.Metadata on
each match so only the last metadata.* param is kept; modify the handling in
that branch to initialize qh.Metadata if nil and then add the parsed key/value
into the existing bson.M instead of replacing it (e.g., if qh.Metadata == nil
set qh.Metadata = &bson.M{} then cast/dereference and assign (*qh.Metadata)[key]
= value), and still set qh.UseMetadata = true; update the code paths referencing
qh.Metadata accordingly.
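
Both http-utils suggestions (error-returning validation and accumulating metadata) can be combined in one sketch; the struct is a simplified stand-in for QueryHeader, and the "2006-01-02" date layout is an assumption, not the project's actual format:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// queryFilters is a trimmed stand-in for the real QueryHeader type.
type queryFilters struct {
	Active   *bool
	Metadata map[string]any
}

// parseFilterParam returns an error for malformed values (so the caller can
// answer 400) and accumulates metadata.* keys instead of overwriting them.
func parseFilterParam(qf *queryFilters, key, value string) error {
	switch {
	case key == "active":
		switch value {
		case "true", "false":
			b := value == "true"
			qf.Active = &b
		default:
			return fmt.Errorf("invalid value for active: %q", value)
		}
	case key == "start_date" || key == "end_date":
		if _, err := time.Parse("2006-01-02", value); err != nil {
			return fmt.Errorf("invalid %s: %w", key, err)
		}
	case strings.HasPrefix(key, "metadata."):
		if qf.Metadata == nil {
			qf.Metadata = map[string]any{}
		}
		qf.Metadata[strings.TrimPrefix(key, "metadata.")] = value
	}
	return nil
}

func main() {
	qf := &queryFilters{}
	_ = parseFilterParam(qf, "metadata.owner", "alice")
	_ = parseFilterParam(qf, "metadata.team", "reporting")
	err := parseFilterParam(qf, "active", "yes")
	fmt.Println(len(qf.Metadata), err != nil) // 2 true
}
```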

In `@tests/chaos/chaos-deadline-mongodb_test.go`:
- Around line 339-357: The test currently only logs results from the
high-latency POST and can falsely pass if latency injection failed; update the
Phase 3 block that uses reqCtx3Create, cancel3Create, cli.Request and
latencyPayload to assert expected failure: require that either err is non-nil
(indicating timeout) or the returned HTTP code signals an error (e.g., >=500 or
other project-specific failure code); if neither condition holds, call t.Fatalf
or t.Errorf to fail the test with a clear message including code and
latencyDuration so the test fails when latency injection was ineffective.
- Around line 122-124: After injecting the MongoDB toxic with
chaosutil.InjectConnectionLoss(mongoProxy), immediately register a t.Cleanup
that removes the toxic (e.g., call the corresponding chaosutil removal function
such as chaosutil.RemoveConnectionLoss/RemoveToxic for mongoProxy) so toxics are
always cleaned even if the test fails before Phase 4; place the t.Cleanup right
after the InjectConnectionLoss call (after require.NoError) and ensure the
cleanup handler logs or ignores removal errors safely. Apply the same pattern to
the other injection sites referenced (the blocks around the other occurrences).
- Line 535: The log prints failureCount but doesn't assert degradation; after
the existing t.Logf("Phase 3 (Verify Failure): %d/%d operations failed during
network partition", failureCount, totalOps) add an assertion that at least one
operation failed (e.g., check failureCount > 0) to fail the test when no
degradation was observed; reference the test's t, failureCount and totalOps
variables (or use your test assertion helper like require.Greater(t,
failureCount, 0, "...") if available).

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Around line 73-76: The test currently treats transport errors as acceptable by
logging them and returning (the if err != nil { t.Logf(...); return } branches),
which can mask real failures; update those branches to fail the test instead
(e.g., use t.Fatalf or t.Fatalf-style failure) so any non-nil err in the request
path causes the test to fail; apply this change to the three occurrences that
log-and-return (the err check blocks around t.Logf in
fuzz-deadline-payload_test.go, including the similar blocks at the other two
locations mentioned).
- Around line 66-70: The test currently skips inputs where
json.Unmarshal([]byte(payloadJSON), &payload) fails, so malformed JSON branches
never exercise HTTP request handling; instead, when json.Unmarshal errors, send
the HTTP request using the original raw payloadJSON as the request body (i.e.,
treat payloadJSON as the raw bytes to post) rather than returning; update the
code around payloadJSON/json.Unmarshal/payload to fall through to the
request-sending logic on unmarshal error (and apply the same change to the other
occurrences at the commented regions corresponding to lines 132-134 and 186-188)
so the server’s handling of invalid JSON (400s, etc.) is exercised.

In `@tests/integration/deadlines-integration_test.go`:
- Around line 124-126: The assignment to listPath uses fmt.Sprintf with no
format verbs (listPath := fmt.Sprintf("/v1/deadlines?limit=50&page=1")), which
is unnecessary; replace the fmt.Sprintf call with the raw string literal
"/v1/deadlines?limit=50&page=1" in the same variable assignment (look for
listPath in the test function in deadlines-integration_test.go) to simplify the
code and avoid the needless formatting call.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a72942b7-77be-4355-b424-91bf70022476

📥 Commits

Reviewing files that changed from the base of the PR and between 9a10a9c and 174d5bd.

📒 Files selected for processing (29)
  • .trivyignore
  • components/manager/internal/adapters/http/in/deadline.go
  • components/manager/internal/adapters/http/in/deadline_test.go
  • components/manager/internal/adapters/http/in/routes.go
  • components/manager/internal/bootstrap/config.go
  • components/manager/internal/bootstrap/init_helpers.go
  • components/manager/internal/services/create-deadline.go
  • components/manager/internal/services/create-report_test.go
  • components/manager/internal/services/create-template_test.go
  • components/manager/internal/services/deadline-service_test.go
  • components/manager/internal/services/delete-deadline-by-id.go
  • components/manager/internal/services/deliver-deadline.go
  • components/manager/internal/services/get-all-deadlines.go
  • components/manager/internal/services/service.go
  • components/manager/internal/services/update-deadline-by-id.go
  • pkg/constant/mongo.go
  • pkg/constant/redis.go
  • pkg/mongodb/deadline/deadline.go
  • pkg/mongodb/deadline/deadline.mock.go
  • pkg/mongodb/deadline/deadline.mongodb.go
  • pkg/mongodb/deadline/deadline_fuzz_test.go
  • pkg/mongodb/deadline/indexes.go
  • pkg/net/http/http-utils.go
  • pkg/net/http/with-body.go
  • pkg/net/http/with-body_test.go
  • tests/chaos/chaos-deadline-mongodb_test.go
  • tests/fuzzy/fuzz-deadline-payload_test.go
  • tests/integration/deadlines-integration_test.go
  • tests/property/property-deadline_test.go


github-actions bot commented Mar 5, 2026

🔒 Security Scan Results — manager

Filesystem Scan

✅ No vulnerabilities or secrets found.

Docker Image Scan

✅ No vulnerabilities found.

All security checks passed.

@lerian-studio
Contributor

This PR is very large (29 files, 6039 lines changed). Consider breaking it into smaller PRs for easier review.

@lerian-studio
Contributor

Consider updating CHANGELOG.md to document this change. If this change doesn't need a changelog entry, add the skip-changelog label.


lerian-studio commented Mar 5, 2026

📊 Unit Test Coverage Report: reporter-manager

Metric Value
Overall Coverage 88.1% ✅ PASS
Threshold 85%

Coverage by Package

Package Coverage
github.com/LerianStudio/reporter/components/manager/internal/adapters/http/in 85.6%
github.com/LerianStudio/reporter/components/manager/internal/services 90.7%

Generated by Go PR Analysis workflow


lerian-studio commented Mar 5, 2026

📊 Unit Test Coverage Report: reporter-worker

Metric Value
Overall Coverage 90.9% ✅ PASS
Threshold 85%

Coverage by Package

Package Coverage
github.com/LerianStudio/reporter/components/worker/internal/services 93.0%

Generated by Go PR Analysis workflow

arthurkz added 5 commits March 6, 2026 17:45
…ate route

Fix validation errors returning wrong HTTP status (404 instead of 400), add multi-tenant middleware to deadline routes, and change update endpoint from PUT to PATCH for consistency with partial update semantics.

X-Lerian-Ref: 0x1
…lity

Enforce dayOfMonth required for monthly/semiannual/annual, monthsOfYear required for semiannual/annual. Reject incompatible fields (e.g., dayOfMonth with daily frequency). Add specific error codes TPL-0049 through TPL-0052.

X-Lerian-Ref: 0x1
Implement lazy recurrence advancement in ToEntity(): when a recurring deadline is delivered and dueDate has passed, automatically advance to next occurrence and reset deliveredAt. Add status query parameter filter (pending/overdue/delivered) translating computed status into MongoDB conditions.

X-Lerian-Ref: 0x1
…index drift handling

Move description, status, and created_at parsing to parseFilterParams to bring ValidateParameters below gocyclo threshold. Add recreateIndexes helper for handling IndexOptionsConflict by dropping and recreating conflicting indexes.

X-Lerian-Ref: 0x1
…os tests

Add t.Cleanup to remove toxics on early failure, assert degradation during network partition, and assert latency observed during high latency injection.

X-Lerian-Ref: 0x1
@lerian-studio
Contributor

Consider updating CHANGELOG.md to document this change. If this change doesn't need a changelog entry, add the skip-changelog label.

@lerian-studio
Contributor

This PR is very large (32 files, 6846 lines changed). Consider breaking it into smaller PRs for easier review.


github-actions bot commented Mar 6, 2026

🔒 Security Scan Results — worker

Filesystem Scan

✅ No vulnerabilities or secrets found.

Docker Image Scan

✅ No vulnerabilities found.

All security checks passed.

coderabbitai bot left a comment

Actionable comments posted: 9

♻️ Duplicate comments (8)
tests/fuzzy/fuzz-deadline-payload_test.go (2)

72-76: ⚠️ Potential issue | 🟠 Major

Still unresolved: transport errors are treated as a passing fuzz case.

The Logf/return branches let these tests go green when the manager is unreachable or the client times out. Make this path fatal so the harness cannot silently hide endpoint failures.

Also applies to: 136-140, 190-194

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 72 - 76, The test
currently treats transport/client errors from cli.Request as acceptable by
calling t.Logf and returning, which hides endpoint failures; change those
branches in fuzz-deadline-payload_test.go (the places calling cli.Request and
then t.Logf("Request error (acceptable): %v", err) followed by return) to fail
the test instead (use t.Fatalf or t.Fatal with the error) so transport errors
are fatal; update all similar occurrences (the other blocks around lines noted
in the comment) to use the same failing behavior rather than silent log+return.

66-70: ⚠️ Potential issue | 🟠 Major

Still unresolved: these fuzzers never send raw parser edge cases.

Malformed JSON still returns before any request, and successful json.Unmarshal calls canonicalize the payload before dispatch. Line 166's duplicate-key seed, for example, is collapsed to a single delivered value, so the HTTP JSON parser is not actually being fuzzed. Use the original payloadJSON as the request body, or add a helper that accepts raw bytes.

Also applies to: 131-134, 166-166, 185-188

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 66 - 70, The fuzzer
currently json.Unmarshal's payloadJSON into variable payload (using
json.Unmarshal) which both skips malformed-JSON inputs and canonicalizes
duplicate keys, so the HTTP JSON parser is never exercised; change the test
helper that builds/sends the HTTP request (references to payloadJSON and the
json.Unmarshal call) to use the original payloadJSON bytes as the request body
(or add a helper that accepts raw []byte) instead of the unmarshaled payload,
ensuring malformed JSON and duplicate-key edge cases reach the HTTP JSON parser.
tests/chaos/chaos-deadline-mongodb_test.go (1)

360-372: ⚠️ Potential issue | 🟠 Major

Assert degraded behavior for the POST latency phase.

This phase only logs. A fast/successful POST /v1/deadlines makes the test pass even when the create path shows no observable latency impact.

🛠️ Example fix
+startCreate := time.Now()
 code, _, err = cli.Request(reqCtx3Create, "POST", "/v1/deadlines", headers, latencyPayload)
+createDuration := time.Since(startCreate)
 if err != nil {
 	t.Logf("Phase 3 (Verify Failure): POST /v1/deadlines timed out or failed: %v", err)
 } else {
-	t.Logf("Phase 3 (Verify Failure): POST /v1/deadlines returned code=%d under high latency", code)
+	t.Logf("Phase 3 (Verify Failure): POST /v1/deadlines returned code=%d in %v under high latency", code, createDuration)
 }
+
+require.True(t, err != nil || code >= 500 || createDuration > 4*time.Second,
+	"POST /v1/deadlines showed no degradation under injected latency (code=%d, duration=%v)", code, createDuration)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/chaos/chaos-deadline-mongodb_test.go` around lines 360 - 372, Phase 3
currently only logs results; change it to assert degraded behavior by verifying
the POST under latency either fails or is measurably slower: use
createDeadlineChaosPayload as latencyPayload, call cli.Request with
reqCtx3Create but record the request duration and assert that either err != nil
(or the response code is not 2xx) OR the elapsed time exceeds an explicit
threshold (e.g., >10s) to fail the test on unexpected fast/successful responses;
update the test assertions around cli.Request and reqCtx3Create to enforce this
degraded/timeout expectation instead of just logging.
components/manager/internal/services/create-deadline.go (1)

22-32: ⚠️ Potential issue | 🔴 Critical

Guard input before recording span attributes.

Line 30 dereferences input before any nil check. A nil payload will panic the service instead of returning a validation error.

🛠️ Example fix
 func (uc *UseCase) CreateDeadline(ctx context.Context, input *deadline.CreateDeadlineInput) (*deadline.Deadline, error) {
 	logger, tracer, reqId, _ := commons.NewTrackingFromContext(ctx)

 	ctx, span := tracer.Start(ctx, "service.deadline.create")
 	defer span.End()
+
+	if input == nil {
+		err := pkg.ValidateBusinessError(constant.ErrMissingRequiredFields, constant.MongoCollectionDeadline)
+		libOpentelemetry.HandleSpanBusinessErrorEvent(&span, "Nil create deadline input", err)
+		return nil, err
+	}

 	span.SetAttributes(
 		attribute.String("app.request.request_id", reqId),
 		attribute.String("app.request.deadline_name", input.Name),
 		attribute.String("app.request.deadline_type", input.Type),
 	)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manager/internal/services/create-deadline.go` around lines 22 -
32, The code in UseCase.CreateDeadline calls span.SetAttributes with fields from
input before checking for nil, which can panic; update CreateDeadline to
validate input (e.g., check if input == nil) immediately after obtaining
ctx/tracer/reqId from commons.NewTrackingFromContext and before calling
span.SetAttributes, returning a validation error if nil, or alternatively guard
each attribute extraction (from input.Name and input.Type) with a nil check so
span.SetAttributes never dereferences a nil input.
pkg/mongodb/deadline/deadline.mongodb.go (2)

72-78: ⚠️ Potential issue | 🟠 Major

Guard mc before dereferencing it.

mc.Database is read unconditionally here, so NewDeadlineMongoDBRepository(nil) still panics during bootstrap instead of returning a startup error.

Suggested fix
 func NewDeadlineMongoDBRepository(mc *libMongo.MongoConnection) (*DeadlineMongoDBRepository, error) {
+	if mc == nil {
+		return nil, fmt.Errorf("mongodb connection is nil")
+	}
+
 	r := &DeadlineMongoDBRepository{
 		connection: mc,
 		Database:   mc.Database,
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mongodb/deadline/deadline.mongodb.go` around lines 72 - 78,
NewDeadlineMongoDBRepository currently dereferences mc.Database unguarded; add a
nil-check at the start of NewDeadlineMongoDBRepository to return a clear error
when mc is nil, then only access mc.Database and assign connection after
verifying mc != nil and after confirming r.connection.GetDB(ctx) succeeds (i.e.,
reference mc, DeadlineMongoDBRepository, connection.GetDB and Database to locate
the code), ensuring the function returns an error instead of panicking when
NewDeadlineMongoDBRepository(nil) is called.

194-196: ⚠️ Potential issue | 🟠 Major

Clamp Page and Limit before computing skip.

Page <= 0 yields a negative skip, and Limit <= 0 turns this into an unbounded read. The HTTP handler validates query params, but this repository is still callable from other code paths.

Suggested fix
-	limit := int64(filters.Limit)
-	skip := int64(filters.Page*filters.Limit - filters.Limit)
-	opts := options.FindOptions{Limit: &limit, Skip: &skip}
+	page := filters.Page
+	if page < 1 {
+		page = 1
+	}
+
+	limitValue := filters.Limit
+	if limitValue < 1 {
+		limitValue = 10
+	}
+
+	limit := int64(limitValue)
+	skip := int64((page - 1) * limitValue)
+	opts := options.FindOptions{Limit: &limit, Skip: &skip}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mongodb/deadline/deadline.mongodb.go` around lines 194 - 196, Clamp
filters.Page and filters.Limit to safe minimums before computing skip: ensure
limit = max(filters.Limit, 1) and page = max(filters.Page, 1), then compute skip
as int64((page-1)*limit) and set opts := options.FindOptions{Limit: &limit,
Skip: &skip}. Update the code around the limit/skip calculation (references:
filters.Limit, filters.Page, the local variables limit and skip, and
options.FindOptions) so negative or zero values cannot produce a negative skip
or an unbounded find.
pkg/mongodb/deadline/indexes.go (1)

118-133: ⚠️ Potential issue | 🟠 Major

Handle IndexOptionsConflict before the generic already exists branch.

These predicates overlap. When the conflict error string also contains already exists, the current order returns nil and leaves the drifted definition in place, so recreateIndexes never runs.

Suggested fix
 	indexNames, err := coll.Indexes().CreateMany(ctx, indexes)
 	if err != nil {
-		if strings.Contains(err.Error(), "already exists") {
-			logger.Infof("Indexes for %s already exist (detected during creation)", constant.MongoCollectionDeadline)
-			return nil
-		}
-
 		if strings.Contains(err.Error(), "IndexOptionsConflict") {
 			logger.Infof("Index definition drift detected for %s, dropping conflicting indexes and recreating", constant.MongoCollectionDeadline)
 
 			if errRecreate := recreateIndexes(ctx, coll, indexes, logger); errRecreate != nil {
 				libOpentelemetry.HandleSpanError(&span, "Failed to recreate indexes after conflict", errRecreate)
 				return errRecreate
 			}
 
 			return nil
 		}
+
+		if strings.Contains(err.Error(), "already exists") {
+			logger.Infof("Indexes for %s already exist (detected during creation)", constant.MongoCollectionDeadline)
+			return nil
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mongodb/deadline/indexes.go` around lines 118 - 133, The error handling
for CreateMany on coll.Indexes() incorrectly checks "already exists" before
"IndexOptionsConflict", causing IndexOptionsConflict cases that also contain
"already exists" to short-circuit and skip recreateIndexes; update the error
branch order so the strings.Contains(err.Error(), "IndexOptionsConflict") check
runs before the "already exists" check, ensuring the logger message about drift
and the call to recreateIndexes(ctx, coll, indexes, logger) (and its error
handling via libOpentelemetry.HandleSpanError(&span, ...)) execute when
appropriate; locate the logic around coll.Indexes().CreateMany, the err
variable, and the recreateIndexes call to make this change.
components/manager/internal/services/update-deadline-by-id.go (1)

94-110: ⚠️ Potential issue | 🔴 Critical

Reject a nil update payload before building BSON fields.

buildDeadlineUpdateFields dereferences input immediately, so UpdateDeadlineByID(ctx, id, nil) still panics. Add an early bad-request guard before calling the helper; the table tests in components/manager/internal/services/deadline-service_test.go still do not lock this path in.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manager/internal/services/update-deadline-by-id.go` around lines
94 - 110, UpdateDeadlineByID must reject a nil input before calling
buildDeadlineUpdateFields because that helper dereferences input and causes a
panic; add an early nil-check at the start of UpdateDeadlineByID (before calling
buildDeadlineUpdateFields) that logs the bad request, calls
libOpentelemetry.HandleSpanBusinessErrorEvent on the span with a descriptive
message like "Empty deadline update payload", and returns an appropriate
bad-request error (consistent with the service's error types) so nil inputs are
handled safely; reference UpdateDeadlineByID, buildDeadlineUpdateFields, and
libOpentelemetry.HandleSpanBusinessErrorEvent when implementing the guard.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@components/manager/internal/services/create-deadline.go`:
- Around line 39-45: The constructor error from deadline.NewDeadline may be
wrapped (e.g., wrapping deadline.ErrMissingRequiredFields) so pass a normalized
sentinel to pkg.ValidateBusinessError; update the error handling to detect
wrapped sentinel errors using errors.Is(err, deadline.ErrMissingRequiredFields)
and replace err with the sentinel (deadline.ErrMissingRequiredFields) before
calling pkg.ValidateBusinessError and
libOpentelemetry.HandleSpanBusinessErrorEvent so ValidateBusinessError mapping
returns the correct 4xx business error while preserving the span event flow.

In `@components/manager/internal/services/update-deadline-by-id.go`:
- Around line 57-70: The current code only validates when input.Frequency is
provided and only builds a $set, which can leave incompatible recurrence fields
in the DB; modify the update flow in update-deadline-by-id.go to first load the
existing deadline (e.g., call the repository method that returns the current
Deadline), merge its schedule with the incoming input (use input.Frequency,
input.DayOfMonth, input.MonthsOfYear to compute the post-update schedule), call
deadline.ValidateScheduleFields with the merged values (instead of only when
input.Frequency != nil) and return pkg.ValidateBusinessError on error, then
construct the update document to include both $set for fields that remain valid
and $unset for recurrence fields that no longer apply (clear day_of_month or
months_of_year when frequency changes) so the DB never retains incompatible
fields after update.

In `@pkg/mongodb/deadline/deadline.go`:
- Around line 84-103: ValidateScheduleFields currently only enforces
applicability/presence but must also validate numeric ranges: in
ValidateScheduleFields add a check after the applicability checks that if
dayOfMonth != nil it is between 1 and 31 (return a descriptive error, e.g.,
constant.ErrDayOfMonthOutOfRange or add one), and if len(monthsOfYear) > 0
iterate monthsOfYear and ensure each month is between 1 and 12 (return
constant.ErrMonthsOfYearOutOfRange or add one); keep these checks regardless of
requireMissing and use the same function/method name (ValidateScheduleFields)
and the same frequency maps (FrequenciesWithDayOfMonth,
FrequenciesWithMonthsOfYear) so invalid numeric values like 0 or 13 are rejected
instead of being silently normalized by time.Date.
- Around line 217-285: The code drops the original clock time because
clampDayToMonth always returns midnight, so update NextDueDate and
clampDayToMonth to preserve the original time-of-day and nanoseconds: change
clampDayToMonth signature to accept the original dueDate (or explicitly hour,
min, sec, nsec) and when constructing the result use time.Date(year, month, day,
dueDate.Hour(), dueDate.Minute(), dueDate.Second(), dueDate.Nanosecond(), loc).
Update all calls in NextDueDate (monthly branch and the semiannual/annual
branches that call clampDayToMonth) to pass the original dueDate (or its time
components) so the new due retains the same clock time and location while still
clamping the day to the month end.

In `@pkg/net/http/http-utils.go`:
- Around line 183-186: The code silently ignores uuid.Parse errors for the
"template_id" query (case key == "template_id") which broadens the query; change
it to validate and reject malformed UUIDs the same way other params do by
returning a 400 error. Update the handling in the switch branch that sets
qh.TemplateID so that if uuid.Parse(value) returns an error you propagate/return
a validation error (or set the same parse error variable and follow the existing
400-response path) instead of ignoring it; keep successful parses assigning
qh.TemplateID = parsedID. Ensure you use the same error message/response
mechanism used for other invalid query params so behavior is consistent.

In `@tests/chaos/chaos-deadline-mongodb_test.go`:
- Around line 198-206: The current health-check block (using reqCtxHealth,
cancelHealth and cli.Request) only logs failures via t.Log/t.Logf; change it to
assert and fail the test when /health is not 200 or returns an error (e.g., use
t.Fatalf or an equivalent failing assertion) so any regression during the fault window
causes the test to fail; apply the same change to the analogous blocks at the
other locations (around the latency and network-partition phases referenced by
the reviewer) so those phases also fail on non-200 or error responses instead of
merely logging.

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Around line 18-20: The comment on FuzzDeadline_CreatePayload is incorrect
because the test only asserts no 5xx/panics — update the test to either (A) add
explicit 4xx assertions for the fixed invalid corpus by identifying those seed
inputs and asserting responses are in the 4xx range, or (B) if you don't want to
add those checks, change the comment to only promise "no 5xx/panics" to match
the current implementation; modify FuzzDeadline_CreatePayload and its seed
handling accordingly so the comment and behavior are consistent.

In `@tests/integration/deadlines-integration_test.go`:
- Around line 224-240: The test only checks non-empty results so update it to
verify the filter actually works: have createDeadlineViaAPI return the created
deadline's unique identifier (or ensure the payload contains a unique field you
can assert), then after calling cli.Request for "/v1/deadlines?status=pending"
unmarshal into deadlineListResponse and (1) assert that every item in
filtered.Items has Status == "pending", and (2) assert that one of
filtered.Items matches the created deadline's ID or unique field. Use
createDeadlineViaAPI, cli.Request, deadlineListResponse and filtered.Items to
locate and implement these checks.

---

Duplicate comments:
In `@components/manager/internal/services/create-deadline.go`:
- Around line 22-32: The code in UseCase.CreateDeadline calls span.SetAttributes
with fields from input before checking for nil, which can panic; update
CreateDeadline to validate input (e.g., check if input == nil) immediately after
obtaining ctx/tracer/reqId from commons.NewTrackingFromContext and before
calling span.SetAttributes, returning a validation error if nil, or
alternatively guard each attribute extraction (from input.Name and input.Type)
with a nil check so span.SetAttributes never dereferences a nil input.

In `@components/manager/internal/services/update-deadline-by-id.go`:
- Around line 94-110: UpdateDeadlineByID must reject a nil input before calling
buildDeadlineUpdateFields because that helper dereferences input and causes a
panic; add an early nil-check at the start of UpdateDeadlineByID (before calling
buildDeadlineUpdateFields) that logs the bad request, calls
libOpentelemetry.HandleSpanBusinessErrorEvent on the span with a descriptive
message like "Empty deadline update payload", and returns an appropriate
bad-request error (consistent with the service's error types) so nil inputs are
handled safely; reference UpdateDeadlineByID, buildDeadlineUpdateFields, and
libOpentelemetry.HandleSpanBusinessErrorEvent when implementing the guard.

In `@pkg/mongodb/deadline/deadline.mongodb.go`:
- Around line 72-78: NewDeadlineMongoDBRepository currently dereferences mc.Database
unguarded; add a nil-check at the start of NewDeadlineMongoDBRepository to
return a clear error when mc is nil, then only access mc.Database and assign
connection after verifying mc != nil and after confirming
r.connection.GetDB(ctx) succeeds (i.e., reference mc, DeadlineMongoDBRepository,
connection.GetDB and Database to locate the code), ensuring the function returns
an error instead of panicking when NewDeadlineMongoDBRepository(nil) is called.
- Around line 194-196: Clamp filters.Page and filters.Limit to safe minimums
before computing skip: ensure limit = max(filters.Limit, 1) and page =
max(filters.Page, 1), then compute skip as int64((page-1)*limit) and set opts :=
options.FindOptions{Limit: &limit, Skip: &skip}. Update the code around the
limit/skip calculation (references: filters.Limit, filters.Page, the local
variables limit and skip, and options.FindOptions) so negative or zero values
cannot produce a negative skip or an unbounded find.

In `@pkg/mongodb/deadline/indexes.go`:
- Around line 118-133: The error handling for CreateMany on coll.Indexes()
incorrectly checks "already exists" before "IndexOptionsConflict", causing
IndexOptionsConflict cases that also contain "already exists" to short-circuit
and skip recreateIndexes; update the error branch order so the
strings.Contains(err.Error(), "IndexOptionsConflict") check runs before the
"already exists" check, ensuring the logger message about drift and the call to
recreateIndexes(ctx, coll, indexes, logger) (and its error handling via
libOpentelemetry.HandleSpanError(&span, ...)) execute when appropriate; locate
the logic around coll.Indexes().CreateMany, the err variable, and the
recreateIndexes call to make this change.

In `@tests/chaos/chaos-deadline-mongodb_test.go`:
- Around line 360-372: Phase 3 currently only logs results; change it to assert
degraded behavior by verifying the POST under latency either fails or is
measurably slower: use createDeadlineChaosPayload as latencyPayload, call
cli.Request with reqCtx3Create but record the request duration and assert that
either err != nil (or the response code is not 2xx) OR the elapsed time exceeds
an explicit threshold (e.g., >10s) to fail the test on unexpected
fast/successful responses; update the test assertions around cli.Request and
reqCtx3Create to enforce this degraded/timeout expectation instead of just
logging.

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Around line 72-76: The test currently treats transport/client errors from
cli.Request as acceptable by calling t.Logf and returning, which hides endpoint
failures; change those branches in fuzz-deadline-payload_test.go (the places
calling cli.Request and then t.Logf("Request error (acceptable): %v", err)
followed by return) to fail the test instead (use t.Fatalf or t.Fatal with the
error) so transport errors are fatal; update all similar occurrences (the other
blocks around lines noted in the comment) to use the same failing behavior
rather than silent log+return.
- Around line 66-70: The fuzzer currently json.Unmarshal's payloadJSON into
variable payload (using json.Unmarshal) which both skips malformed-JSON inputs
and canonicalizes duplicate keys, so the HTTP JSON parser is never exercised;
change the test helper that builds/sends the HTTP request (references to
payloadJSON and the json.Unmarshal call) to use the original payloadJSON bytes
as the request body (or add a helper that accepts raw []byte) instead of the
unmarshaled payload, ensuring malformed JSON and duplicate-key edge cases reach
the HTTP JSON parser.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: c0693690-ac10-4284-83b8-018d027eaa01

📥 Commits

Reviewing files that changed from the base of the PR and between 174d5bd and 60a7b0f.

📒 Files selected for processing (16)
  • components/manager/internal/adapters/http/in/deadline.go
  • components/manager/internal/adapters/http/in/deadline_test.go
  • components/manager/internal/adapters/http/in/report_test.go
  • components/manager/internal/adapters/http/in/routes.go
  • components/manager/internal/services/create-deadline.go
  • components/manager/internal/services/deadline-service_test.go
  • components/manager/internal/services/update-deadline-by-id.go
  • pkg/constant/errors.go
  • pkg/errors.go
  • pkg/mongodb/deadline/deadline.go
  • pkg/mongodb/deadline/deadline.mongodb.go
  • pkg/mongodb/deadline/indexes.go
  • pkg/net/http/http-utils.go
  • tests/chaos/chaos-deadline-mongodb_test.go
  • tests/fuzzy/fuzz-deadline-payload_test.go
  • tests/integration/deadlines-integration_test.go

arthurkz added 7 commits March 9, 2026 08:54
…logic

Add dayOfMonth (1-31) and monthsOfYear (1-12) range checks in ValidateScheduleFields. Introduce merge-based schedule validation on update that fetches current state before validating, with automatic cleanup of orphaned recurrence fields. Fix ValidateBusinessError to use errors.Is() iteration instead of direct map lookup so wrapped sentinel errors are correctly mapped to business responses.

X-Lerian-Ref: 0x1
…orward

clampDayToMonth now receives the original dueDate as reference and preserves hour, minute, second, and nanosecond instead of resetting to midnight. Prevents nearly a day of drift on each recurrence advancement.

X-Lerian-Ref: 0x1
Return ErrInvalidTemplateID instead of silently ignoring invalid UUIDs in the template_id query parameter, matching the validation behavior of all other query params.

X-Lerian-Ref: 0x1
recreateIndexes now tries CreateOne per index individually and only drops the specific index that conflicted. Non-conflicting indexes, including the unique constraint, remain in place throughout, avoiding windows where concurrent writes could bypass uniqueness protection.

X-Lerian-Ref: 0x1
Chaos tests now fail on /health regression during fault window instead of only logging. Fuzz test comment corrected and new TestDeadline_CreatePayload_InvalidSeeds added to assert 4xx for known-invalid payloads. Integration status filter test now verifies all returned items match the requested status and that the created deadline is present. Update trivy ignore list.

X-Lerian-Ref: 0x1
Add ErrDueDateInPast validation in NewDeadline and buildDeadlineUpdateFields. Deadlines with a dueDate before time.Now() are rejected with a 400 response, preventing creation of already-overdue deadlines.

X-Lerian-Ref: 0x1
@arthurkz arthurkz requested a review from a team as a code owner March 9, 2026 13:02
@lerian-studio
Copy link
Contributor

Consider updating CHANGELOG.md to document this change. If this change doesn't need a changelog entry, add the skip-changelog label.

@lerian-studio
Copy link
Contributor

This PR is very large (34 files, 7228 lines changed). Consider breaking it into smaller PRs for easier review.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

♻️ Duplicate comments (3)
.trivyignore (1)

12-12: ⚠️ Potential issue | 🟠 Major

Previous concern about CVE-2026-25882 remains unaddressed.

This CVE is still suppressed without documentation. As flagged in the previous review, please add a comment above this entry documenting:

  • Affected package and version
  • Justification for suppression
  • Compensating controls (if any)
  • Removal deadline or tracking ticket

Without this context, security scan signal is degraded and future maintainers cannot assess whether the suppression is still valid.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.trivyignore at line 12, The CVE suppression entry "CVE-2026-25882" in
.trivyignore needs an explanatory comment immediately above it: add the affected
package name and version, a concise justification for why the CVE is being
suppressed, any compensating controls or mitigation steps in place, and a
removal deadline or tracking ticket reference (e.g., "remove by YYYY-MM-DD or
ticket `#1234`"). Edit the .trivyignore file to insert these four pieces of
metadata as plain-text comments directly above the "CVE-2026-25882" line so
future reviewers can assess and track the suppression.
tests/fuzzy/fuzz-deadline-payload_test.go (2)

74-78: ⚠️ Potential issue | 🟠 Major

Fail fast on transport errors.

Logging and returning here lets the fuzz targets pass when the manager is down or the connection drops mid-run. A non-nil request error should fail the test so the harness does not hide infrastructure regressions.

Also applies to: 138-142, 192-196

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 74 - 78, The test
currently treats transport errors from cli.Request as acceptable by logging and
returning (using t.Logf and return), which masks infrastructure failures; update
each occurrence of the cli.Request error handling in
tests/fuzzy/fuzz-deadline-payload_test.go (including the blocks around
cli.Request at the shown lines and the similar blocks at lines 138-142 and
192-196) to fail the test immediately on a non-nil error (use t.Fatalf / t.Fatal
with the error) instead of t.Logf and return so transport/connection errors
cause test failures.

68-72: ⚠️ Potential issue | 🟡 Minor

Exercise malformed JSON instead of returning early.

These branches still skip the request on json.Unmarshal errors, so the handlers never see invalid bodies. Post the raw payloadJSON in those cases so parser-hardening paths are actually fuzzed.

Also applies to: 133-135, 187-189

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 68 - 72, When
json.Unmarshal of payloadJSON fails, don’t return early; instead treat the raw
payloadJSON as the request body so handlers receive malformed JSON. In the fuzz
test(s) replace the early return in the json.Unmarshal error branch with logic
that posts or passes []byte(payloadJSON) (the raw input) to the same
handler/path used for successfully parsed payloads (instead of using the parsed
payload variable), ensuring code paths that harden JSON parsing are exercised;
apply the same change for the other occurrences that use json.Unmarshal in this
test file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.trivyignore:
- Around line 13-14: The .trivyignore currently lists CVE-2026-22184 and
CVE-2026-27171 without justification; add a comment block immediately above each
CVE line that documents the affected package and version, the specific reason
for suppression, any compensating controls, and a removal deadline or tracking
ticket (e.g., "affected: package@version", "reason: ...", "controls: ...",
"removal-by: YYYY-MM-DD" or "ticket: `#1234`"); ensure the comment blocks are
plain-line comments compatible with .trivyignore and appear directly above the
corresponding CVE entries (refer to CVE-2026-22184 and CVE-2026-27171 in the
diff).

In `@components/manager/docker-compose.yml`:
- Around line 48-50: The plugin-auth-network block redundantly sets name:
plugin-auth-network and lacks a descriptive comment; remove the explicit name
field from the plugin-auth-network network to match other external networks and
add a short comment (similar to the infra-network comment) explaining its
purpose (e.g., "external network for plugin auth services") so the style and
documentation are consistent with infra-network and other network entries.

In `@components/manager/internal/services/deadline-service_test.go`:
- Around line 1529-1540: The test case "daily - advances to next day" uses
time.Now() which can be flaky; replace all time.Now() calls in that test entry
with a fixed reference time (e.g., a literal time.Date used by other tests) and
compute DueDate, now, expectedDueDate relative to that fixed base so
truncation/addition behavior is deterministic—update the Deadline literal fields
(DueDate, DeliveredAt) and the test's
now/expectedDueDate/expectedDelivered/expectedStatus values accordingly to use
the fixed time reference.

In `@pkg/errors.go`:
- Around line 605-609: The current lookup loops over errorMap using
errors.Is(err, sentinel) which is O(n) per call; change to a two-phase approach
in the same function: first attempt a direct map lookup using errorMap[err] (or
a type/key-based direct check) and return immediately if found, then only if
that fails fall back to iterating errorMap with errors.Is to match wrapped
errors; alternatively, order errorMap iteration so the most frequent sentinels
are checked first or add a small cached fast-path for the hot validation errors
to avoid repeated full scans (refer to symbols errorMap, errors.Is, err,
mappedError).

In `@pkg/mongodb/deadline/deadline.go`:
- Around line 27-33: The Repository interface currently leaks an HTTP transport
type by using http.QueryHeader in FindList and Count; replace that by adding a
domain-level filter/input type (e.g., type DeadlineFilters struct { /* fields:
status, dueBefore, dueAfter, limit, offset, etc. */ }) in this package and
change Repository.FindList(ctx context.Context, filters http.QueryHeader) and
Repository.Count(ctx context.Context, filters http.QueryHeader) to use
FindList(ctx context.Context, filters DeadlineFilters) and Count(ctx
context.Context, filters DeadlineFilters). Update the HTTP adapter/handler to
map incoming pkg/net/http.QueryHeader (the Swagger DTO) into DeadlineFilters
before calling the repository, and adjust any callers/tests (also around the
other occurrences noted) to construct and pass DeadlineFilters instead of
QueryHeader.
- Around line 95-115: Enforce that monthsOfYear contains the correct number of
distinct months for the given frequency: inside the existing monthsOfYear
validation block (referencing monthsOfYear and frequency and
FrequenciesWithMonthsOfYear), check uniqueness (e.g., build a set) and then
enforce that for frequency "annual" len(uniqueMonths)==1 and for "semiannual"
len(uniqueMonths)==2; if the counts or duplicates violate the rule return a
clear error (add or reuse constants, e.g., constant.ErrMonthsOfYearCountInvalid
or ErrMonthsOfYearDuplicate) so invalid schedules are rejected before
persisting.

In `@pkg/net/http/http-utils.go`:
- Around line 138-151: The switch that parses "start_date"/"startDate" and
"end_date"/"endDate" can let camelCase overwrite snake_case; before that switch
(or inside normalizeParams), canonicalize keys so snake_case wins: for each pair
check if params contains "start_date" and "startDate" (and "end_date"/"endDate")
and ensure the value used for parsing (used to set qh.StartDate and qh.EndDate)
is taken from the snake_case key if present (or delete the camelCase entry),
then proceed to time.Parse as currently implemented; reference normalizeParams,
the switch clause handling "start_date"/"startDate" and "end_date"/"endDate",
and qh.StartDate/qh.EndDate.
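
The canonicalization step could look like this minimal sketch (`canonicalize` is a hypothetical helper, not an existing symbol):

```go
package main

import "fmt"

// canonicalize prefers the snake_case key when both spellings are
// present, so "startDate" can never overwrite "start_date".
func canonicalize(params map[string]string, snake, camel string) string {
	if v, ok := params[snake]; ok {
		delete(params, camel) // drop the losing spelling entirely
		return v
	}
	return params[camel]
}

func main() {
	params := map[string]string{
		"start_date": "2026-01-01",
		"startDate":  "2099-12-31", // conflicting camelCase duplicate
	}
	fmt.Println(canonicalize(params, "start_date", "startDate"))
}
```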
- Around line 112-155: parseFilterParams currently treats unknown query keys as
no-ops, letting typos silently pass; update parseFilterParams to return a
parameter validation error for any unrecognized key by adding a default case
that returns pkg.ValidateBusinessError(constant.ErrInvalidQueryParameter, "",
key) (include the original key string in the error). Also mirror this behavior
in the other parameter-parsing helper mentioned (the same validation pattern
used in ValidateParameters / the related parse function around lines 208-210) so
any unsupported filter/sort param yields a 400 instead of being ignored.
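
A sketch of the default-case rejection (the plain `fmt.Errorf` stands in for the project's `pkg.ValidateBusinessError(constant.ErrInvalidQueryParameter, ...)`, and the key list is illustrative):

```go
package main

import "fmt"

// parseFilterParams rejects unrecognized keys instead of silently
// ignoring them, so a typo surfaces as a 400 rather than a no-op.
func parseFilterParams(params map[string]string) error {
	for key := range params {
		switch key {
		case "status", "type", "start_date", "end_date", "limit", "page":
			// recognized: parse as before
		default:
			return fmt.Errorf("invalid query parameter: %s", key)
		}
	}
	return nil
}

func main() {
	fmt.Println(parseFilterParams(map[string]string{"status": "pending"}))
	fmt.Println(parseFilterParams(map[string]string{"staus": "pending"})) // typo now fails
}
```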

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Line 1: The test TestDeadline_CreatePayload_InvalidSeeds is placed under a
file with the //go:build fuzz tag so it won't run in the default CI; either
remove the build tag from tests/fuzzy/fuzz-deadline-payload_test.go or move the
TestDeadline_CreatePayload_InvalidSeeds function into a non-fuzz test file
(e.g., tests/fuzzy/deadline_payload_test.go) so it runs with the standard `go
test` invocation, or alternatively update the CI/test runner to include `-tags
fuzz` for the default test command; pick one approach and apply it to ensure
TestDeadline_CreatePayload_InvalidSeeds is executed in the normal pipeline.
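
For reference, a file guarded as sketched below is skipped entirely by a plain `go test ./...` and only compiles when `-tags fuzz` is passed, which is why the test never runs in the default pipeline:

```go
//go:build fuzz

// Every test in this file, fuzz target or not, is invisible to a
// default `go test` invocation; it requires `go test -tags fuzz ./...`.
package fuzzy
```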

---

Duplicate comments:
In @.trivyignore:
- Line 12: The CVE suppression entry "CVE-2026-25882" in .trivyignore needs an
explanatory comment immediately above it: add the affected package name and
version, a concise justification for why the CVE is being suppressed, any
compensating controls or mitigation steps in place, and a removal deadline or
tracking ticket reference (e.g., "remove by YYYY-MM-DD or ticket `#1234`"). Edit
the .trivyignore file to insert these four pieces of metadata as plain-text
comments directly above the "CVE-2026-25882" line so future reviewers can assess
and track the suppression.

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Around line 74-78: The test currently treats transport errors from cli.Request
as acceptable by logging and returning (using t.Logf and return), which masks
infrastructure failures; update each occurrence of the cli.Request error
handling in tests/fuzzy/fuzz-deadline-payload_test.go (including the blocks
around cli.Request at the shown lines and the similar blocks at lines 138-142
and 192-196) to fail the test immediately on a non-nil error (use t.Fatalf /
t.Fatal with the error) instead of t.Logf and return so transport/connection
errors cause test failures.
- Around line 68-72: When json.Unmarshal of payloadJSON fails, don’t return
early; instead treat the raw payloadJSON as the request body so handlers receive
malformed JSON. In the fuzz test(s) replace the early return in the
json.Unmarshal error branch with logic that posts or passes []byte(payloadJSON)
(the raw input) to the same handler/path used for successfully parsed payloads
(instead of using the parsed payload variable), ensuring code paths that harden
JSON parsing are exercised; apply the same change for the other occurrences that
use json.Unmarshal in this test file.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3f797b4e-4b2e-49fb-b8a9-7de83a202cd5

📥 Commits

Reviewing files that changed from the base of the PR and between 60a7b0f and bca7809.

📒 Files selected for processing (14)
  • .trivyignore
  • components/manager/docker-compose.yml
  • components/manager/internal/adapters/http/in/deadline_test.go
  • components/manager/internal/services/deadline-service_test.go
  • components/manager/internal/services/update-deadline-by-id.go
  • pkg/constant/errors.go
  • pkg/errors.go
  • pkg/mongodb/deadline/deadline.go
  • pkg/mongodb/deadline/indexes.go
  • pkg/net/http/http-utils.go
  • pkg/net/http/http-utils_test.go
  • tests/chaos/chaos-deadline-mongodb_test.go
  • tests/fuzzy/fuzz-deadline-payload_test.go
  • tests/integration/deadlines-integration_test.go

…rovements

Enforce that semiannual requires exactly 2 months and annual requires exactly 1 month in monthsOfYear. Normalize startDate/endDate query params and reject unknown filter keys with 400. Move invalid-payload seed tests from fuzzy to integration suite. Revert plugin-auth-network from manager compose.

X-Lerian-Ref: 0x1
@lerian-studio


Consider updating CHANGELOG.md to document this change. If this change doesn't need a changelog entry, add the skip-changelog label.

@lerian-studio

This PR is very large (35 files, 7267 lines changed). Consider breaking it into smaller PRs for easier review.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (6)
components/manager/internal/services/deadline-service_test.go (2)

633-641: 🧹 Nitpick | 🔵 Trivial

Add explicit nil-input cases for update and deliver.

Both tables still miss the input == nil regression case, so non-panic behavior and the expected bad-request mapping are not locked in.

Also applies to: 1149-1157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manager/internal/services/deadline-service_test.go` around lines
633 - 641, Add explicit table-driven test entries where the input pointer is nil
for both the update and deliver tests: in the tests slice that uses input
*deadline.UpdateDeadlineInput (and the analogous deliver tests), add cases with
name like "nil input" that set input to nil, set expectErr to true and
errContains to the expected bad-request message/substring, and ensure
expectedResult is false; use the same mockSetup pattern (mocking with
gomock.Controller) but do not expect any UseCase methods to be called. This
locks in non-panic behavior and the expected bad-request mapping for
UpdateDeadlineInput == nil and the deliver equivalent.

1529-1538: ⚠️ Potential issue | 🟡 Minor

Make this recurrence case deterministic.

Using multiple time.Now() calls here can still flake near midnight and produce mismatched expectations.

Suggested fix
 		{
 			name: "daily - advances to next day",
 			deadline: deadline.Deadline{
-				DueDate:     time.Now().Truncate(24 * time.Hour),
+				DueDate:     time.Date(2026, 3, 10, 0, 0, 0, 0, time.UTC),
 				Frequency:   "daily",
 				DeliveredAt: &delivered,
 				Status:      deadline.StatusDelivered,
 			},
-			now:               time.Now().Truncate(24*time.Hour).Add(12 * time.Hour),
-			expectedDueDate:   time.Now().Truncate(24 * time.Hour).Add(24 * time.Hour),
+			now:               time.Date(2026, 3, 10, 12, 0, 0, 0, time.UTC),
+			expectedDueDate:   time.Date(2026, 3, 11, 0, 0, 0, 0, time.UTC),
 			expectedDelivered: false,
 			expectedStatus:    deadline.StatusPending,
 		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@components/manager/internal/services/deadline-service_test.go` around lines
1529 - 1538, The test case "daily - advances to next day" uses multiple
time.Now() calls causing flakiness; make it deterministic by defining a single
fixed base time (e.g., base := time.Date(YYYY, MM, DD, 0,0,0,0, time.UTC)) at
the top of the test and replace all time.Now().Truncate(24*time.Hour) uses with
base, set now to base.Add(12*time.Hour), expectedDueDate to
base.Add(24*time.Hour), and ensure DeliveredAt (the delivered variable) is
derived from the same base so Deadline.DueDate, now, expectedDueDate, and
delivered all reference that single base time.
tests/fuzzy/fuzz-deadline-payload_test.go (2)

74-78: ⚠️ Potential issue | 🟠 Major

Fail the fuzz case on transport errors.

Logging and returning here can hide a dead or crashing manager process and let the fuzz target pass without exercising the endpoint.

Suggested fix
-		if err != nil {
-			t.Logf("Request error (acceptable): %v", err)
-			return
-		}
+		if err != nil {
+			t.Fatalf("request transport error: %v", err)
+		}

Also applies to: 138-142, 192-196

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 74 - 78, The test
currently swallows transport errors from the HTTP client by logging and
returning when cli.Request(ctx, "POST", "/v1/deadlines", headers, payload)
returns an error; change this so the fuzz case fails instead: replace the
t.Logf+return behavior with a hard test failure (e.g., t.Fatalf or
t.Fatalf-equivalent) that includes the error text so transport/manager crashes
fail the fuzz run; make the same change for the other occurrences noted (the
cli.Request calls around lines 138-142 and 192-196) to ensure all transport
errors cause test failures rather than silent skips.

68-72: ⚠️ Potential issue | 🟠 Major

Don't skip malformed JSON inputs.

These branches return before the request is sent, so invalid-JSON handling is never exercised by the fuzz targets. Keep sending those cases as raw request bodies so the server's parser path is fuzzed too.

Also applies to: 133-136, 187-190

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fuzzy/fuzz-deadline-payload_test.go` around lines 68 - 72, The current
fuzz test returns when json.Unmarshal fails (using
json.Unmarshal([]byte(payloadJSON), &payload)), which skips sending malformed
JSON to the server; instead, remove the early return on Unmarshal error and
ensure the test still sends the original payloadJSON as the request body (treat
the raw string when Unmarshal fails), so both parsed and raw inputs exercise the
server parser; update the branches around variables payload, payloadJSON and the
json.Unmarshal call (also apply the same change at the other occurrences around
lines noted) to always proceed to build/send the request using payloadJSON when
unmarshalling fails.
pkg/mongodb/deadline/deadline.go (2)

27-30: 🛠️ Refactor suggestion | 🟠 Major

Keep HTTP DTOs out of the repository interface.

Using pkg/net/http.QueryHeader here still leaks transport concerns into the deadline domain/persistence boundary. A deadline-specific filter type in this package would keep the dependency direction cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mongodb/deadline/deadline.go` around lines 27 - 30, The Repository
interface currently leaks transport concerns by using http.QueryHeader in
FindList and Count; replace that with a deadline-specific filter type (e.g.,
define type DeadlineFilters struct { /* fields for status, dueBefore, etc. */ }
in the pkg/mongodb/deadline package) and update the interface signatures for
FindList(ctx context.Context, filters DeadlineFilters) ([]*Deadline, error) and
Count(ctx context.Context, filters DeadlineFilters) (int64, error); then adapt
all implementations of Repository and all callers of FindList/Count to
construct/translate into DeadlineFilters (mapping from HTTP QueryHeader at the
transport layer only), removing the direct dependency on
pkg/net/http.QueryHeader from this package.

101-116: ⚠️ Potential issue | 🟠 Major

Validate distinct monthsOfYear, not just slice length.

semiannual with [3,3] currently passes because only len(monthsOfYear) is checked, but NextDueDate will then recur yearly instead of twice a year.

Suggested fix
 	if len(monthsOfYear) > 0 {
 		if !FrequenciesWithMonthsOfYear[frequency] {
 			return constant.ErrMonthsOfYearNotApplicable
 		}
 
+		distinctMonths := make(map[int]struct{}, len(monthsOfYear))
 		for _, m := range monthsOfYear {
 			if m < 1 || m > 12 {
 				return constant.ErrMonthsOfYearOutOfRange
 			}
+			distinctMonths[m] = struct{}{}
 		}
 
-		// Enforce month count matches the declared frequency
+		// Enforce distinct month count matches the declared frequency
 		expected := expectedMonthCount[frequency]
-		if expected > 0 && len(monthsOfYear) != expected {
+		if expected > 0 && len(distinctMonths) != expected {
 			return constant.ErrMonthsOfYearCountMismatch
 		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/mongodb/deadline/deadline.go` around lines 101 - 116, The monthsOfYear
slice must contain distinct month values (e.g., prevent [3,3] for semiannual)
because NextDueDate assumes unique months; after the existing range checks in
the validation block (where you use FrequenciesWithMonthsOfYear and
expectedMonthCount), add a uniqueness check that builds a set/map of seen months
from monthsOfYear and returns an appropriate error if a duplicate is found
(either reuse constant.ErrMonthsOfYearCountMismatch or introduce a new constant
like ErrMonthsOfYearDuplicate), so only distinct month entries are accepted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/net/http/http-utils.go`:
- Around line 174-180: Replace the substring check with a proper prefix match so
only keys that start with "metadata." are treated as metadata: in the code block
handling key checks (where qh.Metadata, qh.UseMetadata and key are used), change
strings.Contains(key, "metadata.") to strings.HasPrefix(key, "metadata.") and
continue to initialize qh.Metadata = &bson.M{} when nil and set
(*qh.Metadata)[key] = value and qh.UseMetadata = true; this ensures keys like
"foo.metadata.bar" no longer pass as metadata filters.

---

Duplicate comments:
In `@components/manager/internal/services/deadline-service_test.go`:
- Around line 633-641: Add explicit table-driven test entries where the input
pointer is nil for both the update and deliver tests: in the tests slice that
uses input *deadline.UpdateDeadlineInput (and the analogous deliver tests), add
cases with name like "nil input" that set input to nil, set expectErr to true
and errContains to the expected bad-request message/substring, and ensure
expectedResult is false; use the same mockSetup pattern (mocking with
gomock.Controller) but do not expect any UseCase methods to be called. This
locks in non-panic behavior and the expected bad-request mapping for
UpdateDeadlineInput == nil and the deliver equivalent.
- Around line 1529-1538: The test case "daily - advances to next day" uses
multiple time.Now() calls causing flakiness; make it deterministic by defining a
single fixed base time (e.g., base := time.Date(YYYY, MM, DD, 0,0,0,0,
time.UTC)) at the top of the test and replace all
time.Now().Truncate(24*time.Hour) uses with base, set now to
base.Add(12*time.Hour), expectedDueDate to base.Add(24*time.Hour), and ensure
DeliveredAt (the delivered variable) is derived from the same base so
Deadline.DueDate, now, expectedDueDate, and delivered all reference that single
base time.

In `@pkg/mongodb/deadline/deadline.go`:
- Around line 27-30: The Repository interface currently leaks transport concerns
by using http.QueryHeader in FindList and Count; replace that with a
deadline-specific filter type (e.g., define type DeadlineFilters struct { /*
fields for status, dueBefore, etc. */ } in the pkg/mongodb/deadline package) and
update the interface signatures for FindList(ctx context.Context, filters
DeadlineFilters) ([]*Deadline, error) and Count(ctx context.Context, filters
DeadlineFilters) (int64, error); then adapt all implementations of Repository
and all callers of FindList/Count to construct/translate into DeadlineFilters
(mapping from HTTP QueryHeader at the transport layer only), removing the direct
dependency on pkg/net/http.QueryHeader from this package.
- Around line 101-116: The monthsOfYear slice must contain distinct month values
(e.g., prevent [3,3] for semiannual) because NextDueDate assumes unique months;
after the existing range checks in the validation block (where you use
FrequenciesWithMonthsOfYear and expectedMonthCount), add a uniqueness check that
builds a set/map of seen months from monthsOfYear and returns an appropriate
error if a duplicate is found (either reuse
constant.ErrMonthsOfYearCountMismatch or introduce a new constant like
ErrMonthsOfYearDuplicate), so only distinct month entries are accepted.

In `@tests/fuzzy/fuzz-deadline-payload_test.go`:
- Around line 74-78: The test currently swallows transport errors from the HTTP
client by logging and returning when cli.Request(ctx, "POST", "/v1/deadlines",
headers, payload) returns an error; change this so the fuzz case fails instead:
replace the t.Logf+return behavior with a hard test failure (e.g., t.Fatalf or
t.Fatalf-equivalent) that includes the error text so transport/manager crashes
fail the fuzz run; make the same change for the other occurrences noted (the
cli.Request calls around lines 138-142 and 192-196) to ensure all transport
errors cause test failures rather than silent skips.
- Around line 68-72: The current fuzz test returns when json.Unmarshal fails
(using json.Unmarshal([]byte(payloadJSON), &payload)), which skips sending
malformed JSON to the server; instead, remove the early return on Unmarshal
error and ensure the test still sends the original payloadJSON as the request
body (treat the raw string when Unmarshal fails), so both parsed and raw inputs
exercise the server parser; update the branches around variables payload,
payloadJSON and the json.Unmarshal call (also apply the same change at the other
occurrences around lines noted) to always proceed to build/send the request
using payloadJSON when unmarshalling fails.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 774759e4-612c-4320-aa68-620a1b1d42e7

📥 Commits

Reviewing files that changed from the base of the PR and between bca7809 and d47f618.

📒 Files selected for processing (8)
  • components/manager/docker-compose.yml
  • components/manager/internal/services/deadline-service_test.go
  • pkg/constant/errors.go
  • pkg/errors.go
  • pkg/mongodb/deadline/deadline.go
  • pkg/net/http/http-utils.go
  • tests/fuzzy/fuzz-deadline-payload_test.go
  • tests/integration/deadlines-invalid-payload_test.go

Replace strings.Contains with strings.HasPrefix so that keys like 'foo.metadata.bar' are correctly rejected instead of being treated as metadata filters.

X-Lerian-Ref: 0x1
@lerian-studio

This PR is very large (35 files, 7269 lines changed). Consider breaking it into smaller PRs for easier review.

@lerian-studio

Consider updating CHANGELOG.md to document this change. If this change doesn't need a changelog entry, add the skip-changelog label.

@arthurkz arthurkz merged commit 6192b1f into develop Mar 9, 2026
30 checks passed
@arthurkz arthurkz deleted the feature/RPRTR-377 branch March 9, 2026 18:35