Conversation

⛔ Review was skipped due to path filters (1 file ignored). CodeRabbit blocks several paths by default; you can override this behavior by explicitly including those paths in the path filters. (Run configuration: .coderabbit.yaml, review profile CHILL, plan Pro.)
Note
| Cohort / File(s) | Summary |
|---|---|
| **Configuration & Utilities**<br>`src/v0/destinations/tiktok_audience/config.ts`, `src/v0/util/recordUtils.js` | Added `ACTION_RECORD_MAP`, `ENDPOINT_PATH`, and `ENDPOINT`; introduced and exported `EVENT_TYPES` and updated error handling to use it. |
| **Record Types & Schemas**<br>`src/v0/destinations/tiktok_audience/recordTypes.ts`, `src/v0/destinations/tiktok_audience/types.ts` | Added `TiktokAudienceRecordRouterRequestSchema` and related exported types (`Identifier`, `IdentifiersPayload`, `SegmentMappingPayload`, `ProcessTiktokAudienceRecordsResponse`, `TiktokAudienceRecordRequest`); made several previously exported schemas internal and renamed the list-router schema/types to `TiktokAudienceListRouterRequestSchema` / `TiktokAudienceListRequest`. |
| **Record Transform Implementation**<br>`src/v0/destinations/tiktok_audience/recordTransform.ts` | New module implementing record processing: validation, identifier hashing (md5 / optional sha256), grouping by advertiser/action/idSchema, building segment-mapping payloads, preparing HTTP requests, and aggregating success/failure responses. Exports `processTiktokAudienceRecords`. |
| **Destination Transform Routing**<br>`src/v0/destinations/tiktok_audience/transform.ts` | Refactored routing to batch events and dispatch two paths (record vs audiencelist). `processRouterDest` now accepts typed events and collects per-event errors; removed the legacy `process` export. |
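The identifier-hashing step named in the summary (md5 / optional sha256) can be sketched as follows. This is an illustrative stand-in, not the destination's actual `hashIdentifier`; the normalization (trim + lowercase) is an assumption.

```typescript
import { createHash } from 'crypto';

// Sketch only: the real helper lives in the TikTok Audience destination;
// normalization details here are assumptions.
const hashIdentifier = (value: string, useSha256 = false): string => {
  const normalized = value.trim().toLowerCase(); // assumed normalization
  const algorithm = useSha256 ? 'sha256' : 'md5';
  return createHash(algorithm).update(normalized).digest('hex');
};
```

The `useSha256` flag mirrors the "optional sha256 per config" behavior described above.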
Sequence Diagram

```mermaid
sequenceDiagram
    participant Events as Incoming Events
    participant Validator as Schema Validator
    participant Preparer as Identifier Preparer
    participant Grouper as Grouper
    participant Builder as Request Builder
    participant TikTok as TikTok API
    participant Aggregator as Response Aggregator
    Events->>Validator: Validate event(s) (record vs audiencelist)
    Validator->>Preparer: Deliver validated record events
    Preparer->>Preparer: Hash identifiers (md5 / sha256 per config)
    Preparer->>Grouper: Emit IdentifiersPayloads
    Grouper->>Grouper: Group by advertiserId / action / idSchema
    Grouper->>Builder: Build SegmentMappingPayloads
    Builder->>TikTok: Send HTTP requests with tokens & headers
    TikTok-->>Aggregator: Return responses
    Aggregator->>Events: Return aggregated successful & failed responses
```
Estimated Code Review Effort
🎯 4 (Complex) | ⏱️ ~45 minutes
Suggestions for improvement:
- Add inline comments in recordTransform around hashing and grouping logic for maintainability.
- Add unit tests for grouping behavior, id-schema permutations, and error paths.
- Validate request size and batching limits against TikTok API constraints to avoid rejections.
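The last suggestion (guarding request size against API limits) could be handled with a simple chunking helper before requests are built. A minimal sketch, assuming a placeholder batch size since no documented TikTok limit appears in this thread:

```typescript
// MAX_BATCH_SIZE is a placeholder assumption, not a documented TikTok API limit.
const MAX_BATCH_SIZE = 1000;

// Split a flat identifier list into API-sized batches.
const chunkIdentifiers = <T>(items: T[], size: number = MAX_BATCH_SIZE): T[][] => {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
};
```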
🚥 Pre-merge checks: ❌ 3 failed (2 warnings, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ⚠️ Warning | The PR description is incomplete and missing several required sections from the template, including objectives, dependent changes, and impact analysis. | Complete the description by adding sections for broader objectives, any changes to existing capabilities, new dependencies, new utilities, technical considerations, and confirmation of manual testing and checklist items. |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title 'chore: vdm record v2 changes' is vague and does not clearly describe the specific changes introduced in this PR. | Consider a more descriptive title that specifies what vdm record v2 changes are being made, such as 'chore: add TikTok Audience record v2 transformation pipeline', to better communicate the primary change. |
Codecov Report: ❌ Patch coverage is
Additional details and impacted files

```diff
@@           Coverage Diff            @@
##           develop    #5033   +/-   ##
========================================
+ Coverage    92.28%   92.29%   +0.01%
========================================
  Files          657      659       +2
  Lines        35955    36066     +111
  Branches      8483     8496      +13
========================================
+ Hits         33180    33287     +107
- Misses        2537     2541       +4
  Partials       238      238
```

☔ View full report in Codecov by Sentry.
shekhar-rudder
left a comment
Review comments on VDM v2 record implementation.
shekhar-rudder
left a comment
A few questions and suggestions around the prepareIdentifiersPayload function in transform.record.ts.
🔒 Scanned for secrets using gitleaks 8.29.1
Force-pushed from 4f335a3 to 4ec4b81.
Force-pushed from 4ec4b81 to 1e9b5d0.
shekhar-rudder
left a comment
A few concerns around type safety, scope of changes in transform.ts, and deletion of test files.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/v0/destinations/tiktok_audience/config.ts`:
- Around line 8-11: ACTION_RECORD_MAP maps EVENT_TYPES.DELETE to 'remove', which
conflicts with the list flow that sends ACTION_MAP.remove as 'delete'; update
ACTION_RECORD_MAP so EVENT_TYPES.DELETE maps to 'delete' (matching
ACTION_MAP.remove) to ensure both record and list delete operations send the
same wire value and avoid rejected/ignored batches. Locate the ACTION_RECORD_MAP
constant and change the value for EVENT_TYPES.DELETE from 'remove' to 'delete'.
In `@src/v0/destinations/tiktok_audience/recordTransform.ts`:
- Around line 51-67: The idSchema is built from Object.keys(identifiers).sort()
but batchIdentifiers (identifiersList) is built from iterating
Object.entries(identifiers) and filtering out empty values, so keys can be out
of order or mismatched; change recordTransform to first produce a single sorted,
filteredKeys array from identifiers (filtering out null/empty values), then
build identifiersList (batchIdentifiers) by iterating that sorted filteredKeys
and calling hashIdentifier(key, identifiers[key]) and set idSchema to that same
filteredKeys order, and if filteredKeys is empty reject/return early to avoid
sending an empty payload; update references to identifiersList, idSchema,
batchIdentifiers, and hashIdentifier accordingly.
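The alignment fix described in this prompt can be sketched as a small helper. The types and the injected `hashIdentifier` are simplified stand-ins for the module's real helpers:

```typescript
type Identifier = { id: string; audience_ids: string[] };

// One sorted, filtered key list drives both idSchema and batchIdentifiers,
// so the two can never drift apart.
const buildAlignedPayload = (
  identifiers: Record<string, string | null | undefined>,
  hashIdentifier: (field: string, value: string) => string,
  audienceId: string,
): { idSchema: string[]; batchIdentifiers: Identifier[] } => {
  const filteredKeys = Object.keys(identifiers)
    .filter((key) => identifiers[key])
    .sort();
  if (filteredKeys.length === 0) {
    // Reject early rather than send an empty payload.
    throw new Error('record event must include at least one non-empty identifier');
  }
  const batchIdentifiers = filteredKeys.map((key) => ({
    id: hashIdentifier(key, identifiers[key] as string),
    audience_ids: [audienceId],
  }));
  return { idSchema: filteredKeys, batchIdentifiers };
};
```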
In `@src/v0/destinations/tiktok_audience/transform.ts`:
- Line 159: The module currently only exports processRouterDest which removes
the legacy module entrypoint used by callers expecting destHandler.process(...);
restore a compatibility export named process that delegates to processRouterDest
(e.g., export const process = processRouterDest or a thin wrapper that forwards
args and returns the same result) so existing callers in destHandler.process
(referenced by src/services/destination/nativeIntegration.ts and
src/services/eventTest/eventTester.ts) continue to work until those callers are
migrated.
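The compatibility shim amounts to one alias. Here `processRouterDest` is a synchronous stand-in for the real async implementation in `transform.ts`:

```typescript
// Stand-in for the real router implementation.
const processRouterDest = (events: unknown[]): unknown[] => events;

const destHandler = {
  processRouterDest,
  // Legacy alias so callers expecting destHandler.process(...) keep working
  // until nativeIntegration.ts and eventTester.ts are migrated.
  process: processRouterDest,
};
```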
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 366cb763-c727-46a4-93c4-e14f97163c0e
⛔ Files ignored due to path filters (4)
- `test/integrations/destinations/tiktok_audience/processor/data-native.ts` is excluded by `!**/test/**`
- `test/integrations/destinations/tiktok_audience/processor/data.ts` is excluded by `!**/test/**`
- `test/integrations/destinations/tiktok_audience/router/data-native.ts` is excluded by `!**/test/**`
- `test/integrations/destinations/tiktok_audience/router/data.ts` is excluded by `!**/test/**`
📒 Files selected for processing (6)
- src/v0/destinations/tiktok_audience/config.ts
- src/v0/destinations/tiktok_audience/recordTransform.ts
- src/v0/destinations/tiktok_audience/recordTypes.ts
- src/v0/destinations/tiktok_audience/transform.ts
- src/v0/destinations/tiktok_audience/types.ts
- src/v0/util/recordUtils.js
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/v0/destinations/tiktok_audience/recordTransform.ts (1)
31-48: ⚠️ Potential issue | 🟠 Major

`idSchema` and `batchIdentifiers` may be misaligned due to filtering. The `identifiersList` is built by filtering out falsy values (line 33), but `idSchema` is built from all keys of `identifiers` (line 44). When some identifier values are null or empty, the resulting `id_schema` will have more entries than corresponding `batch_data` values, causing incorrect mapping at the TikTok API. Consider building both from a single filtered list of populated keys:

🔧 Suggested fix

```diff
-  const identifiersList: Identifier[] = [];
-  for (const [fieldName, value] of Object.entries(identifiers)) {
-    if (value) {
-      identifiersList.push({
-        id: hashIdentifier(fieldName, value),
-        audience_ids: [audienceId],
-      });
-    }
-  }
+  const populatedKeys = Object.keys(identifiers)
+    .filter((key) => identifiers[key])
+    .sort();
+
+  if (populatedKeys.length === 0) {
+    throw new InstrumentationError('record event must include at least one non-empty identifier');
+  }
+
+  const identifiersList: Identifier[] = populatedKeys.map((fieldName) => ({
+    id: hashIdentifier(fieldName, identifiers[fieldName]!),
+    audience_ids: [audienceId],
+  }));
   const payload: IdentifiersPayload = {
     event,
     batchIdentifiers: identifiersList,
-    idSchema: Object.keys(identifiers).sort(),
+    idSchema: populatedKeys,
     advertiserId,
     action: ACTION_RECORD_MAP[action],
   };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/v0/destinations/tiktok_audience/recordTransform.ts` around lines 31 - 48, The idSchema is currently built from Object.keys(identifiers) while batchIdentifiers (identifiersList) filters out falsy values, causing misalignment; update recordTransform logic so both idSchema and batchIdentifiers are derived from the same filtered set: iterate over Object.entries(identifiers), collect only populated keys into a filteredKeys array while simultaneously pushing to identifiersList (using hashIdentifier and audienceId), then set idSchema to filteredKeys.sort() and batchIdentifiers to identifiersList before constructing the payload in the function that builds IdentifiersPayload (referencing identifiersList, idSchema, batchIdentifiers, hashIdentifier, and ACTION_RECORD_MAP).
🧹 Nitpick comments (1)
src/v0/destinations/tiktok_audience/transform.ts (1)
42-52: Empty objects in `batch_data` may cause API issues. When `trait[destinationField]` is falsy, an empty object `{}` is pushed into the batch data array (line 50). This differs from the `recordTransform.ts` approach, where falsy identifiers are filtered out. Consider filtering or handling these consistently to avoid sending malformed data to TikTok's API.

💡 Suggestion

```diff
 const hashTraits = (traits: Record<string, string | null>[]) =>
   traits.map((trait) =>
-    destinationFields.map((destinationField) =>
-      trait[destinationField]
-        ? {
-            id: hashIdentifier(destinationField, trait[destinationField]!),
-            audience_ids: [audienceId],
-          }
-        : {},
-    ),
+    destinationFields
+      .filter((destinationField) => trait[destinationField])
+      .map((destinationField) => ({
+        id: hashIdentifier(destinationField, trait[destinationField]!),
+        audience_ids: [audienceId],
+      })),
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/v0/destinations/tiktok_audience/transform.ts` around lines 42 - 52, The current hashTraits function maps falsy trait values to empty objects which can produce empty entries in the batch payload; update hashTraits (and any caller that composes batch_data) to skip falsy identifiers instead of returning {} — e.g., use a flatMap/filter pattern over destinationFields and traits so only entries where trait[destinationField] is truthy produce { id: hashIdentifier(destinationField, trait[destinationField]!), audience_ids: [audienceId] }, ensuring no empty objects are included in the resulting array.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/v0/destinations/tiktok_audience/recordTransform.ts`:
- Around line 95-103: The grouping key is unsafe because key.split('-') will
break when advertiserId contains hyphens; update the grouping to use a robust
stringified key (e.g., JSON.stringify) or a delimiter guaranteed not to appear
in components and then parse it accordingly: when building groupedPayloads
(where groupBy is called on identifiersPayloads) replace the template key with
JSON.stringify({ advertiserId, action, idSchema: idSchema.join(',') }) and then,
inside the loop over Object.entries(groupedPayloads), parse the key with
JSON.parse to extract advertiserId, action and idSchema (then derive
idSchemaList from idSchema.split(',')); reference symbols: groupedPayloads,
groupBy, identifiersPayloads, key.split, advertiserId, action, idSchemaList.
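The collision-safe key described in this prompt can be sketched as follows; `PayloadLike` is a simplified stand-in for `IdentifiersPayload`:

```typescript
type PayloadLike = { advertiserId: string; action: string; idSchema: string[] };

// JSON.stringify cannot collide the way `${advertiserId}-${action}` can when
// advertiserId itself contains hyphens, and the key parses back losslessly.
const groupKey = (p: PayloadLike): string =>
  JSON.stringify({ advertiserId: p.advertiserId, action: p.action, idSchema: p.idSchema });

const groupPayloads = (payloads: PayloadLike[]): Record<string, PayloadLike[]> => {
  const grouped: Record<string, PayloadLike[]> = {};
  for (const p of payloads) {
    const key = groupKey(p);
    if (!grouped[key]) grouped[key] = [];
    grouped[key].push(p);
  }
  return grouped;
};
```

Each group's components are recovered with `JSON.parse(key)` instead of `key.split('-')`.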
---
Duplicate comments:
In `@src/v0/destinations/tiktok_audience/recordTransform.ts`:
- Around line 31-48: The idSchema is currently built from
Object.keys(identifiers) while batchIdentifiers (identifiersList) filters out
falsy values, causing misalignment; update recordTransform logic so both
idSchema and batchIdentifiers are derived from the same filtered set: iterate
over Object.entries(identifiers), collect only populated keys into a
filteredKeys array while simultaneously pushing to identifiersList (using
hashIdentifier and audienceId), then set idSchema to filteredKeys.sort() and
batchIdentifiers to identifiersList before constructing the payload in the
function that builds IdentifiersPayload (referencing identifiersList, idSchema,
batchIdentifiers, hashIdentifier, and ACTION_RECORD_MAP).
---
Nitpick comments:
In `@src/v0/destinations/tiktok_audience/transform.ts`:
- Around line 42-52: The current hashTraits function maps falsy trait values to
empty objects which can produce empty entries in the batch payload; update
hashTraits (and any caller that composes batch_data) to skip falsy identifiers
instead of returning {} — e.g., use a flatMap/filter pattern over
destinationFields and traits so only entries where trait[destinationField] is
truthy produce { id: hashIdentifier(destinationField, trait[destinationField]!),
audience_ids: [audienceId] }, ensuring no empty objects are included in the
resulting array.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 135db26c-70f5-42c9-86f0-1f9e837dfff0
⛔ Files ignored due to path filters (1)
- `test/integrations/destinations/tiktok_audience/router/data-native.ts` is excluded by `!**/test/**`
📒 Files selected for processing (4)
- src/v0/destinations/tiktok_audience/config.ts
- src/v0/destinations/tiktok_audience/recordTransform.ts
- src/v0/destinations/tiktok_audience/recordTypes.ts
- src/v0/destinations/tiktok_audience/transform.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/v0/destinations/tiktok_audience/config.ts
♻️ Duplicate comments (1)
src/v0/destinations/tiktok_audience/recordTransform.ts (1)
30-46: ⚠️ Potential issue | 🟠 Major

`idSchema` and `batchIdentifiers` are derived from different filtered sets, causing a potential data mismatch. The `identifiersList` (lines 30-38) only includes entries where `value` is truthy, while `idSchema` (line 43) is built from all keys via `Object.keys(identifiers).sort()`. This mismatch means that if any identifier value is empty/null, the `id_schema` sent to TikTok will have more fields than the corresponding `batch_data` entries, leading to incorrect API behavior. Both should derive from the same filtered, sorted list of non-empty keys.

🔧 Suggested fix to align idSchema and batchIdentifiers

```diff
-  const identifiersList: Identifier[] = [];
-  for (const [fieldName, value] of Object.entries(identifiers)) {
-    if (value) {
-      identifiersList.push({
-        id: hashIdentifier(fieldName, value),
-        audience_ids: [audienceId],
-      });
-    }
-  }
+  const populatedKeys = Object.keys(identifiers)
+    .filter((key) => identifiers[key])
+    .sort();
+
+  if (populatedKeys.length === 0) {
+    throw new InstrumentationError('Record event must include at least one non-empty identifier');
+  }
+
+  const identifiersList: Identifier[] = populatedKeys.map((fieldName) => ({
+    id: hashIdentifier(fieldName, identifiers[fieldName]),
+    audience_ids: [audienceId],
+  }));
   const payload: IdentifiersPayload = {
     event,
     batchIdentifiers: identifiersList,
-    idSchema: Object.keys(identifiers).sort(),
+    idSchema: populatedKeys,
     advertiserId,
     action: ACTION_RECORD_MAP[action],
   };
```
Verify each finding against the current code and only fix it if needed. In `@src/v0/destinations/tiktok_audience/recordTransform.ts` around lines 30 - 46, The idSchema currently uses Object.keys(identifiers).sort() while batchIdentifiers (identifiersList) only includes truthy values, causing a mismatch; change the code to derive a single filtered, sorted key list from identifiers (e.g., collect keys from Object.entries(identifiers) where value is truthy), use that same sorted list for idSchema, and build identifiersList from those same key/value pairs (using hashIdentifier and audienceId) so IdentifiersPayload.event, batchIdentifiers, and idSchema are consistent with each other (adjust any references in the payload construction around IdentifiersPayload, identifiersList, idSchema, hashIdentifier, and ACTION_RECORD_MAP accordingly).
🧹 Nitpick comments (1)
src/v0/destinations/tiktok_audience/recordTransform.ts (1)
89-115: Good structured grouping approach; consider an edge case for malformed events. The array-based grouping (lines 94-100) is a solid improvement over string-key splitting, avoiding delimiter collision issues entirely.

One consideration: at line 113, if an event is so malformed that it lacks a `metadata` property entirely, `handleRtTfSingleEventError(event, error, {})` may produce unexpected results, since it accesses `input.metadata`. If the Zod schema validation is the primary failure path and always runs first, this may be acceptable. However, for defensive programming, you might consider wrapping this with additional safety.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/v0/destinations/tiktok_audience/recordTransform.ts` around lines 89 - 115, The catch block can pass malformed events lacking nested properties into handleRtTfSingleEventError and cause its internals to access undefined fields; update the error-handling call in the events loop (the catch for validateAudienceRecordEvent in the for (const event of events) block) to pass a safe, minimal metadata object (or explicitly safe-wrapped values) instead of the raw empty object, or alternatively harden handleRtTfSingleEventError itself to tolerate missing event.input/metadata via optional chaining/defaults; target validateAudienceRecordEvent, the catch block that pushes to recordResponse.failedResponses, and handleRtTfSingleEventError when making this defensive change.
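The defensive change suggested here can be sketched as a tiny wrapper applied before delegating to the error handler. The real `handleRtTfSingleEventError` signature is assumed, so this only illustrates the metadata-safety idea:

```typescript
type EventLike = { message?: unknown; metadata?: Record<string, unknown> };

// Guarantee a metadata object so downstream error handlers that read
// event.metadata never dereference undefined.
const withSafeMetadata = (event: EventLike): EventLike => ({
  ...event,
  metadata: event.metadata ?? {},
});
```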
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/v0/destinations/tiktok_audience/recordTransform.ts`:
- Around line 30-46: The idSchema currently uses Object.keys(identifiers).sort()
while batchIdentifiers (identifiersList) only includes truthy values, causing a
mismatch; change the code to derive a single filtered, sorted key list from
identifiers (e.g., collect keys from Object.entries(identifiers) where value is
truthy), use that same sorted list for idSchema, and build identifiersList from
those same key/value pairs (using hashIdentifier and audienceId) so
IdentifiersPayload.event, batchIdentifiers, and idSchema are consistent with
each other (adjust any references in the payload construction around
IdentifiersPayload, identifiersList, idSchema, hashIdentifier, and
ACTION_RECORD_MAP accordingly).
---
Nitpick comments:
In `@src/v0/destinations/tiktok_audience/recordTransform.ts`:
- Around line 89-115: The catch block can pass malformed events lacking nested
properties into handleRtTfSingleEventError and cause its internals to access
undefined fields; update the error-handling call in the events loop (the catch
for validateAudienceRecordEvent in the for (const event of events) block) to
pass a safe, minimal metadata object (or explicitly safe-wrapped values) instead
of the raw empty object, or alternatively harden handleRtTfSingleEventError
itself to tolerate missing event.input/metadata via optional chaining/defaults;
target validateAudienceRecordEvent, the catch block that pushes to
recordResponse.failedResponses, and handleRtTfSingleEventError when making this
defensive change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c8e2748d-fa87-49f6-a186-99d879df77e9
📒 Files selected for processing (1)
src/v0/destinations/tiktok_audience/recordTransform.ts
What are the changes introduced in this PR?
What is the related Linear task?