From 252a7a63d0ea8100f9176c84233f935246d9062b Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Tue, 3 Feb 2026 12:59:10 +0000
Subject: [PATCH 1/2] Initial plan

From 5e479456fc50ddbf1b64e0d498e19218de5e4cc2 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Tue, 3 Feb 2026 13:04:35 +0000
Subject: [PATCH 2/2] docs: add write.intermediate.interval.seconds parameter
 for Redshift and Snowflake v2 sinks

Co-authored-by: hzxa21 <5518566+hzxa21@users.noreply.github.com>
---
 integrations/destinations/redshift.mdx     | 7 +++++--
 integrations/destinations/snowflake-v2.mdx | 7 +++++--
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/integrations/destinations/redshift.mdx b/integrations/destinations/redshift.mdx
index 9ee97718..0d0c577b 100644
--- a/integrations/destinations/redshift.mdx
+++ b/integrations/destinations/redshift.mdx
@@ -30,7 +30,8 @@ WITH (
 | intermediate.table.name | Name of the intermediate table used for upsert mode. No need to fill this out in append-only mode |
 | auto_schema_change | Enable automatic schema change for upsert sink; adds new columns to target table if needed |
 | create_table_if_not_exists | Create target table if it does not exist |
-| write.target.interval.seconds | Interval in seconds for scheduled writes (default: 3600) |
+| write.target.interval.seconds | Interval in seconds for triggering writes to the target table (default: 3600) |
+| write.intermediate.interval.seconds | Interval in seconds for triggering writes to intermediate storage (default: 1800) |
 | batch_insert_rows | Number of rows per batch insert (default: 4096) |
 
 **S3 parameters**
@@ -107,7 +108,9 @@ In `upsert` mode, performance is optimized through the use of an intermediate ta
 
 - An intermediate table is created to stage data before merging it into the target table. If `create_table_if_not_exists` is set to true, the table is automatically named `rw__`.
 
-- Data is periodically merged from the intermediate table into the target table according to the `write.target.interval.seconds` setting.
+- Data is written to intermediate storage (S3) at intervals defined by `write.intermediate.interval.seconds` (default: 1800 seconds).
+
+- Data is periodically merged from the intermediate table into the target table according to the `write.target.interval.seconds` setting (default: 3600 seconds).
 
 - By default, an S3 bucket is required to achieve optimal ingestion performance into the intermediate table.
 
diff --git a/integrations/destinations/snowflake-v2.mdx b/integrations/destinations/snowflake-v2.mdx
index 92de40a5..0769ca43 100644
--- a/integrations/destinations/snowflake-v2.mdx
+++ b/integrations/destinations/snowflake-v2.mdx
@@ -31,7 +31,8 @@ WITH (
 | table.name | Name of the target table |
 | database | Snowflake database name |
 | schema | Snowflake schema name |
-| write.target.interval.seconds | Interval in seconds for scheduled writes (default: 3600) |
+| write.target.interval.seconds | Interval in seconds for triggering writes to the target table (default: 3600) |
+| write.intermediate.interval.seconds | Interval in seconds for triggering writes to intermediate storage (default: 1800) |
 | warehouse | Snowflake warehouse name |
 | jdbc.url | JDBC URL to connect to Snowflake |
 | username | Snowflake username |
@@ -179,7 +180,9 @@ In `upsert` mode, performance is optimized through the use of an intermediate ta
 
 - An intermediate table is created to stage data before merging it into the target table. If `create_table_if_not_exists` is set to true, the table is automatically named `rw__`.
 
-- Data is periodically merged from the intermediate table into the target table according to the `write.target.interval.seconds` setting.
+- Data is written to intermediate storage (S3) at intervals defined by `write.intermediate.interval.seconds` (default: 1800 seconds).
+
+- Data is periodically merged from the intermediate table into the target table according to the `write.target.interval.seconds` setting (default: 3600 seconds).
 
 - By default, an S3 bucket is required to achieve optimal ingestion performance into the intermediate table.
 
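For review context, the two interval parameters documented in this patch would sit together in a sink's `WITH (` clause roughly as sketched below. This is an illustrative sketch only: the sink name, source view, connector value, and table names are placeholders, not taken from this patch.

```sql
CREATE SINK orders_redshift_sink FROM mv_orders
WITH (
    connector = 'redshift',                     -- placeholder; see the connector docs for the exact value
    type = 'upsert',
    intermediate.table.name = 'orders_staging', -- staging table used in upsert mode
    create_table_if_not_exists = 'true',
    -- flush buffered rows to intermediate storage (S3) every 30 minutes (the documented default)
    write.intermediate.interval.seconds = '1800',
    -- merge the intermediate table into the target table every hour (the documented default)
    write.target.interval.seconds = '3600'
);
```

With the documented defaults (1800 and 3600 seconds), data lands in intermediate storage twice as often as it is merged into the target table, so the intermediate interval governs ingestion freshness while the target interval governs merge frequency.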