
Add integration tests to live spanner migration for checking PostgreSQL data type support#3152

Merged
VardhanThigle merged 10 commits into GoogleCloudPlatform:main from nmemond:pg-data-type-tests-live
Feb 23, 2026

Conversation

@nmemond
Contributor

@nmemond nmemond commented Dec 23, 2025

This adds integration tests to check data type mappings when doing a live migration from PostgreSQL to Spanner for both dialects.

Note that some of the type mappings fail to migrate as expected. The checks for those are still included for completeness, but they're ignored to avoid failing the tests.
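For illustration, here is a minimal sketch of the shape such a data-type test takes. The class and helper names are hypothetical (the real tests drive their expectations from SQL scripts and a session file); only the boolean-to-string and json-to-string cases are taken from the CI logs in this thread.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PgTypeMappingSketch {
    // Hypothetical expected mappings, for illustration only; the two entries
    // below correspond to the t_boolean_to_string and t_json_to_string tables
    // visible in the CI logs.
    static Map<String, String> expectedGoogleSqlMappings() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("boolean", "STRING");
        m.put("json", "STRING");
        return m;
    }

    // One test table per mapping, named t_<pgType>_to_<spannerType>.
    static String testTableName(String pgType, String spannerType) {
        return "t_" + pgType + "_to_" + spannerType.toLowerCase();
    }

    public static void main(String[] args) {
        expectedGoogleSqlMappings().forEach((pg, spanner) ->
            System.out.println("Asserting type: " + testTableName(pg, spanner).substring(2)));
    }
}
```

Each mapping gets its own table so a failing type can be ignored individually without masking the rest.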

@nmemond nmemond requested a review from a team as a code owner December 23, 2025 20:20
@gemini-code-assist

Summary of Changes

Hello @nmemond, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the testing framework for the 'DataStreamToSpanner' Flex template by introducing dedicated integration tests for PostgreSQL data type migration. The tests are designed to validate how various PostgreSQL data types are mapped and transferred to both standard and PostgreSQL dialect Spanner databases during a live migration. This ensures the reliability and correctness of data transformations, with a pragmatic approach to handle currently unsupported types without blocking the test suite.

Highlights

  • Comprehensive Data Type Testing: New integration tests have been added to thoroughly check the mapping and migration of a wide array of PostgreSQL data types to Spanner.
  • Dual Dialect Support: The tests cover both standard Spanner and PostgreSQL dialect Spanner, ensuring compatibility across different Spanner configurations.
  • Graceful Handling of Unsupported Types: Known unsupported data type mappings are explicitly ignored in the tests to prevent failures, while still being tracked for completeness.
  • Enhanced SQL Script Execution: The base test utility now supports executing 'SELECT' statements within SQL scripts, which is crucial for managing replication slots during testing.
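The SELECT-support point can be sketched as follows. This is a simplified illustration, not the actual utility: it splits a script naively on semicolons (a real runner must handle semicolons inside string literals) and routes SELECTs to a query path, which is what lets a script call PostgreSQL functions such as pg_create_logical_replication_slot that can only be invoked via SELECT.

```java
import java.util.ArrayList;
import java.util.List;

public class SqlScriptRunnerSketch {
    // Naive statement splitter; a real implementation must respect quoted
    // strings and dollar-quoted bodies.
    static List<String> splitStatements(String script) {
        List<String> out = new ArrayList<>();
        for (String stmt : script.split(";")) {
            String trimmed = stmt.trim();
            if (!trimmed.isEmpty()) {
                out.add(trimmed);
            }
        }
        return out;
    }

    // SELECTs go through executeQuery; everything else through executeUpdate,
    // since JDBC's executeUpdate rejects statements that return a result set.
    static boolean isQuery(String stmt) {
        return stmt.trim().toLowerCase().startsWith("select");
    }

    public static void main(String[] args) {
        String script =
            "CREATE TABLE t_boolean_to_string (id int primary key, col boolean);\n"
            + "SELECT pg_create_logical_replication_slot('test_slot', 'pgoutput');";
        for (String stmt : splitStatements(script)) {
            System.out.println((isQuery(stmt) ? "executeQuery: " : "executeUpdate: ") + stmt);
        }
    }
}
```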


@codecov

codecov bot commented Dec 24, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 55.18%. Comparing base (cd616f0) to head (6d5ddb0).
⚠️ Report is 53 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #3152      +/-   ##
============================================
+ Coverage     51.03%   55.18%   +4.14%     
+ Complexity     5173     2288    -2885     
============================================
  Files           978      492     -486     
  Lines         60490    28862   -31628     
  Branches       6638     3058    -3580     
============================================
- Hits          30871    15927   -14944     
+ Misses        27455    11962   -15493     
+ Partials       2164      973    -1191     
Components Coverage Δ
spanner-templates 71.86% <ø> (+1.04%) ⬆️
spanner-import-export ∅ <ø> (∅)
spanner-live-forward-migration 79.82% <ø> (-0.02%) ⬇️
spanner-live-reverse-replication 77.40% <ø> (-0.06%) ⬇️
spanner-bulk-migration 87.92% <ø> (-0.02%) ⬇️
see 506 files with indirect coverage changes

@VardhanThigle
Contributor

Failure in asserts

pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Asserting type: boolean_to_string
[pool-1-thread-60] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Loading columns [id, col] from teleport3.testpos_20251225_064337_vk3rlh.t_boolean_to_string
[pool-1-thread-60] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Loaded 3 records from teleport3.testpos_20251225_064337_vk3rlh.t_boolean_to_string
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [1, false]
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [2, true]
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [3, null]
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Asserting type: json_to_string
[pool-1-thread-60] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Loading columns [id, col] from teleport3.testpos_20251225_064337_vk3rlh.t_json_to_string
[pool-1-thread-60] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Loaded 3 records from teleport3.testpos_20251225_064337_vk3rlh.t_json_to_string
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [1, {"duplicate_key": 2}]
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [2, {"null_key": null}]
[pool-1-thread-60] INFO com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT - Found row: [3, null]
Error:  Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 1,665.32 s <<< FAILURE! - in com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT
[INFO] Running com.google.cloud.teleport.v2.templates.SeparateShadowTableDatabaseStringOverridesIT
[pool-1-thread-38] INFO org.apache.beam.it.gcp.TemplateTestBase - Starting integration test com.google.cloud.teleport.v2.templates.SeparateShadowTableDatabaseStringOverridesIT.migrationTestWithRenameTableAndColumns
Dec 25, 2025 6:40:23 AM com.google.api.client.googleapis.services.AbstractGoogleClient <init>
WARNING: Application name is not set. Call Builder#setApplicationName.
[pool-1-thread-38] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Not creating Spanner instance - reusing static teleport2
[pool-1-thread-38] INFO org.apache.beam.it.gcp.spanner.SpannerResourceManager - Creating database shadow__20251225_064337_z9eumg in instance teleport2.

@nmemond nmemond force-pushed the pg-data-type-tests-live branch from 1505c37 to b720ad6 Compare January 2, 2026 17:03
@VardhanThigle
Contributor

VardhanThigle commented Jan 6, 2026

There seems to be an assert failure in matching:

PostgreSQLDatastreamToSpannerDataTypesIT.testPostgreSqlDataTypes

expected that contains unordered record (and case insensitive) {COL={"duplicate_key":1}, ID=1}, but only had [{col={"duplicate_key":2}, id=1}, {col={"null_key":null}, id=2}, {col=NULL, id=3}]
	at com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT.validateResult(PostgreSQLDatastreamToSpannerDataTypesIT.java:372)
	at com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT.testPostgreSqlDataTypes(PostgreSQLDatastreamToSpannerDataTypesIT.java:263)

PostgreSQLDatastreamToSpannerDataTypesIT.testPostgreSqlDataTypesPGDialect

expected that contains unordered record (and case insensitive) {COL=08:00:2b:01:02:03:04:05, ID=1}, but only had [{col=NULL, id=1}, {col=NULL, id=2}]
	at com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT.validateResult(PostgreSQLDatastreamToSpannerDataTypesIT.java:372)
	at com.google.cloud.teleport.v2.templates.PostgreSQLDatastreamToSpannerDataTypesIT.testPostgreSqlDataTypesPGDialect(PostgreSQLDatastreamToSpannerDataTypesIT.java:311)

@timatkinson-yw timatkinson-yw force-pushed the pg-data-type-tests-live branch from 70ed130 to 3490b3c Compare February 13, 2026 15:25
@timatkinson-yw timatkinson-yw force-pushed the pg-data-type-tests-live branch from 3490b3c to aea0a25 Compare February 13, 2026 18:10

@VardhanThigle VardhanThigle merged commit 917cf5d into GoogleCloudPlatform:main Feb 23, 2026
22 checks passed
if (numCombinedConditions >= 3) {
  conditions.add(combinedCondition);
  combinedCondition = null;
  numCombinedConditions = 0;
}
Contributor

@darshan-sj darshan-sj Feb 23, 2026


Why are you grouping 3 conditions with "and" and then using the chained condition on the grouped ones? Is there any particular reason for this pattern?

We should not use chained condition checks like this. A chained condition check introduces a 15s delay between each check, and with so many of them chained, the test takes a long time to execute and could sometimes time out if the migration is a little slow.

I'm working on fixing the already checked in tests with this wrong pattern -
#3380

Contributor Author

@nmemond nmemond Feb 24, 2026


The nice thing about the chained condition is that it avoids re-running conditions that have already passed; the downside, as you mentioned, is that regardless of whether a condition passes or fails, there's a 15-second delay before the next check runs.

The combination of Chained and and was an effort to strike a balance between the two, given there are quite a few conditions. Maybe 3 was too small of a grouping, but I do think the idea itself is sound. Otherwise, there's a chance we end up having to re-run dozens of conditions over and over because a later condition is failing. In the worst case, we re-run every condition except the last one (if that's the one that's failing). Using a fixed group size, we can decide how many conditions will re-run in that worst case: for a group size of X, in the worst case we will re-run X-1 passing conditions every check.
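The grouping idea being debated can be sketched as below. The names and the group-evaluation mechanics are hypothetical simplifications of the pattern in the diff, not the actual test framework: conditions are combined into AND groups of a fixed size, and in the worst case (the failing condition is last in its group) each poll re-runs at most groupSize - 1 already-passing conditions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

public class ConditionGroupingSketch {
    // Combine conditions into groups of groupSize; each group is the AND of
    // its members, and a chained check would then advance group by group.
    static List<BooleanSupplier> groupConditions(List<BooleanSupplier> conditions, int groupSize) {
        List<BooleanSupplier> groups = new ArrayList<>();
        for (int i = 0; i < conditions.size(); i += groupSize) {
            List<BooleanSupplier> slice =
                conditions.subList(i, Math.min(i + groupSize, conditions.size()));
            groups.add(() -> slice.stream().allMatch(BooleanSupplier::getAsBoolean));
        }
        return groups;
    }

    // Worst case from the comment above: the failing condition sits last in
    // its group, so every poll re-runs the group's other members.
    static int worstCaseRerunsPerPoll(int groupSize) {
        return groupSize - 1;
    }

    public static void main(String[] args) {
        List<BooleanSupplier> conditions = new ArrayList<>();
        for (int i = 0; i < 7; i++) {
            conditions.add(() -> true);
        }
        System.out.println(groupConditions(conditions, 3).size()
            + " groups, worst-case re-runs per poll: " + worstCaseRerunsPerPoll(3));
    }
}
```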

Contributor


I don't think that re-running conditions which have already passed is a problem here. The tables don't contain a lot of rows, and the count query should return within milliseconds. Even with hundreds of tables, querying all of them is not that inefficient an operation. On the other hand, waiting 15 seconds between each query is bad, and it adds up to a lot of delay here. Please don't use this pattern in your future PRs; use "and" across all the conditions.

joy91227 pushed a commit to joy91227/DataflowTemplates that referenced this pull request Mar 4, 2026
…QL data type support (GoogleCloudPlatform#3152)

* Add data types test for live migration from PostgreSQL to Spanner (both dialects)

* Use static resource manager instead to allow access to CDC log

* Fix expected value in test

* Fix spotless

* Ignore columns which don't seem to migrate consistently (to avoid flaky test)

* Properly ignore the columns...

* Ignore columns which don't seem to migrate consistently (to avoid flaky tests)

* Ignore columns which don't seem to migrate consistently (to avoid flaky tests)

---------

Co-authored-by: Atkinson <tim.atkinson@improving.com>