How to handle big tables? Is there a chunked/batched/partial replication method with checkpoints? #39453
kimvykz asked this question in Connector Questions (unanswered, 0 replies).
Hello, everyone! I run replication between a MySQL database and a PostgreSQL database. My table contains more than 250 million rows. The first sync ran for 3 days and was terminated by the "SYNC_JOB_MAX_TIMEOUT_DAYS=3" parameter, so I raised it to 10 days. But does Airbyte have any mechanism that persists partial progress, so that after a failure or error a sync continues from the last checkpoint instead of restarting from the very beginning?
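For context, the resumable behavior being asked about usually relies on a monotonically increasing cursor column (e.g. a primary key or updated-at timestamp): the copier processes rows in batches ordered by the cursor and commits the last cursor value together with each batch, so a restart picks up where it left off. The sketch below illustrates that general pattern only (it is not Airbyte's internal implementation); the table and column names are made up, and SQLite stands in for the real source and destination:

```python
import sqlite3


def sync_incremental(src, dst, batch_size=1000):
    """Copy rows in batches, resuming from the last checkpointed cursor value.

    `big_table` and its `id` cursor column are hypothetical example names.
    """
    # Read the last checkpoint (0 if this is the first run).
    last_id = dst.execute(
        "SELECT value FROM sync_state WHERE key = 'last_id'"
    ).fetchone()[0]
    while True:
        rows = src.execute(
            "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        dst.executemany("INSERT INTO big_table VALUES (?, ?)", rows)
        last_id = rows[-1][0]
        # Commit the checkpoint in the same transaction as the batch,
        # so a crash can never lose more than one uncommitted batch.
        dst.execute(
            "UPDATE sync_state SET value = ? WHERE key = 'last_id'", (last_id,)
        )
        dst.commit()
    return last_id


def demo_connections(n_rows=10):
    """Build toy in-memory source/destination databases for illustration."""
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
    src.executemany(
        "INSERT INTO big_table VALUES (?, ?)",
        [(i, f"row{i}") for i in range(1, n_rows + 1)],
    )
    dst = sqlite3.connect(":memory:")
    dst.execute("CREATE TABLE big_table (id INTEGER PRIMARY KEY, payload TEXT)")
    dst.execute("CREATE TABLE sync_state (key TEXT PRIMARY KEY, value INTEGER)")
    dst.execute("INSERT INTO sync_state VALUES ('last_id', 0)")
    return src, dst
```

Airbyte's incremental sync modes with a cursor field follow roughly this idea, checkpointing stream state between batches; whether a given connector resumes mid-table after a failure depends on the connector version, so check the MySQL source docs for your release.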
Also, can Airbyte replicate tables conditionally? (e.g. ... WHERE field = 'parameter')
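One workaround people use when a tool syncs whole tables and offers no row filter (I am not asserting this is Airbyte's official answer) is to create a view on the source database that contains only the wanted rows, and point the sync at the view instead of the base table. A tiny helper sketching the statement, with all identifiers being hypothetical examples:

```python
def filtered_view_sql(view_name: str, table: str, column: str, value: str) -> str:
    """Build a CREATE VIEW statement exposing only rows matching a filter.

    Identifiers (view/table/column names) cannot be bound as parameters,
    so in real use they must come from trusted configuration, not user input.
    """
    return (
        f"CREATE OR REPLACE VIEW {view_name} AS "
        f"SELECT * FROM {table} WHERE {column} = '{value}'"
    )
```

For example, `filtered_view_sql("big_table_filtered", "big_table", "field", "parameter")` produces a view holding only the `field = 'parameter'` rows; if the source connector discovers views as streams, that view can then be selected for replication like any table.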