View Issue Details

ID: 0004274
Project: SymmetricDS
Category: Task
View Status: public
Last Update: 2020-02-05 14:19
Assigned To:
Status: new
Resolution: open
Product Version: 3.9.22
Summary: 0004274: SymmetricDS V3.9.2 and a Large Bulk Delete

Description:

Dear SymmetricDS,

We have a simple SymmetricDS replication setup between two databases, so we have only one primary node and one receiving node.

Although it should not matter, the primary is a PostgreSQL database and the receiving node is a Sybase database.

We have been using this replication successfully for a couple of years; however, a situation occurred about which I would ask the SymmetricDS gurus on this site for advice.

We have a very large table (30 million rows) which is part of the replication.

Within the primary database, this table incurred a delete of 18 million rows in a single transaction.

SymmetricDS was able to handle the delete, and I could see that the data was transferred successfully from the sending node (outgoing) to the receiving node (incoming); that is, a single batch of 18 million delete events had been built.

However, on the incoming node the batch was applied in a single thread, sequentially running 18 million delete statements.

This clobbered our replication: everything stalled while it tried to process this full batch.

I was able to mark the batch as OK and clean up the table outside the replication via my own custom table sync program.

However, given that this occurred, does anyone know of any parameters in the SymmetricDS setup that would allow parallel or concurrent processing of a batch (two or more threads processing the deletes concurrently), or anything that might allow the batch to be processed in a more concurrent or parallel manner?

Additionally, I read about the Max Data to Route setting, which is defined as "Specifies the maximum number of data rows to route for a channel at a time." I am not sure whether this merely chunks the data within the batch into smaller incoming batches (splitting 18 million into 1 million at a time), whether it can be used to split a very large batch across multiple threads, or whether it simply stops replication when the actions within a batch exceed this value.
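For what it's worth, one approach I have been considering is channel-level configuration rather than a threading parameter. As I understand the `sym_channel` table, setting `batch_algorithm` to 'nontransactional' allows batches to be cut at `max_batch_size` instead of preserving the whole 18-million-row transaction in one batch. The sketch below is an untested assumption based on my reading of the documentation (the channel id `big_table` and trigger id `my_big_table` are made-up names); please verify the column names and semantics against your SymmetricDS version.

```sql
-- Sketch (unverified): dedicate a channel to the large table and let
-- batches be split at max_batch_size rather than at transaction
-- boundaries, so an 18M-row delete becomes many smaller batches.
insert into sym_channel
    (channel_id, processing_order, max_batch_size, max_batch_to_send,
     max_data_to_route, batch_algorithm, enabled, description)
values
    ('big_table', 10, 10000, 100, 100000, 'nontransactional', 1,
     'Channel for the 30M-row table');

-- Point the table's trigger at the new channel
-- (trigger_id 'my_big_table' is a placeholder).
update sym_trigger
   set channel_id = 'big_table'
 where trigger_id = 'my_big_table';
```

My understanding is that `max_data_to_route` only throttles how many rows the router reads per pass, not how batches are applied, and that applying one batch remains single-threaded, so splitting into smaller batches (or spreading tables across channels) may be the practical lever here; corrections welcome.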

Any suggestions or guidance would be helpful.

Tony V
Steps To Reproduce:

Perform a delete of X rows (in this case 18 million) against a table in the primary database within a single transaction, then try to speed up its application on the target databases.

Tags: No tags attached.


There are no notes attached to this issue.

Issue History

Date Modified: 2020-02-05 14:19
Change: New Issue