View Issue Details

ID: 0002192
Project: SymmetricDS
Category: Bug
View Status: public
Last Update: 2015-02-17 21:44
Reporter: tberger
Assigned To: chenson
Priority: normal
Status: closed
Resolution: fixed
Product Version: 3.7.6
Target Version: 3.7.7
Fixed in Version: 3.7.7
Summary: 0002192: Conflict resolution of FALLBACK transforms update to insert although row exists
Description

I configured two tables for testing conflict resolution between a store and a corp node (MySQL and MS SQL, respectively). Each table contains just two integer columns, id and number, with id as the primary key. The detection type is set to USE_CHANGED_DATA, and the resolution type to FALLBACK in one direction and IGNORE in the other (the complete setup is attached below).

When a conflict on an update is detected (a mismatch on the number column), on the FALLBACK resolution side the update is not applied anyway, as the documentation leads one to expect it would be. Instead, it is transformed into an insert even though the row already exists; the insert fails, unsurprisingly, and the batch is left in an error state. The log output for the failed batch is attached below.
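
For reference, a minimal sketch of the test table as described above (the exact DDL is not part of this report; the MySQL flavor is shown, and the MS SQL version is analogous):

CREATE TABLE testconflicts (
    id     INT NOT NULL,
    number INT,
    PRIMARY KEY (id)
);
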
Additional Information

SymmetricDS setup:

INSERT INTO
    sym_trigger
    (trigger_id, source_schema_name, source_table_name, channel_id,
    sync_on_update, sync_on_insert, sync_on_delete, sync_on_incoming_batch,
    last_update_time, create_time)
VALUES
    ('testconflicts-corp' , 'dbo', 'testconflicts', 'testchannel' , 1, 1, 1, 0, current_timestamp, current_timestamp),
    ('testconflicts-store', NULL , 'testconflicts', 'testchannel' , 1, 1, 1, 0, current_timestamp, current_timestamp);

INSERT INTO
    sym_transform_table
    (transform_id, source_node_group_id, target_node_group_id, transform_point,
    source_schema_name, source_table_name, target_schema_name, target_table_name,
    delete_action, column_policy)
VALUES
    ('testconflicts-corp' , 'corp' , 'store', 'LOAD' , 'dbo', 'testconflicts', NULL , 'testconflicts', 'DEL_ROW', 'IMPLIED'),
    ('testconflicts-store', 'store', 'corp' , 'LOAD' , NULL , 'testconflicts', 'dbo', 'testconflicts', 'DEL_ROW', 'IMPLIED');

INSERT INTO
    sym_conflict
    (conflict_id, source_node_group_id, target_node_group_id, target_channel_id,
    target_schema_name, target_table_name, detect_type, detect_expression, resolve_type,
    ping_back, resolve_changes_only, resolve_row_only,
    create_time, last_update_time)
VALUES
    ('testconflicts-corp' , 'corp' , 'store', 'testchannel' , NULL, NULL, 'USE_CHANGED_DATA', NULL, 'FALLBACK', 'OFF' , 0, 1, current_timestamp, current_timestamp),
    ('testconflicts-store', 'store', 'corp' , 'testchannel' , NULL, NULL, 'USE_CHANGED_DATA', NULL, 'IGNORE' , 'OFF' , 0, 1, current_timestamp, current_timestamp);
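
A minimal sketch of the sequence that produces the conflict shown in the log below. The corp-side statement matches the captured data (old value 3, new value 4 for id 1); the store-side value is an assumption, since the log only shows that the incoming old value no longer matches:

-- On the corp node (source of batch 187): row id = 1 changes from number = 3 to number = 4
UPDATE dbo.testconflicts SET number = 4 WHERE id = 1;

-- On the store node (target), the same row was changed locally beforehand, e.g. to
-- number = 5, so the incoming update's old value (3) no longer matches and
-- USE_CHANGED_DATA detects a conflict
UPDATE testconflicts SET number = 5 WHERE id = 1;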

SymmetricDS log output:

[store-001] - ProtocolDataReader - CSV parsed: binary,BASE64
[store-001] - ProtocolDataReader - CSV parsed: channel,testchannel
[store-001] - ProtocolDataReader - CSV parsed: batch,187
[store-001] - StagingDataWriter - Writing staging data: nodeid
[store-001] - StagingDataWriter - Creating staged resource for batch 000-187
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: 000
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: binary
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: BASE64
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: channel
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: testchannel
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: batch
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: 187
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - ProtocolDataReader - CSV parsed: catalog,<null>
[store-001] - ProtocolDataReader - CSV parsed: schema,dbo
[store-001] - ProtocolDataReader - CSV parsed: table,testconflicts
[store-001] - ProtocolDataReader - CSV parsed: keys,id
[store-001] - ProtocolDataReader - CSV parsed: columns,id,number
[store-001] - ProtocolDataReader - CSV parsed: old,1,3
[store-001] - StagingDataWriter - Writing staging data: catalog
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: schema
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: dbo
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: table
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: testconflicts
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: keys,id
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: columns,id,number
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - ProtocolDataReader - CSV parsed: old,1,3
[store-001] - ProtocolDataReader - CSV parsed: update,1,4,1
[store-001] - StagingDataWriter - Writing staging data: old
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: "1","3"
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - StagingDataWriter - Writing staging data: update
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: "1","4"
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: "1"
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - ProtocolDataReader - CSV parsed: commit,187
[store-001] - ProtocolDataReader - CSV parsed: channel,heartbeat
[store-001] - ProtocolDataReader - CSV parsed: batch,189
[store-001] - StagingDataWriter - Writing staging data: commit
[store-001] - StagingDataWriter - Writing staging data: ,
[store-001] - StagingDataWriter - Writing staging data: 187
[store-001] - StagingDataWriter - Writing staging data:
[store-001] - ProtocolDataReader - CSV parsed: nodeid,000
[store-001] - ProtocolDataReader - CSV parsed: binary,BASE64
[store-001] - ProtocolDataReader - CSV parsed: channel,testchannel
[store-001] - ProtocolDataReader - CSV parsed: batch,187
[store-001] - ProtocolDataReader - CSV parsed: catalog,<null>
[store-001] - ProtocolDataReader - CSV parsed: schema,dbo
[store-001] - ProtocolDataReader - CSV parsed: table,testconflicts
[store-001] - ProtocolDataReader - CSV parsed: keys,id
[store-001] - ProtocolDataReader - CSV parsed: columns,id,number
[store-001] - ProtocolDataReader - CSV parsed: old,1,3
[store-001] - ProtocolDataReader - CSV parsed: old,1,3
[store-001] - ProtocolDataReader - CSV parsed: update,1,4,1
[store-001] - TransformWriter - 1 transformation(s) started because of UPDATE on dbo.testconflicts. The original row data was: {id=1, number=4}
[store-001] - TransformWriter - 1 target data was created for the testconflicts-corp transformation. The target table is testconflicts
[store-001] - TransformWriter - Data has been transformed to a UPDATE for the 0000001 transform. The mapped target columns are: {id,number}. The mapped target values are: {1,4}
[store-001] - DefaultDatabaseWriter - Preparing dml: update `store`.`testconflicts` set `number` = ? where `number` = ? and `id` = ?
[store-001] - DefaultDatabaseWriter - Submitting data [4, 3, 1] with types [4, 4, 4]
[store-001] - AbstractDatabaseWriterConflictResolver - Conflict detected: testconflicts-corp in batch 187 at line 1 for table store.testconflicts
[store-001] - AbstractDatabaseWriterConflictResolver - Row data: "1","4"
[store-001] - AbstractDatabaseWriterConflictResolver - Old data: "1","3"
[store-001] - TransformWriter - 1 target data was created for the testconflicts-corp transformation. The target table is testconflicts
[store-001] - TransformWriter - Data has been transformed to a UPDATE for the 0000001 transform. The mapped target columns are: {id,number}. The mapped target values are: {1,4}
[store-001] - DefaultDatabaseWriter - Preparing dml: update `store`.`testconflicts` set `number` = ? where `number` = ? and `id` = ?
[store-001] - DefaultDatabaseWriter - Submitting data [4, 3, 1] with types [4, 4, 4]
[store-001] - DefaultDatabaseWriter - Failed to process a update event in batch 187.
Failed pk data was: "1"
Failed row data was: "1","4"
Failed old data was: "1","3"

[store-001] - TransformWriter - 1 target data was created for the testconflicts-corp transformation. The target table is testconflicts
[store-001] - TransformWriter - Data has been transformed to a INSERT for the 0000001 transform. The mapped target columns are: {id,number}. The mapped target values are: {1,4}
[store-001] - DefaultDatabaseWriter - Preparing dml: insert into `store`.`testconflicts` (`id`, `number`) values (?,?)
[store-001] - DefaultDatabaseWriter - Submitting data [1, 4] with types [4, 4]
[store-001] - DefaultDatabaseWriter - Failed to process a insert event in batch 187.
Failed row data was: "1","4"

[store-001] - DefaultDatabaseWriter - Failed to process a update event in batch 187.
Failed pk data was: "1"
Failed row data was: "1","4"
Failed old data was: "1","3"

[store-001] - DataLoaderService - Failed to load batch 000-187 because: Detected conflict while executing INSERT on store.testconflicts. The primary key data was: {id=1}. The original error message was: Duplicate entry '1' for key 'PRIMARY'
org.jumpmind.symmetric.io.data.writer.ConflictException: Detected conflict while executing INSERT on store.testconflicts. The primary key data was: {id=1}. The original error message was: Duplicate entry '1' for key 'PRIMARY'
        at org.jumpmind.symmetric.io.data.writer.AbstractDatabaseWriter.write(AbstractDatabaseWriter.java:181)
        at org.jumpmind.symmetric.io.data.writer.DefaultTransformWriterConflictResolver.performFallbackToInsert(DefaultTransformWriterConflictResolver.java:73)
        at org.jumpmind.symmetric.io.data.writer.AbstractDatabaseWriterConflictResolver.needsResolved(AbstractDatabaseWriterConflictResolver.java:98)
        at org.jumpmind.symmetric.io.data.writer.AbstractDatabaseWriter.write(AbstractDatabaseWriter.java:179)
        at org.jumpmind.symmetric.io.data.writer.AbstractDatabaseWriter.write(AbstractDatabaseWriter.java:124)
        at org.jumpmind.symmetric.io.data.writer.NestedDataWriter.write(NestedDataWriter.java:64)
        at org.jumpmind.symmetric.model.ProcessInfoDataWriter.write(ProcessInfoDataWriter.java:66)
        at org.jumpmind.symmetric.io.data.writer.TransformWriter.write(TransformWriter.java:191)
        at org.jumpmind.symmetric.io.data.DataProcessor.forEachDataInTable(DataProcessor.java:200)
        at org.jumpmind.symmetric.io.data.DataProcessor.forEachTableInBatch(DataProcessor.java:170)
        at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:116)
        at org.jumpmind.symmetric.service.impl.DataLoaderService$LoadIntoDatabaseOnArrivalListener.end(DataLoaderService.java:807)
        at org.jumpmind.symmetric.io.data.writer.StagingDataWriter.notifyEndBatch(StagingDataWriter.java:75)
        at org.jumpmind.symmetric.io.data.writer.AbstractProtocolDataWriter.end(AbstractProtocolDataWriter.java:220)
        at org.jumpmind.symmetric.io.data.DataProcessor.process(DataProcessor.java:130)
        at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromTransport(DataLoaderService.java:428)
        at org.jumpmind.symmetric.service.impl.DataLoaderService.loadDataFromPull(DataLoaderService.java:265)
        at org.jumpmind.symmetric.service.impl.PullService.execute(PullService.java:135)
        at org.jumpmind.symmetric.service.impl.NodeCommunicationService$2.run(NodeCommunicationService.java:317)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
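
To inspect the resulting error state on the store node, a query along these lines can be used (a sketch assuming the standard sym_incoming_batch columns; exact column names may vary by version):

SELECT batch_id, node_id, channel_id, status, failed_row_number, sql_message
FROM sym_incoming_batch
WHERE status = 'ER';
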
Tags: No tags attached.

Activities

chenson

2015-02-12 17:49

administrator   ~0000669

Yep. This is a bug. Found the issue. Thanks for submitting it.

Related Changesets

SymmetricDS: master c94acafd
2015-02-12 12:50:09, chenson

0002192: Conflict resolution of FALLBACK transforms update to insert although row exists
Affected Issues: 0002192

mod - symmetric-io/src/main/java/org/jumpmind/symmetric/io/data/writer/AbstractDatabaseWriter.java
mod - symmetric-io/src/main/java/org/jumpmind/symmetric/io/data/writer/DefaultTransformWriterConflictResolver.java

Issue History

Date Modified Username Field Change
2015-02-12 16:04 tberger New Issue
2015-02-12 17:49 chenson Note Added: 0000669
2015-02-12 17:49 chenson Target Version => 3.7.7
2015-02-12 18:00 chenson Changeset attached => SymmetricDS trunk r9361
2015-02-14 16:18 chenson Status new => resolved
2015-02-14 16:18 chenson Fixed in Version => 3.7.7
2015-02-14 16:18 chenson Resolution open => fixed
2015-02-14 16:18 chenson Assigned To => chenson
2015-02-17 21:44 chenson Status resolved => closed
2015-07-31 01:49 chenson Changeset attached => SymmetricDS master c94acafd