Copyright © 2007-2017 JumpMind, Inc

Version 3.8.44

Permission to use, copy, modify, and distribute this SymmetricDS User Guide for any purpose and without fee is hereby granted in perpetuity, provided that the above copyright notice and this paragraph appear in all copies.

Preface

This user guide introduces SymmetricDS and its features for data synchronization. It is intended for users, developers, and administrators who want to install the software, configure synchronization, and manage its operation. Thank you to all the members of the open source community whose feedback and contributions helped us build better software and documentation. This version of the guide was generated on 2017-05-01.

1. Introduction

SymmetricDS is open source software for database and file synchronization, with support for multi-master replication, filtered synchronization, and transformation. It uses web and database technologies to replicate change data as a scheduled or near real-time operation, and it includes an initial load feature for full data loads. The software was designed to scale for a large number of nodes, work across low-bandwidth connections, and withstand periods of network outage.

1.1. System Requirements

SymmetricDS is written in Java and requires a Java Runtime Environment (JRE) Standard Edition (SE) or Java Development Kit (JDK) Standard Edition (SE) version 7.0 or above. Most major operating systems and databases are supported. See the list of supported databases in the Database Compatibility section. The minimum operating system requirements are:

  • Java SE Runtime Environment 7 or above

  • Memory - 64 MB available

  • Disk - 256 MB available

The memory, disk, and CPU requirements increase with the number of connected clients and the amount of data being synchronized. The best way to size a server is to simulate synchronization in a lower environment and benchmark data loading. However, a rule of thumb for servers is one server-class CPU with 2 GB of memory for every 500 MB/hour of data transfer and 350 clients. Multiple servers can be used as a cluster behind a load balancer to achieve better performance and availability.

1.2. Overview

A node is responsible for synchronizing the data from a database or file system with other nodes in the network using HTTP. Nodes are assigned to one of the node Groups that are configured together as a unit. The node groups are linked together with Group Links to define either a push or pull communication. A pull causes one node to connect with other nodes and request changes that are waiting, while a push causes one node to connect with other nodes when it has changes to send.

Each node is connected to a database with a Java Database Connectivity (JDBC) driver using a connection URL, username, and password. While nodes can be separated across wide area networks, the database a node is connected to should be located nearby on a local area network for the best performance. Using its database connection, a node creates tables as a Data Model for configuration settings and runtime operations. The user populates configuration tables to define the synchronization and the runtime tables capture changes and track activity. The tables to sync can be located in any Catalog and Schema that are accessible from the connection, while the files to sync can be located in any directory that is accessible on the local server.


At startup, SymmetricDS looks for Node Properties Files and starts a node for each file it finds, which allows multiple nodes to run in the same instance and share resources. The property file for a node contains its external ID, node group, registration server URL, and database connection information. The external ID is the name for a node used to identify it from other nodes. One node is configured as the registration server where the master configuration is stored. When a node is started for the first time, it contacts the registration server using a registration process that sends its external ID and node group. In response, the node receives its configuration and a node password that must be sent as authentication during synchronization with other nodes.

1.3. Architecture

Each subsystem in the node is responsible for part of the data movement and is controlled through configuration. Data flows through the system in the following steps:

  1. Capture into a runtime table at the source database

  2. Route for delivery to target nodes and group into batches

  3. Extract and transform into the rows, columns, and values needed for the outgoing batch

  4. Send the outgoing batch to target nodes

  5. Receive the incoming batch at the target node

  6. Transform into the rows, columns, and values needed for the incoming batch

  7. Load data and return an acknowledgment to the source node

Capture

Change Data Capture (CDC) for tables uses database triggers that fire and record changes as comma-separated values into a runtime table called DATA. For file sync, a similar mechanism is used, except changes to the metadata about files are captured. The changes are recorded as insert, update, and delete event types. The subsystem installs and maintains triggers on tables based on the configuration provided by the user, and it can automatically detect schema changes on tables and regenerate triggers.
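Captured changes can be inspected directly in the DATA runtime table. The following query is a minimal sketch, assuming the default sym_ table prefix, that lists the most recent change events:

select data_id, table_name, event_type, row_data, old_data, create_time
from sym_data
order by data_id desc;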

Route

Routers run across new changes to determine which target nodes will receive the data. The user configures which routers to use and the criteria used to match data, creating subsets of rows if needed. Changes are grouped into batches and assigned to target nodes in the DATA_EVENT and OUTGOING_BATCH tables.
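The results of routing can be observed in the runtime tables. As a sketch, assuming the default sym_ table prefix, the following query shows how captured rows were grouped into batches for each target node:

select e.batch_id, b.node_id, b.channel_id, b.status, count(*) as data_rows
from sym_data_event e
join sym_outgoing_batch b on b.batch_id = e.batch_id
group by e.batch_id, b.node_id, b.channel_id, b.status
order by e.batch_id;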

Extract

Changes are extracted from the runtime tables and prepared to be sent as an outgoing batch. If large objects are configured for streaming instead of capture, they are queried from the table. Special event types like "reload" for Initial Loads are also processed.

Transform

If transformations are configured, they operate on the change data either during the extract phase at the source node or the load phase at the target node. The node’s database can be queried to enhance the data. Data is transformed into the tables, rows, columns, and values needed for either the outgoing or incoming batch.

Outgoing

The synchronization sends batches to target nodes to be loaded. Multiple batches can be configured to send during a single synchronization. The status of the batch is updated on the OUTGOING_BATCH table as it processes. An acknowledgment is received from target nodes and recorded on the batch.
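Batches that have not yet been acknowledged as loaded can be monitored with a query such as the following (a sketch, assuming the default sym_ table prefix; 'OK' is the final success status):

select batch_id, node_id, channel_id, status
from sym_outgoing_batch
where status != 'OK'
order by batch_id;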

Incoming

The synchronization receives batches from remote nodes and the data is loaded. The status of the batch is updated on the INCOMING_BATCH table as it processes. The resulting status of the batch is returned to the source node in an acknowledgment.
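Batches that failed to load at the target can be found in the same way (a sketch, assuming the default sym_ table prefix; 'ER' indicates an error):

select batch_id, node_id, channel_id, status
from sym_incoming_batch
where status = 'ER'
order by batch_id;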

1.4. Features

SymmetricDS offers a rich set of features with flexible configuration for large scale deployment in a mixed environment with multiple systems.

  • Data Synchronization - Change data capture for relational databases and file synchronization for file systems can be periodic or near real-time, with an initial load feature to fully populate a node.

  • Central Management - Configure, monitor, and troubleshoot synchronization from a central location where conflicts and errors can be investigated and resolved.

  • Automatic Recovery - Data delivery is durable and low maintenance, withstanding periods of downtime and automatically recovering from a network outage.

  • Secure and Efficient - Communication uses a data protocol designed for low bandwidth networks and streamed over HTTPS for encrypted transfer.

  • Transformation - Manipulate data at multiple points to filter, subset, translate, merge, and enrich the data.

  • Conflict Management - Enforce consistency of two-way synchronization by configuring rules for automatic and manual resolution.

  • Extendable - Scripts and Java code can be configured to handle events, transform data, and create customized behavior.

  • Deployment Options - The software can be installed as a self-contained server that stands alone, deployed to a web application server, or embedded within an application.

1.5. Why SymmetricDS?

SymmetricDS is a feature-rich data synchronization solution that focuses on ease of use, openness, and flexibility. The software encourages interoperability and accessibility for users and developers with the availability of source code, an application programming interface (API), and a data model supported by documentation. Configuration includes a powerful set of options to define node topology, communication direction, transformation of data, and integration with external systems. Through scripts and Java code, the user can also extend functionality with custom behavior. With a central database for setup and runtime information, the user has one place to configure, manage, and troubleshoot synchronization, with changes taking immediate effect across the network.

The trigger-based data capture system is easy to understand and widely supported by database systems. Table synchronization can be set up by users and application developers without requiring a database administrator to modify the server. Triggers are database objects written in a procedural language, so they are open for examination, and they include flexible configuration options for conditions and customization. Some overhead is associated with triggers, but they perform well for online transaction processing applications, and their benefits of flexibility and maintenance outweigh the cost for most scenarios.

Using an architecture based on web server technology, many simultaneous requests can be handled at a central server, with proven deployments in production supporting more than ten thousand client nodes. Large networks of nodes can be grouped into tiers for more control and efficiency, with each group synchronizing data to the next tier. Data loading is durable and reliable because batches are tracked in transactions and faults are retried automatically, making it a low maintenance system.

1.6. License

SymmetricDS is free software licensed under the GNU General Public License (GPL) version 3.0. See http://www.gnu.org/licenses/gpl.html for the full text of the license. This project includes software developed by JumpMind (http://www.jumpmind.com/) and a community of multiple contributors. SymmetricDS is licensed to JumpMind as the copyright holder under one or more Contributor License Agreements. SymmetricDS and the SymmetricDS logos are trademarks of JumpMind.

2. Installation

SymmetricDS at its core is a web application. A SymmetricDS instance runs within the context of a web application container like Jetty or Tomcat, and uses web based protocols like HTTP to communicate with other instances.

An instance has one of the following installation options:

  1. Standalone Installation - SymmetricDS is installed and run as a standalone process using the built-in Jetty web server. This is the simplest and recommended way to install an instance.

  2. Web Archive (WAR) - A SymmetricDS web archive (WAR) file is deployed to an existing web application container that is separately installed, maintained and run.

  3. Embedded - SymmetricDS is embedded within an existing application. In this option, a custom wrapper program is written that calls the SymmetricDS API to synchronize data.

2.1. Standalone Installation

The SymmetricDS standalone ZIP file can be downloaded from Sourceforge. It is installed by unzipping to the installation location.

The sym command line utility starts a standalone instance of SymmetricDS using the built-in Jetty web server to handle requests. The web server can be configured by changing properties in the conf/symmetric-server.properties file.
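For example, the listen ports of the built-in web server can be changed. The snippet below is a sketch; the http.port and https.port property names and values are assumed defaults and should be verified against the conf/symmetric-server.properties file shipped with your version.

# conf/symmetric-server.properties
http.enable=true
http.port=31415
https.enable=false
https.port=31417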

The following example starts the server on the default port from the command line. SymmetricDS will automatically create a node for each Node Properties File configured in the engines directory.

bin/sym

To automatically start SymmetricDS when the machine boots, see Running as a Service.

2.2. Running as a Service

SymmetricDS can be configured to start automatically when the system boots, running as a Windows service or Linux/Unix daemon. A wrapper process starts SymmetricDS and monitors it, so it can be restarted if it runs out of memory or exits unexpectedly. The wrapper writes standard output and standard error to the logs/wrapper.log file.

2.2.1. Running as a Windows Service

To install the service, run the following command as Administrator:

bin\sym_service.bat install

Most configuration changes do not require the service to be re-installed. To uninstall the service, run the following command as Administrator:

bin\sym_service.bat uninstall

To start and stop the service manually, run the following commands as Administrator:

bin\sym_service.bat start
bin\sym_service.bat stop

2.2.2. Running as a Linux/Unix daemon

An init script is written to the system /etc/init.d directory. Symbolic links are created for starting on run levels 2, 3, and 5 and stopping on run levels 0, 1, and 6. To install the script, run the following command as root:

bin/sym_service install

Most configuration changes do not require the service to be re-installed. To uninstall the service, run the following command as root:

bin/sym_service uninstall

To start and stop the service manually, run the following commands:

bin/sym_service start
bin/sym_service stop

2.3. Clustering

A single SymmetricDS node may be deployed across a series of servers that cooperate as a cluster, providing load balancing and high availability.

When using clustering, a hardware load balancer is typically used, but a software load balancer, such as a reverse proxy, can also be used.

For clustered nodes running SymmetricDS 3.8 and later, the recommended approach is to configure the load balancer to use sticky sessions and to ensure that the staging directories for all nodes in the cluster are on a shared network drive. Sticky sessions are needed to support reservation requests, which allow nodes to connect and obtain a reservation before connecting again and pushing their changes. The shared staging directory is needed to support background extraction of the initial load, which is extracted by one node but may be served by different nodes in the cluster. If the start.initial.load.extract.job property is disabled, then shared staging is not required, but the performance of the initial load may be degraded.

For clustered nodes running SymmetricDS 3.7 and earlier, it is recommended to round robin client requests to the cluster and configure the load balancer for stateless connections.

Also, the sync.url (discussed in Node Properties File) SymmetricDS property should be set to the URL of the load balancer.

If the cluster will be running any of the SymmetricDS jobs, then the cluster.lock.enabled property should be set to true. By setting this property to true, SymmetricDS will use a row in the LOCK table as a semaphore to make sure that only one instance at a time runs a job. When a lock is acquired, a row is updated in the lock table with the time of the lock and the server id of the locking job. The lock time is set back to null when the job is finished running. Another instance of SymmetricDS cannot acquire a lock until the locking instance (according to the server id) releases the lock. If an instance is terminated while the lock is still held, an instance with the same server id is allowed to reacquire the lock. If the locking instance remains down, the lock can be broken after a period of time, specified by the cluster.lock.timeout.ms property, has expired. Note that if the job is still running and the lock expires, two jobs could be running at the same time which could cause database deadlocks.

By default, the locking server id is the hostname of the server. If two clustered instances are running on the same server, then the cluster.server.id property may be set to indicate the name that the instance should use for its server id.
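Putting these settings together, the engine properties for a clustered instance might include entries like the following. This is a sketch; the timeout, server id, and URL values are illustrative, not defaults.

cluster.lock.enabled=true
cluster.lock.timeout.ms=1800000
cluster.server.id=app-server-1
sync.url=http://loadbalancer.example.com/sync/corp-000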

When deploying SymmetricDS to an application server like Tomcat or JBoss, no special session clustering needs to be configured for the application server.

2.4. Other Deployment Options

It is recommended that SymmetricDS be installed as a standalone service; however, there are two other deployment options.

2.4.1. Web Archive (WAR)

This option means packaging a WAR file and deploying to your favorite web server, like Apache Tomcat. It’s a little more work, but you can configure the web server to do whatever you need. SymmetricDS can also be embedded in an existing web application, if desired. As a web application archive, a WAR is deployed to an application server, such as Tomcat, Jetty, or JBoss. The structure of the archive will have a web.xml file in the WEB-INF folder, an appropriately configured symmetric.properties file in the WEB-INF/classes folder, and the required JAR files in the WEB-INF/lib folder.

Figure 1. War

A war file can be generated using the standalone installation’s symadmin utility and the create-war subcommand. The command requires the name of the war file to generate. It essentially packages up the web directory, the conf directory and includes an optional properties file. Note that if a properties file is included, it will be copied to WEB-INF/classes/symmetric.properties. This is the same location conf/symmetric.properties would have been copied to. The generated war distribution uses the same web.xml as the standalone deployment.

bin/symadmin -p my-symmetric-ds.properties create-war /some/path/to/symmetric-ds.war

2.4.2. Embedded

This option means you must write a wrapper Java program that runs SymmetricDS. You would probably use Jetty web server, which is also embeddable. You could bring up an embedded database like Derby or H2. You could configure the web server, database, or SymmetricDS to do whatever you needed, but it’s also the most work of the three options discussed thus far.

The deployment model you choose depends on how much flexibility you need versus how easy you want it to be. Both Jetty and Tomcat are excellent, scalable web servers that compete with each other and have great performance. Most people choose either the Standalone or Web Archive with Tomcat 5.5 or 6. Deploying to Tomcat is a good middle-of-the-road decision that requires a little more work for more flexibility.

A Java application with the SymmetricDS Java Archive (JAR) library on its classpath can use the SymmetricWebServer to start the server.

import org.jumpmind.symmetric.SymmetricWebServer;

public class StartSymmetricEngine {

    public static void main(String[] args) throws Exception {

        SymmetricWebServer node = new SymmetricWebServer(
                                   "classpath://my-application.properties", "conf/web_dir");

        // this will create the database, sync triggers, start jobs running
        node.start(8080);

        // this will stop the node
        node.stop();
    }
}

This example starts the SymmetricDS server on port 8080. The configuration properties file, my-application.properties, is packaged in the application to provide properties that override the SymmetricDS default values. The second parameter to the constructor points to the web directory. The default location is web. In this example the web directory is located at conf/web_dir. The web.xml is expected to be found at conf/web_dir/WEB-INF/web.xml.

3. Setup

The first node created is called the Master Node. The master node is where configuration is managed. Other nodes typically register with the master node to get their initial configuration. After nodes are registered, additional configuration changes made at the master node are synchronized to the registered nodes automatically.

All nodes require a properties file that contains identity information and database connection information. SymmetricDS configuration needs to be inserted via a SQL script at the master node.

3.1. Node Properties File

Each node that is deployed to a server is represented by a properties file that allows it to connect to a database and register with a parent node. Properties are configured in a file named xxxxx.properties. It is placed in the engines directory of the SymmetricDS install. The file is usually named according to the engine.name, but it is not a requirement.

To give a node its identity, the following properties are required. Any other properties found in conf/symmetric.properties can be overridden for a specific engine in an engine’s properties file. If the properties are changed in conf/symmetric.properties they will take effect across all engines deployed to the server.

You can use the variable $(hostName) to represent the host name of the machine when defining these properties (for example, external.id=$(hostName)). You can also access the external id, engine name, node group id, sync URL, and registration URL in this manner (for example, engine.name=$(nodeGroupId)-$(externalId)).
You can also use a BSH script for the external id, engine name, node group id, sync URL, and registration URL. Use back ticks to indicate the BSH expression, and note that only one BSH expression is supported for a given property line. The script can be prefixed or suffixed with fixed text. For example, if you wish to base the external id on just a part of the hostname (e.g., a substring of hostName): external.id=store-`import org.apache.commons.lang.StringUtils; return StringUtils.substring(hostName,2,4);`
engine.name

This is an arbitrary name that is used to access a specific engine using an HTTP URL. Each node configured in the engines directory must have a unique engine name. The engine name is also used for the domain name of registered JMX beans.

group.id

The node group that this node is a member of. Synchronization is specified between node groups, which means you only need to specify it once for multiple nodes in the same group.

external.id

The external id for this node has meaning to the user and provides integration into the system where it is deployed. For example, it might be a retail store number or a region number. The external id can be used in expressions for conditional and subset data synchronization. Behind the scenes, each node has a unique sequence number for tracking synchronization events. That makes it possible to assign the same external id to multiple nodes, if desired.

sync.url

The URL where this node can be contacted for synchronization. At startup and during each heartbeat, the node updates its entry in the database with this URL. The sync url is of the format: http://{hostname}:{port}/{webcontext}/sync/{engine.name}

The {webcontext} is blank for a standalone deployment. It will typically be the name of the war file for an application server deployment.

The {engine.name} can be left blank if there is only one engine deployed in a SymmetricDS server.

When a new node is first started, it has no information about synchronizing. It contacts the registration server in order to join the network and receive its configuration. The configuration for all nodes is stored on the registration server, and the URL must be specified in the following property:

registration.url

The URL where this node can connect for registration to receive its configuration. The registration server is part of SymmetricDS and is enabled as part of the deployment. This is typically equal to the value of the sync.url of the registration server.

Note that a registration server node is defined as one whose registration.url is either blank or identical to its sync.url.

For a deployment where the database connection pool should be created using a JDBC driver, set the following properties:

db.driver

The class name of the JDBC driver.

db.url

The JDBC URL used to connect to the database.

db.user

The database username, which is used to log in and to create and update the SymmetricDS tables.

db.password

The password for the database user.

See Startup Parameters for additional parameters that can be specified in the engine properties file.
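Putting the required properties together, a minimal engine properties file for a master (registration server) node might look like the following. This is a sketch; the engine name, group, external id, port, and H2 connection values are illustrative only.

engine.name=corp-000
group.id=corp
external.id=000
sync.url=http://localhost:31415/sync/corp-000
# a blank registration.url indicates this node is the registration server
registration.url=
db.driver=org.h2.Driver
db.url=jdbc:h2:file:./corp000;AUTO_SERVER=TRUE
db.user=symmetric
db.password=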

3.2. Master Node Setup

The majority of the synchronization Configuration is stored in the database, including configuration information that identifies the individual nodes. The master node requires that the node tables be populated with records that represent the node.

A node is represented by four tables:

NODE

Contains basic node information

NODE_IDENTITY

Contains a single row that identifies the current node

NODE_SECURITY

Contains a password needed to authenticate with another node

NODE_HOST

Contains informational data about the node. Updated by SymmetricDS.

When setting up a master node for the first time, you must insert records into two of these tables for the node to start up properly.

The following SQL statements configure a node with a node_id and external_id of "server" that belongs to the "server" node group. The node_group_id must match the group.id in the properties file. The external_id must match the external.id in the properties file.

insert into SYM_NODE (node_id, node_group_id, external_id, sync_enabled, created_at_node_id)
  values ('server', 'server', 'server', 1, 'server');

insert into SYM_NODE_IDENTITY values ('server');

NODE_SECURITY contains a password that is used to authenticate a node. The master node only needs this row if it is going to initiate communication with another node. The following is an example of an insert statement for the registration server. registration_time and initial_load_time are set to indicate that the node does not need to be registered and does not need an initial load.

insert into sym_node_security (node_id,node_password,registration_enabled,registration_time,initial_load_enabled,initial_load_time,created_at_node_id)
 values ('server','5d1c92bbacbe2edb9e1ca5dbb0e481',0,current_timestamp,0,current_timestamp,'server');

3.3. Adding Nodes

Section Add Node talks about creating additional nodes.

4. Configuration

Configuring SymmetricDS is the process of setting up your synchronization scenario.

4.1. Groups

In SymmetricDS, configuration rules are applied to groups of nodes versus individual nodes. A group is a categorization of nodes with similar synchronization needs. For example, in a synchronization scenario where a corporate office database is synchronized with field office databases, two node groups would be created, one for the corporate office database (Corporate), and one for the field office databases (Field_office). In the corporate group, there would be a single node and database. In the field_office group, there would be many nodes and databases, one for each field office. Configuration rules/elements are applied to the node group versus the individual nodes in order to simplify the configuration setup (no need to configure each individual field office node, just how the field office nodes sync with the corporate office node).

Required Fields
Group ID

Unique identifier for the group.

Description

Description of the group that is available through the console.

Example 1. Sample Node Groups
insert into SYM_NODE_GROUP
        (node_group_id, description)
        values ('store', 'A retail store node');

insert into SYM_NODE_GROUP
        (node_group_id, description)
        values ('corp', 'A corporate node');

4.2. Group Links

Group links define at a high level how data moves throughout your synchronization scenario. The group link defines which node groups will synchronize data to other node groups and, within that exchange, which node group will initiate the conversation.

Source Group ID

The source group of the communication link.

Link

Defines how the source and target groups will communicate.

Table 1. Options for Group Links

Push [P]

Indicates that nodes in the source node group will initiate communication over an HTTP PUT and push data to nodes in the target node group.

Wait for Pull [W]

Indicates nodes in the source node group will wait for a node in the target node group to connect via an HTTP GET and allow the nodes in the target node group to pull data from the nodes in the source node group.

Route-only [R]

Route-only indicates that the data isn’t exchanged between nodes in the source and nodes in the target node groups via SymmetricDS. This action type might be useful when using an XML publishing router or an audit table changes router.

Target Group ID

The target group of the communication link.

Sync Configuration

Determines if configuration is also sent through this group link. By default this is checked and configuration will communicate on this path. Some configurations might cause configuration data to loop continuously through the network; as a result, this might need to be unchecked for some links.

Reversible

Allows the communication link to send in the reverse direction if specified on the channel. A push link can be overridden to pull and a pull link can be overridden to push using a setting on the channel.

Example 2. Sample Group Links
insert into SYM_NODE_GROUP_LINK
(source_node_group, target_node_group, data_event_action)
      values ('store', 'corp', 'P');

insert into SYM_NODE_GROUP_LINK
(source_node_group, target_node_group, data_event_action)
      values ('corp', 'store', 'W');

4.3. Routers

Routers ride on top of group links. While a group link specifies that data should be moved from nodes in a source node group to nodes in a target node group, routers define more specifically which captured data from a source node should be sent to which specific nodes in a target node group, all within the context of the node group link.

Router Id

Unique identifier for a specific router

Group Link

The group link used for the source and target node groups of this router

Router Type

The type of router. Standard router types are listed below. Custom routers can be configured as extension points.

Table 2. Router Types (all types listed below are provided with SymmetricDS)

default

A router that sends all captured data to all nodes that belong to the target node group defined in the router. See Default Router

column

A router that compares old or new column values in a captured data row to a constant value or the value of a target node’s external id or node id. See Column Match Router

audit

A router that inserts into an automatically created audit table. It records captured changes to tables that it is linked to. See Audit Table Router

java

A router that executes a Java expression in order to select nodes to route to. The script can use the old and new column values. See Java Router

lookuptable

A router which can be configured to determine routing based on an existing or ancillary table specifically for the purpose of routing data. See Lookup Table Router

subselect

A router that executes a SQL expression against the database to select nodes to route to. This SQL expression can be passed values of old and new column values. See Subselect Router

bsh

A router that executes a Bean Shell script expression in order to select nodes to route to. The script can use the old and new column values. See Beanshell Router

Router Expression

An expression that is specific to the type of router that is configured in router type. See the documentation for each router for more details.

Use Source Catalog/Schema

If set then the source catalog and source schema are sent to the target to be used to find the target table.

Target Catalog

Optional name of catalog where a target table is located. If this field is unspecified, the catalog will be either the default catalog at the target node or the "source catalog name" from the table trigger, depending on how "use source catalog schema" is set for the router. Variables are substituted for $(sourceNodeId), $(sourceExternalId), $(sourceNodeGroupId), $(targetNodeId), $(targetExternalId), $(targetNodeGroupId), and $(none).

Target Schema

Optional name of schema where a target table is located. If this field is unspecified, the schema will be either the default schema at the target node or the "source schema name" from the table trigger, depending on how "use source catalog schema" is set for the router. Variables are substituted for $(sourceNodeId), $(sourceExternalId), $(sourceNodeGroupId), $(targetNodeId), $(targetExternalId), $(targetNodeGroupId), and $(none).

Sync on Update

Flag that indicates that this router should send updated rows from nodes in the source node group to nodes in the target node group.

Sync on Insert

Flag that indicates that this router should send inserted rows from nodes in the source node group to nodes in the target node group.

Sync on Delete

Flag that indicates that this router should send deleted rows from nodes in the source node group to nodes in the target node group.

Target Table

Optional name for a target table. Only use this if the target table name is different than the source.

4.3.1. Router Types

Default Router

The simplest router is a router that sends all the data that is captured by its associated triggers to all the nodes that belong to the target node group defined in the router.

The following SQL statement defines a router that will send data from the 'corp' group to the 'store' group.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, create_time,
        last_update_time) values ('corp-2-store','corp', 'store',
        current_timestamp, current_timestamp);
The following SQL statement maps the 'corp-2-store' router to the item trigger.
insert into SYM_TRIGGER_ROUTER
        (trigger_id, router_id, initial_load_order, create_time,
        last_update_time) values ('item', 'corp-2-store', 1, current_timestamp,
        current_timestamp);
Column Match Router

Sometimes requirements may exist that require data to be routed based on the current value or the old value of a column in the table that is being routed. Column routers are configured by setting the router_type column on the ROUTER table to column and setting the router_expression column to an equality expression that represents the expected value of the column.

The first part of the expression is always the column name. The column name should always be defined in upper case. The upper case column name prefixed by OLD_ can be used for a comparison being done with the old column data value.

The second part of the expression can be a constant value, a token that represents another column, or a token that represents some other SymmetricDS concept. Token values always begin with a colon (:).

  1. Consider a table that needs to be routed to all nodes in the target group only when a status column is set to 'READY TO SEND.'

The following SQL statement will insert a column router to accomplish that.
 insert into SYM_ROUTER (router_id,
                source_node_group_id, target_node_group_id, router_type,
                router_expression, create_time, last_update_time) values
                ('corp-2-store-ok','corp', 'store', 'column', 'STATUS=READY TO SEND',
                current_timestamp, current_timestamp);
  2. Consider a table that needs to be routed to all nodes in the target group only when a status column changes values.

The following example uses OLD_STATUS, where the OLD_ prefix gives access to the old column value.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-status','corp', 'store', 'column', 'STATUS!=:OLD_STATUS',
        current_timestamp, current_timestamp);
Attributes on a NODE can be referenced with the following tokens:
  • :NODE_ID

  • :EXTERNAL_ID

  • :NODE_GROUP_ID

  • :REDIRECT_NODE

  3. Consider a table that needs to be routed to only nodes in the target group whose STORE_ID column matches the external id of a node.

The following SQL statement will insert a column router to accomplish that.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-id','corp', 'store', 'column', 'STORE_ID=:EXTERNAL_ID',
        current_timestamp, current_timestamp);
  4. Consider a table that needs to be routed to a redirect node defined by its external id in the REGISTRATION_REDIRECT table.

The following SQL statement will insert a column router to accomplish that.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-redirect','corp', 'store', 'column',
        'STORE_ID=:REDIRECT_NODE', current_timestamp, current_timestamp);
  5. More than one column may be configured in a router_expression. When more than one column is configured, all matches are added to the list of nodes to route to. The following is an example where the STORE_ID column may contain the STORE_ID to route to or the constant of ALL which indicates that all nodes should receive the update.

The following SQL statement will insert a column router to accomplish that.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-multiple-matches','corp', 'store', 'column',
        'STORE_ID=ALL or STORE_ID=:EXTERNAL_ID', current_timestamp,
        current_timestamp);
  6. The NULL keyword may be used to check if a column is null. If the column is null, then data will be routed to all nodes who qualify for the update. The following is an example where the STORE_ID column is used to route to a set of nodes who have a STORE_ID equal to their EXTERNAL_ID, or to all nodes if the STORE_ID is null.

The following SQL statement will insert a column router to accomplish that.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-multiple-matches','corp', 'store', 'column',
        'STORE_ID=NULL or STORE_ID=:EXTERNAL_ID', current_timestamp,
        current_timestamp);
  7. External data collected as part of the trigger firing (see External Select) can also be used as a virtual column in the router expression.

The following SQL statement will insert a column router to accomplish that.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-multiple-matches','corp', 'store', 'column',
        'EXTERNAL_DATA=:EXTERNAL_ID', current_timestamp,
        current_timestamp);
Audit Table Router

This router audits captured data by recording the change in an audit table that the router creates and keeps up to date. The router creates a table named the same as the table for which data was captured with the suffix of _AUDIT. It will contain all of the same columns as the original table with the same data types, except that each column is nullable and has no default value.

The following parameter must be set to true so that the audit table can be created.
auto.config.database=true
Three extra "AUDIT" columns are added to the table:
AUDIT_ID

the primary key of the table.

AUDIT_TIME

the time at which the change occurred.

AUDIT_EVENT

the DML type that happened to the row.

The following is an example of an audit router
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type, create_time,
        last_update_time) values ('audit_at_corp','corp', 'local', 'audit',
        current_timestamp, current_timestamp);
The audit router must be associated with a node group link of type 'R'. The 'R' stands for 'only routes to' (see Group Links). In the above example, we refer to a 'corp to local' group link. Here, local is a new node_group created for the audit router. No nodes belong to the 'local' node_group. If a trigger linked to an audit router fires on the corp node, a new audit table will be created at the corp node with the new data inserted.
Lookup Table Router

A lookup table may contain the id of the node where data needs to be routed. This could be an existing table or an ancillary table that is added specifically for the purpose of routing data. Lookup table routers are configured by setting the router_type column on the ROUTER table to lookuptable and setting a list of configuration parameters in the router_expression column.

Each of the following configuration parameters are required.
LOOKUP_TABLE

This is the name of the lookup table.

KEY_COLUMN

This is the name of the column on the table that is being routed. It will be used as a key into the lookup table.

LOOKUP_KEY_COLUMN

This is the name of the column that is the key on the lookup table.

EXTERNAL_ID_COLUMN

This is the name of the column that contains the external_id of the node to route to on the lookup table.

ALL_NODES_VALUE

This is an optional parameter that allows you to specify a value for the EXTERNAL_ID_COLUMN that means "send to all nodes".

The lookup table will be read into memory and cached for the duration of a routing pass for a single channel.

Consider a table that needs to be routed to a specific store, but the data in the changing table only contains brand information. In this case, the STORE table may be used as a lookup table.

insert into SYM_ROUTER (router_id,
                source_node_group_id, target_node_group_id, router_type,
                router_expression, create_time, last_update_time) values
                ('corp-2-store-ok','corp', 'store', 'lookuptable', 'LOOKUP_TABLE=STORE
                KEY_COLUMN=BRAND_ID LOOKUP_KEY_COLUMN=BRAND_ID
                EXTERNAL_ID_COLUMN=STORE_ID', current_timestamp, current_timestamp);
Subselect Router

Sometimes routing decisions need to be made based on data that is not in the current row being synchronized. A 'subselect' router can be used in these cases. A 'subselect' is configured with a router expression that is a SQL select statement which returns a result set of the node ids that need to be routed to. Column tokens can be used in the SQL expression and will be replaced with row column data.

The overhead of using this router type is high because the 'subselect' statement runs for each row that is routed. It should not be used for tables that have a lot of rows that are updated. It also has the disadvantage that if the data being relied on to determine the node id has been deleted before routing takes place, then no results would be returned and routing would not happen.

The router expression you specify is appended to the following SQL statement in order to select the node ids:

select c.node_id
from sym_node c
where c.node_group_id=:NODE_GROUP_ID
        and c.sync_enabled=1 and ...

As you can see, you have access to information about the node currently under consideration for routing through the 'c' alias, for example c.external_id. There are two node-related tokens you can use in your expression:

  1. :NODE_GROUP_ID

  2. :EXTERNAL_DATA

Column names representing data for the row in question are prefixed with a colon as well, for example :EMPLOYEE_ID or :OLD_EMPLOYEE_ID. Here, the OLD_ prefix indicates the value before the change in cases where the old data has been captured.

Example 3. Sample Use Case for Subselect Router

For example, consider the case where an Order table and an OrderLineItem table need to be routed to a specific store. The Order table has columns named order_id and STORE_ID. A store node has an external_id that is equal to the STORE_ID on the Order table. OrderLineItem, however, only has a foreign key to its Order, order_id. To route OrderLineItems to the same nodes that the Order will be routed to, we need to reference the master Order record.

There are two possible ways to solve this in SymmetricDS.

  1. Configure a 'subselect' router type (shown below).

  2. Use an external select to capture the data via a trigger for use in a column match router, see External Select.

insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store','corp', 'store', 'subselect', 'c.external_id in (select
        STORE_ID from order where order_id=:ORDER_ID)', current_timestamp,
        current_timestamp);
In this example, the parent row in Order must still exist at the moment of routing for the child rows (OrderLineItem) to route, since the select statement is run when routing occurs, not when the change data is first captured.
Beanshell Router

When more flexibility is needed in the logic to choose the nodes to route to, then a scripted router may be used. The currently available scripting language is Bean Shell. Bean Shell is a Java-like scripting language. Documentation for the Bean Shell scripting language can be found at http://www.beanshell.org.

The router type for a Bean Shell scripted router is 'bsh'. The router expression is a valid Bean Shell script that has access to the variables in Table 3 and returns one of the options in Table 4.

Table 3. Variables available to the script

nodes

Collection of org.jumpmind.symmetric.model.Node objects the router would route to normally.

nodeIds

Collection of node ids that the router would route to normally. You can just return this if you want the bsh router to behave like the default router.

targetNodes

Collection of org.jumpmind.symmetric.model.Node objects to be populated and returned.

engine

The instance of org.jumpmind.symmetric.ISymmetricEngine which has access to SymmetricDS services.

Any Data Column

Data column values are bound to the script evaluation as Java object representations of the column data. The columns are bound using the uppercase names of the columns. For example, if store_id is a column, then STORE_ID is a variable available in the Bean Shell script.

Any Old Values

Old data column values are bound to the script evaluation as Java object representations of the column data. The columns are bound using the uppercase representations that are prefixed with 'OLD_'. For example, if store_id is a column, then OLD_STORE_ID is a variable available in the Bean Shell script representing the value of store_id before the change.

Table 4. Return options

targetNodes

Collection of org.jumpmind.symmetric.model.Node objects that will be routed to.

true

All nodes should be routed

false

No nodes should be routed

The last line of a bsh script is always the return value.
Example 4. Use case using a Bean Shell script where the node_id is a combination of STORE_ID and WORKSTATION_NUMBER, both of which are columns on the table that is being routed.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-bsh','corp', 'store', 'bsh', 'targetNodes.add(STORE_ID +
        "-" + WORKSTATION_NUMBER);', current_timestamp, current_timestamp);

The same could also be accomplished by simply returning the node id.

insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-bsh','corp', 'store', 'bsh', 'STORE_ID +
        "-" + WORKSTATION_NUMBER', current_timestamp, current_timestamp);
Example 5. Use case using a Bean Shell script to synchronize to all nodes if the FLAG column has changed; otherwise, no nodes will be synchronized.
insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-flag-changed','corp', 'store', 'bsh', 'FLAG != null
        && !FLAG.equals(OLD_FLAG)', current_timestamp,
        current_timestamp);
Here we make use of OLD_, which provides access to the old column value.
Example 6. Use case using a Bean Shell script that iterates over each eligible node and checks to see if the trimmed value of the column named STATION equals the external_id.
 insert into SYM_ROUTER (router_id,
        source_node_group_id, target_node_group_id, router_type,
        router_expression, create_time, last_update_time) values
        ('corp-2-store-trimmed-station','corp', 'store', 'bsh', 'for
        (org.jumpmind.symmetric.model.Node node : nodes) { if (STATION != null
        && node.getExternalId().equals(STATION.trim())) {
        targetNodes.add(node.getNodeId()); } }', current_timestamp,
        current_timestamp);

4.4. Channels

Once group links and routers are defined, configuration must be completed to specify which data (tables, file systems, etc.) should be synchronized over those links and routers. The next step in defining which specific data in the database is moved is to define logical groupings for that data. Channels define those logical groupings. As an example, a set of tables that hold customer data might be logically grouped together in a Customer channel. Sales, returns, tenders, etc. (transaction data) might be logically grouped into a transaction channel. A default channel is automatically created that all tables will fall into unless other channels are created and specified. The default channel is called 'default'.

Channels can be disabled, suspended, or scheduled as needed.

Transactions will NOT be preserved across channels, so it is important to set up channels to contain all tables that participate in a given transaction.
Channel ID

Identifier used through the system to identify a given channel.

Processing Order

Numeric value to determine the order in which a channel will be processed. Channels will be processed in ascending order.

Batch Algorithm

Batching is the grouping of data, by channel, to be transferred and committed at the client together.

Default

All changes that happen in a transaction are guaranteed to be batched together. Multiple transactions will be batched and committed together until there is no more data to be sent or the max_batch_size is reached.

Transactional

Batches will map directly to database transactions. If there are many small database transactions, then there will be many batches. The max_batch_size column has no effect.

Nontransactional

Multiple transactions will be batched and committed together until there is no more data to be sent or the max_batch_size is reached. The batch will be cut off at the max_batch_size regardless of whether it is in the middle of a transaction.

Max Batch Size

Specifies the maximum number of data events to process within a batch for this channel.

Max Batch To Send

Specifies the maximum number of batches to send for a given channel during a 'synchronization' between two nodes. A 'synchronization' is equivalent to a push or a pull. For example, if there are 12 batches ready to be sent for a channel and max_batch_to_send is equal to 10, then only the first 10 batches will be sent even though 12 batches are ready.

Max Data To Route

Specifies the maximum number of data rows to route for a channel at a time.

Max KB/s

Specifies the maximum network transfer rate in kilobytes per second. Use zero to indicate unlimited. When throttling the channel, make sure the channel is on its own queue or within a queue of channels that are throttled at the same rate. This is currently only implemented when staging is enabled.

Data Loader Types

Determines how data will be loaded into the target tables. These are used during an initial load or a reverse initial load. Data loaders do not always have to load into the target relational database. They can write to a file, a web service, or any other type of non-relational data source. Data loaders can also use other techniques to increase the performance of data loads into the target relational database.

default

Performs an insert first and if this fails will fall back to an update to load the data.

ftp_localhost

Sends the data in CSV format to a configured FTP location. These locations are set up in {SYM_HOME}/conf/ftp-extensions.xml.

bulk

Assigns the appropriate bulk loader to this channel. Supported bulk loaders include: Microsoft SQL, PostgreSQL, MySQL and Amazon Redshift over S3.

mongodb

MongoDB data loader.

Tables that should be loaded with a particular data loader should be configured to use a channel with that data loader type. Many times, a reload channel will be set to bulk load to increase the performance of an initial load.
Group Link Direction

For a node group link that is reversible, the channel can specify either "push" or "pull" to override the default group link communication. If this field is empty, the default group link communication is used.

Enabled

Indicates whether the channel is enabled or disabled. If a channel is disabled, data is still captured for changes that occur on the source system, but it will not be routed and sent to the target until the channel is re-enabled.

Reload Channel

Indicates whether a channel is available for initial loads and reverse initial loads.

File Sync Channel

Indicates whether a channel is available for file synchronization.

Use Old Data To Route

Indicates if the old data will be included for routing. Routing can then use this data for processing. Defaults to true.

Use Row Data To Route

Indicates if the current data will be included for routing. Routing can then use this data for processing. Defaults to true.

Use Primary Key (PK) Data to Route

Indicates if the primary key data will be included for routing. For example, a store ID might be needed to apply logic before sending to the appropriate target nodes. Defaults to true.

Tables Contain Big Lobs

Indicates whether the channel contains big lobs. Some databases have shortcuts that SymmetricDS can take advantage of if it knows that the lob columns in SYM_DATA aren’t going to contain large lobs. The definition of how large a 'big' lob is varies from database to database.

Example 7. Sample Channels
insert into SYM_CHANNEL (channel_id, processing_order, max_batch_size, max_batch_to_send,
         extract_period_millis, batch_algorithm, enabled, description)
     values ('item', 10, 1000, 10, 0, 'default', 1, 'Item and pricing data');

insert into SYM_CHANNEL (channel_id, processing_order, max_batch_size,
          max_batch_to_send, extract_period_millis, batch_algorithm, enabled, description)
          values ('sale_transaction', 1, 1000, 10, 60000,
          'transactional', 1, 'retail sale transactions from register');
Channel Tips and Tricks
Increase performance by creating designated channels for tables that use LOB data types. For these channels, be sure to check the "Tables Contain Big Lobs" option.

4.5. Table Triggers

SymmetricDS captures synchronization data using database triggers. SymmetricDS' Triggers are defined in the TRIGGER table. Each record is used by SymmetricDS when generating database triggers. Database triggers are only generated when a trigger is associated with a ROUTER whose source_node_group_id matches the node group id of the current node.

When determining whether a data change has occurred or not, by default the triggers will record a change even if the data was updated to the same value(s) they were originally. For example, a data change will be captured if an update of one column in a row updated the value to the same value it already was. There is a global property, trigger.update.capture.changed.data.only.enabled (false by default), that allows you to override this behavior. When set to true, SymmetricDS will only capture a change if the data has truly changed (i.e., when the new column data is not equal to the old column data).
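As a point of reference for the fields described below, a trigger that captures all changes to a single table can be defined with an insert like the following sketch, where the 'item' table and channel are illustrative values:

insert into SYM_TRIGGER (trigger_id, source_table_name, channel_id, last_update_time, create_time)
        values ('item', 'item', 'item', current_timestamp, current_timestamp);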

Trigger Id

Unique identifier for a trigger.

Source Catalog

Optional name for the catalog the configured table is in. If the name includes * then a wildcard match on the table name will be attempted. Wildcard names can include a list of names that are comma separated. The ! symbol may be used to indicate a NOT match condition.

Source Schema

Optional name for the schema a configured table is in. If the name includes * then a wildcard match on the table name will be attempted. Wildcard names can include a list of names that are comma separated. The ! symbol may be used to indicate a NOT match condition.

Source Table

The name of the source table that will have a trigger installed to watch for data changes. See Trigger Wildcards for using wildcards to specify multiple source tables.

Channel

The channel_id of the channel that data changes will flow through.

Sync On Insert

Determines if changes will be captured for inserts.

Sync On Update

Determines if changes will be captured for updates.

Sync On Delete

Determines if changes will be captured for deletes.

Reload Channel Id

The channel_id of the channel that will be used for initial loads.

Sync On Insert Condition

Specify a condition for the insert trigger firing using an expression specific to the database. On most platforms, it is added to an "IF" statement in the trigger text. On SQL-Server it is added to the "WHERE" clause of a query for inserted/deleted logical tables. See Sync Condition Example.

Sync On Update Condition

Specify a condition for the update trigger firing using an expression specific to the database. On most platforms, it is added to an "IF" statement in the trigger text. On SQL-Server it is added to the "WHERE" clause of a query for inserted/deleted logical tables. See Sync Condition Example.

Sync On Delete Condition

Specify a condition for the delete trigger firing using an expression specific to the database. On most platforms, it is added to an "IF" statement in the trigger text. On SQL-Server it is added to the "WHERE" clause of a query for inserted/deleted logical tables. See Sync Condition Example.

Sync Condition Example

Sync Conditions can access both old values and new values of a field/column using "old_" and "new_" respectively. For example, if your column is id and your condition checks the value coming in to be 'test', your condition will be:

 new_id = 'test'
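A full trigger definition that applies this condition to inserts might look like the following sketch (the trigger id, table, channel, and condition are illustrative):

insert into SYM_TRIGGER (trigger_id, source_table_name, channel_id, sync_on_insert_condition,
          last_update_time, create_time)
          values ('item_test_only', 'item', 'item', 'new_id = ''test''',
          current_timestamp, current_timestamp);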
Custom Insert Trigger Text

Specify insert trigger text (SQL) to execute after the SymmetricDS trigger fires. This field is not applicable for H2, HSQLDB 1.x or Apache Derby.

Custom Update Trigger Text

Specify update trigger text (SQL) to execute after the SymmetricDS trigger fires. This field is not applicable for H2, HSQLDB 1.x or Apache Derby.

Custom Delete Trigger Text

Specify delete trigger text (SQL) to execute after the SymmetricDS trigger fires. This field is not applicable for H2, HSQLDB 1.x or Apache Derby.

Sync On Incoming

Whether or not an incoming batch that loads data into this table should cause the triggers to capture data_events. Be careful turning this on, because an update loop is possible.

Stream Lobs

Specifies whether to capture lob data as the trigger is firing or to stream lob columns from the source tables using callbacks during extraction. A value of 1 indicates to stream from the source via callback; a value of 0 indicates that lob data is captured by the trigger.

Capture Lobs

Provides a hint as to whether this trigger will capture big lob data. If set to 1, every effort will be made during data capture in the trigger and during data selection for the initial load to use lob facilities to extract and store data in the database. On Oracle, this may need to be set to 1 to get around 4k concatenation errors during data capture and during initial load.

Capture Old Data

Indicates whether this trigger should capture and send the old data (previous state of the row before the change).

Stream Row

Captures only the primary key when the trigger fires which can reduce the overhead of the trigger on tables with lots of columns. The data will then be queried using the PK values captured when the batch is ready for extraction.

External Select

Specify a SQL select statement that returns a single row, single column result. It will be used in the generated database trigger to populate the EXTERNAL_DATA field on the data table. See External Select.

Excluded Column Names

Specify a comma-delimited list of columns that should not be synchronized from this table.
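For example, to stop capturing two audit columns on an existing trigger (the trigger id and column names are illustrative):

update SYM_TRIGGER set excluded_column_names = 'LAST_MODIFIED_BY,LAST_MODIFIED_TIME',
       last_update_time = current_timestamp
       where trigger_id = 'item';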

Included Column Names

Specify a comma-delimited list of the only columns that should be synchronized from this table.

Sync Key Names

Specify a comma-delimited list of columns that should be used as the key for synchronization operations. By default, if not specified, then the primary key of the table will be used.

Channel Expression

An expression that will be used to capture the channel id in the trigger. This expression will only be used if the channel_id is set to 'dynamic.'

Example 8. Sample Triggers
insert into SYM_TRIGGER (trigger_id, source_table_name,
          channel_id, last_update_time, create_time)
                  values ('item', 'item', 'item', current_timestamp, current_timestamp);
Multiple Triggers On A Table
Note that many databases allow for multiple triggers of the same type to be defined. Each database defines the order in which the triggers fire differently. If you have additional triggers beyond those SymmetricDS installs on your table, please consult your database documentation to determine if there will be issues with the ordering of the triggers.
Capture Changed Data

When determining whether a data change has occurred or not, by default the triggers will record a change even if the data was updated to the same value(s) they were originally. For example, a data change will be captured if an update of one column in a row updated the value to the same value it already was. There is a global property that allows you to override this behavior (defaults to false).

trigger.update.capture.changed.data.only.enabled=false

This property is currently only supported on MySQL, DB2, SQL Server, and Oracle.

4.5.1. Trigger Wildcards

The source table name may contain the asterisk ('*') wildcard character so that one trigger entry can define synchronization for many tables.

Wildcard Rules
  • If multiple wildcard tokens are supplied, they should be delimited with a comma.

  • Tokens are always evaluated from left to right.

  • When a table match is made, the table is either added to or removed from the list of tables. If another trigger already exists for a table, then that table is not included in the wildcard match (the explicitly defined trigger entry takes precedence).

  • System tables and any tables that start with the SymmetricDS table prefix will be excluded.

  • A wildcard token can also start with an exclamation ('!') to indicate a NOT match condition, excluding the matched tables.
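For example, the following sketch defines one trigger entry that captures changes for every table whose name starts with sale_, except the archive table (the table names and channel are illustrative):

insert into SYM_TRIGGER (trigger_id, source_table_name, channel_id, last_update_time, create_time)
          values ('sale_tables', 'sale_*,!sale_archive', 'sale_transaction',
          current_timestamp, current_timestamp);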

4.5.2. External Select

Occasionally, you may find that you need to capture and save away a piece of data present in another table when a trigger is firing. This data is typically needed for the purposes of determining where to 'route' the data to once routing takes place. Each trigger definition contains an optional "external select" field which can be used to specify the data to be captured. Once captured, this data is available during routing in DATA 's external_data field.

For these cases, place a SQL select statement which returns the data item you need for routing in external_select.

The external select SQL must return a single row, single column result.
Example 9. Sample Trigger With External Select SQL that returns STORE_ID based on the ORDER_ID captured in the trigger.

insert into SYM_TRIGGER (trigger_id, source_table_name, channel_id, external_select,
          last_update_time, create_time)
          values ('orderlineitem', 'orderlineitem', 'orderlineitem',
          'select STORE_ID from order where order_id=$(curTriggerValue).$(curColumnPrefix)order_id',
          current_timestamp, current_timestamp);

Table 5. The following variables can be used with the external select

$(curTriggerValue)

Variable to be replaced with the NEW or OLD column alias provided by the trigger context, which is platform specific. For insert and update triggers, the NEW alias is used; for delete triggers, the OLD alias is used. For example, "$(curTriggerValue).COLUMN" becomes ":new.COLUMN" for an insert trigger on Oracle.

$(curColumnPrefix)

Variable to be replaced with the NEW_ or OLD_ column prefix for platforms that don’t support column aliases. This is currently only used by the H2 database. All other platforms will replace the variable with an empty string. For example "$(curColumnPrefix)COLUMN" becomes "NEW_COLUMN" on H2 and "COLUMN" on Oracle.

External select SQL statements should be used carefully as they will cause the trigger to run the additional SQL each time the trigger fires.
Using an external select on the trigger is similar to using the 'subselect' router. The advantage of this approach over the 'subselect' approach is that it guards against the (somewhat unlikely) possibility that the master Order table row might have been deleted before routing has taken place. This external select solution also is a bit more efficient than the 'subselect' approach.

4.5.3. Load Only Triggers

Occasionally the decision of what data to load initially results in additional triggers. These triggers, known as load only triggers, are configured such that they do not capture any data changes. In other words, the sync on insert, sync on update, and sync on delete attributes of the trigger are all set to false.

Load only triggers still participate in the following:
  • Initial Loads

  • Reverse Initial Loads

  • Table Reloads

  • Creation of tables during initial loads

Use cases for load only triggers:
  • To load a read-only lookup table, for example. It could also be used to load a table that needs to be populated with example or default data.

  • Recovery of data for tables that have a single direction of synchronization. For example, a retail store records sales transactions that synchronize in one direction by trickling back to the central office. If the retail store needs to recover all the sales transactions from the central office, they can be sent as part of an initial load from the central office by setting up a load only trigger that "syncs" in that direction.

The following SQL statement sets up a non-syncing dead Trigger that sends the sale_transaction table to the "store" Node Group from the "corp" Node Group during an initial load.

insert into sym_trigger (TRIGGER_ID,SOURCE_CATALOG_NAME,
  SOURCE_SCHEMA_NAME,SOURCE_TABLE_NAME,CHANNEL_ID,
  SYNC_ON_UPDATE,SYNC_ON_INSERT,SYNC_ON_DELETE,
  SYNC_ON_INCOMING_BATCH,NAME_FOR_UPDATE_TRIGGER,
  NAME_FOR_INSERT_TRIGGER,NAME_FOR_DELETE_TRIGGER,
  SYNC_ON_UPDATE_CONDITION,SYNC_ON_INSERT_CONDITION,
  SYNC_ON_DELETE_CONDITION,EXTERNAL_SELECT,
  TX_ID_EXPRESSION,EXCLUDED_COLUMN_NAMES,
  CREATE_TIME,LAST_UPDATE_BY,LAST_UPDATE_TIME)
  values ('SALE_TRANSACTION_DEAD',null,null, 'SALE_TRANSACTION','transaction',
  0,0,0,0,null,null,null,null,null,null,null,null,null,
  current_timestamp,'demo',current_timestamp);

insert into sym_router (ROUTER_ID,TARGET_CATALOG_NAME,TARGET_SCHEMA_NAME,
  TARGET_TABLE_NAME,SOURCE_NODE_GROUP_ID,TARGET_NODE_GROUP_ID,ROUTER_TYPE,
  ROUTER_EXPRESSION,SYNC_ON_UPDATE,SYNC_ON_INSERT,SYNC_ON_DELETE,
  CREATE_TIME,LAST_UPDATE_BY,LAST_UPDATE_TIME)
  values ('CORP_2_STORE',null,null,null, 'corp','store',null,null,1,1,1,
  current_timestamp,'demo',current_timestamp);

insert into sym_trigger_router (TRIGGER_ID,ROUTER_ID,INITIAL_LOAD_ORDER,
  INITIAL_LOAD_SELECT,CREATE_TIME,LAST_UPDATE_BY,LAST_UPDATE_TIME)
  values ('SALE_TRANSACTION_DEAD','CORP_2_STORE',100,null,
  current_timestamp,'demo',current_timestamp);

4.6. Table Routing

The TRIGGER_ROUTER table is used to define which specific combinations of triggers and routers are needed for your configuration. The relationship between triggers and routers is many-to-many, so this table serves as the join table to define which combinations are valid, as well as to define settings available at the trigger-router level of granularity.

Three important controls can be configured for a specific Trigger / Router combination: Enabled, Initial Loads and Ping Back. The parameters for these can be found in the Trigger / Router mapping table, TRIGGER_ROUTER .

Table Trigger

The table trigger to link.

Router

The router to link.

Initial Load Select

SQL used as part of the WHERE clause on a SQL statement during the initial load process to extract data from the source node of the router. If blank, all rows will be selected. If you want no rows to load during the initial load, you can set the expression to 1=0 or set Initial Load Order to a negative number.
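For example, the following sketch links the 'item' trigger from Example 8 to a router and restricts the initial load to a single store (the router id and WHERE condition are illustrative):

insert into SYM_TRIGGER_ROUTER (trigger_id, router_id, initial_load_order, initial_load_select,
          create_time, last_update_time)
          values ('item', 'corp_2_store', 100, 'store_id = ''001''',
          current_timestamp, current_timestamp);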

Initial Load Delete

SQL used as part of the WHERE clause on a SQL statement during the initial load process to delete data on the target node of the router.

Initial Load Delete SQL will only be used if the following parameter is true (default is false).

initial.load.delete.first=true
Initial Load Order

Order sequence of this table when an initial load is sent to a node. If this value is the same for multiple tables, then SymmetricDS will attempt to order the tables according to FK constraints. If this value is set to a negative number, then the table will be excluded from an initial load.

Initial Load Batch Count

Only applicable if the initial load extract job is enabled. The number of batches to split an initial load of a table across. If 0 then a select count(*) will be used to dynamically determine the number of batches based on the max_batch_size of the reload channel.

Enabled

Each individual trigger-router combination can be disabled or enabled if needed. By default, a trigger router is enabled, but if you have a reason you wish to define a trigger router combination prior to it being active, you can set the enabled flag to 0. This will cause the trigger-router mapping to be sent to all nodes, but the trigger-router mapping will not be considered active or enabled for the purposes of capturing data changes or routing.

Ping Back Enabled

SymmetricDS, by default, avoids circular data changes. When a trigger fires as a result of SymmetricDS itself (such as the case when sync on incoming batch is set), it records the originating source node of the data change in source_node_id. During routing, if routing results in sending the data back to the originating source node, the data is not routed by default. If instead you wish to route the data back to the originating node, you can set the ping_back_enabled column for the needed particular trigger / router combination. This will cause the router to "ping" the data back to the originating node when it usually would not.
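For example, to allow changes for an existing trigger / router combination to be routed back to the node that originated them (the ids are illustrative):

update SYM_TRIGGER_ROUTER set ping_back_enabled = 1, last_update_time = current_timestamp
       where trigger_id = 'item' and router_id = 'corp_2_store';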

4.7. File Triggers

In addition to supporting database synchronization, SymmetricDS also supports File Synchronization. Similar to database synchronization, which allows configuring Table Triggers, SymmetricDS also supports setting up File Triggers. A file trigger is equivalent to specifying a directory structure or path that should be "watched" for files that need to be synchronized.

File Trigger Id

Unique identifier for a trigger.

Channel

The channel_id of the channel that data changes will flow through.

Reload Channel Id

The channel_id of the channel that will be used for reloads.

Base Directory

The base directory on the source node that files will be synchronized from.

Recurse

Whether to synchronize child directories.

Include Files

Wildcard-enabled (*), comma-separated list of files to include in synchronization.

Exclude Files

Wildcard-enabled (*), comma-separated list of files to exclude from synchronization.

Sync On Create

Whether to capture and send files when they are created.

Sync On Modified

Whether to capture and send files when they are modified.

Sync On Delete

Whether to capture and send files when they are deleted.

Sync On Ctl File

Combined with sync_on_create, determines whether to capture and send files when a matching control file exists. The control file is a file of the same name with a '.ctl' extension appended to the end.

Delete After Sync

Determines whether to delete the file after it has synced successfully.

Before Copy Script

A beanshell script that is run at the target node right before the file is copied to its destination directory.

After Copy Script

A beanshell script that is run at the target node right after the file is copied to its destination directory.

4.7.1. Operation

Not only is file synchronization configured similarly to database synchronization, but it also operates in a very similar way. The file system is monitored for changes via a background job that tracks the file system changes (this parallels the use of triggers to monitor for changes when synchronizing database changes). When a change is detected it is written to the FILE_SNAPSHOT table. The file snapshot table represents the most recent known state of the monitored files. The file snapshot table has a SymmetricDS database trigger automatically installed on it so that when it is updated the changes are captured by SymmetricDS on an internal channel named filesync.

The changes to FILE_SNAPSHOT are then routed and batched by a file-synchronization-specific router that delegates to the configured router based on the FILE_TRIGGER_ROUTER configuration. The file sync router can make routing decisions based on the column data of the snapshot table, columns which contain attributes of the file like the name, path, size, and last modified time. Both old and new file snapshot data are also available. The router can, for example, parse the path or name of the file and use it as the node id to route to.

Batches of file snapshot changes are stored on the filesync channel in OUTGOING_BATCH. The existing SymmetricDS pull and push jobs ignore the filesync channel. Instead, they are processed by file-synchronization-specific push and pull jobs.

When transferring data, the file sync push and pull jobs build a zip file dynamically based on the batched snapshot data. The zip file contains a directory per batch. The directory name is the batch_id. A sync.bsh Bean Shell script is generated and placed in the root of each batch directory. The Bean Shell script contains the commands to copy or delete files at their file destination from an extracted zip in the staging directory on the target node. The zip file is downloaded in the case of a pull, or, in the case of a push, is uploaded as an HTTP multi-part attachment. Outgoing zip files are written and transferred from the outgoing staging directory. Incoming zip files are staged in the filesync_incoming staging directory by source node id. The filesync_incoming/{node_id} staging directory is cleared out before each subsequent delivery of files.

The acknowledgement of a batch happens the same way it is acknowledged in database synchronization. The client responds with an acknowledgement as part of the response during a file push or pull.

4.7.2. BeanShell Scripts

There are two types of Bean Shell scripts that can be leveraged to customize file synchronization behavior:

Before copy script

This runs on delivery of a file, before it is copied to its target location.

After copy script

This runs on delivery of a file, after it is copied to its target location.

Each of these scripts has access to local variables that can be read or set to affect the behavior of copying files.

targetBaseDir

The preset base directory as configured in file trigger or overwritten in file trigger router. This variable can be set by the before copy script to set a different target directory.

targetFileName

The name of the file that is being synchronized. This variable can be overwritten by the before copy script to rename a file at the target.

targetRelativeDir

The name of a directory relative to the target base directory to which the target file will be copied. The default value of this variable is the relative directory of the source. For example, if the source base directory is /src and the target base directory is /tgt and the file /src/subfolder/1.txt is changed, then the default targetRelativeDir will be subfolder. This variable can be overwritten by the before_copy_script to change the relative directory at the target. In the above example, if the variable is set to blank using the following script, then the target file will be copied to /tgt/1.txt.

targetRelativeDir = "";
processFile

This is a variable that is set to true by default. A custom before copy script may process the file itself and set this variable to false to indicate that the file should NOT be copied to its target location.

sourceFileName

This is the name of the file.

sourceFilePath

This is the path where the file can be found relative to the batch directory.

batchDir

This is the staging directory where the batch has been extracted. The batchDir + sourceFilePath + sourceFileName can be used to locate the extracted file.

engine

This is the bound instance of the ISymmetricEngine that is processing a file. It gives access to all of the APIs available in SymmetricDS.

sourceNodeId

This is a bound variable that represents the nodeId that is the source of the file.

log

This is the bound instance of an org.slf4j.Logger that can be used to log to the SymmetricDS log file.

Example 10. Example of a Before Copy Script
File file = new File(batchDir + "/" + sourceFilePath + "/" + sourceFileName);
if (file.exists()) {
    String path = file.getAbsolutePath();
    cp (path,"/backup/" + sourceFileName);
}

4.8. File Routing

The FILE_TRIGGER_ROUTER table is used to define which specific combinations of file triggers and routers are needed for your configuration. The relationship between file triggers and routers is many-to-many, so this table serves as the join table to define which combinations are valid, as well as to define settings available at the trigger-router level of granularity.

File Triggers

The file trigger to link.

Routers

The router to link.

Target Base Directory

The base directory on the target node that files will be synchronized to.

Conflict Strategy

The strategy to employ when a file has been modified at both the client and the server.

source_wins

The source file will be used when a conflict occurs.

target_wins

The target file will be used when a conflict occurs.

manual

If a conflict occurs the batch will be put in ER (error) status and require manual intervention to resolve the issue.

Initial Enabled

Indicates whether this file trigger should be included in the initial load.

Enabled

Indicates whether this file trigger router is enabled or not.

4.9. Conflicts

Conflict detection is the act of determining if an insert, update or delete is in "conflict" due to the target data row not being consistent with the data at the source prior to the insert/update/delete.

Conflicts are broken into 3 key components in SymmetricDS:
  1. Detection

  2. Resolution

  3. Ping Back

Conflict detection and resolution strategies are configured in the CONFLICT table. They are configured at minimum for a specific NODE_GROUP_LINK . The configuration can also be specific to a CHANNEL and/or table.

Conflict detection is configured in the detect_type and detect_expression columns of CONFLICT . The value for detect_expression depends on the detect_type.

Conflict Id

Unique identifier for a specific conflict detection setting.

Group Link

References a node group link.

Detection Type

Indicates the strategy to use for detecting conflicts during a dml action.

Conflicts are detected while data is being loaded into a target system.
Table 6. Detection Types

USE_PK_DATA

Indicates that only the primary key is used to detect a conflict. If a row exists with the same primary key, then no conflict is detected during an update or a delete. Updates and deletes are resolved using only the primary key columns. If a row already exists during an insert then a conflict has been detected.

USE_CHANGED_DATA

Indicates that the primary key plus any data that has changed on the source system will be used to detect a conflict. If a row exists with the same old values on the target system as they were on the source system for the columns that have changed on the source system, then no conflict is detected during an update or a delete. If a row already exists during an insert then a conflict has been detected.

USE_OLD_DATA

Indicates that all of the old data values are used to detect a conflict. Old data is the data values of the row on the source system prior to the change. If a row exists with the same old values on the target system as they were on the source system, then no conflict is detected during an update or a delete. If a row already exists during an insert then a conflict has been detected.

USE_TIMESTAMP

Indicates that the primary key plus a timestamp column (as configured in detect_expression ) will indicate whether a conflict has occurred. If the target timestamp column is not equal to the old source timestamp column, then a conflict has been detected. If a row already exists during an insert then a conflict has been detected.

USE_VERSION

Indicates that the primary key plus a version column (as configured in detect_expression ) will indicate whether a conflict has occurred. If the target version column is not equal to the old source version column, then a conflict has been detected. If a row already exists during an insert then a conflict has been detected.

Resolution Type

The choice of how to resolve a detected conflict is configured via the resolve type. Depending on the setting, two additional boolean settings may also be configured, namely "resolve row only" and "resolve changes only".

Table 7. Resolution Types

FALLBACK

Indicates that when a conflict is detected the system should automatically apply the changes anyway. If the source operation was an insert, then an update will be attempted. If the source operation was an update and the row does not exist, then an insert will be attempted. If the source operation was a delete and the row does not exist, then the delete will be ignored. The resolve_changes_only flag controls whether all columns will be updated or only columns that have changed will be updated during a fallback operation.

IGNORE

Indicates that when a conflict is detected the system should automatically ignore the incoming change. The resolve_row_only column controls whether the entire batch should be ignored or just the row in conflict.

MANUAL

Indicates that when a conflict is detected the batch will remain in error until manual intervention occurs. A row in error is inserted into the INCOMING_ERROR table. The conflict detection id that detected the conflict is recorded (i.e., the conflict_id value from CONFLICT), along with the old data, new data, and the "current data" (by current data, we mean the unexpected data at the target which doesn’t match the old data as expected) in columns old_data, new_data, and cur_data. In order to resolve, the resolve_data column can be manually filled out which will be used on the next load attempt instead of the original source data. The resolve_ignore flag can also be used to indicate that the row should be ignored on the next load attempt.

NEWER_WINS

Indicates that when a conflict is detected by USE_TIMESTAMP or USE_VERSION, either the source or the target will win based on which side has the newer timestamp or higher version number. The resolve_row_only column controls whether the entire batch should be ignored or just the row in conflict.

Ping Back

For each configured conflict, you also have the ability to control if and how much "resolved" data is sent back to the node whose data change is in conflict. This "ping back" behavior is specified by the following options.

Table 8. Ping Backs

REMAINING_ROWS

The resolved data of the single row in the batch in conflict, along with the entire remainder of the batch, is sent back to the originating node.

SINGLE_ROW

The resolved data of the single row in the batch that caused the conflict is sent back to the originating node.

OFF

No data is sent back to the originating node, even if the resolved data doesn’t match the data the node sent.

Detection Expression

An expression that provides additional information about the detection mechanism. If the detection mechanism is use_timestamp or use_version then this expression will be the name of the timestamp or version column.

The detect_expression can be used to exclude certain column names from being used. In order to exclude column1 and column2, the expression would be:
excluded_column_names=column1,column2
Resolve Changes Only

Indicates that when applying changes during an update, only data that has changed should be applied. Otherwise, all the columns will be updated. This only applies to updates.

Resolve Row Only

When 'resolve row only' is set to true, the system will ignore only the rows in conflict. When 'resolve row only' is set to false, the system will ignore the entire batch. This applies to a resolve type of 'ignore'.

Channel

Optional channel that this setting will be applied to.

Target Catalog

Optional database catalog that the target table belongs to. Only use this if the target table is not in the default catalog.

Target Schema

Optional database schema that the target table belongs to. Only use this if the target table is not in the default schema.

Target Table

Optional database table that this setting will apply to. If left blank, the setting will be for any table in the channel (if set) and in the specified node group link.

Be aware that conflict detection will not detect changes to binary columns in the case where use_stream_lobs is true in the trigger for the table. In addition, some databases do not allow comparisons of binary columns whether use_stream_lobs is true or not.
Some platforms do not support comparisons of binary columns. Conflicts in binary column values will not be detected on the following platforms: DB2, DERBY, ORACLE, and SQLSERVER.
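Putting these settings together, a conflict entry that detects conflicts with a timestamp column and lets the newer row win might be configured as in the following sketch (the conflict id, node groups, table, and timestamp column name are illustrative):

insert into SYM_CONFLICT (conflict_id, source_node_group_id, target_node_group_id,
        target_table_name, detect_type, detect_expression, resolve_type, ping_back,
        resolve_changes_only, resolve_row_only, create_time, last_update_by, last_update_time)
        values ('item_price_newer_wins', 'store', 'corp',
        'ITEM_SELLING_PRICE', 'USE_TIMESTAMP', 'LAST_UPDATE_TIME', 'NEWER_WINS', 'OFF',
        0, 1, current_timestamp, 'Documentation', current_timestamp);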

4.10. Transforms

Transforms allow you to manipulate data on a source node or on a target node, as the data is being loaded or extracted.

The transform source table must be configured for synchronization through a linked trigger (see Table Triggers).

The source trigger creates the synchronization data, while the transformation configuration decides what to do with the synchronization data as it is either being extracted from the source or loaded into the target. You have the flexibility of defining different transformation behavior depending on whether the source change that triggered the synchronization was an Insert, Update, or Delete. In the case of Delete, you even have options on what exactly to do on the target side, be it a delete of a row, setting columns to specific values, or absolutely nothing at all.

SymmetricDS stores its transformation configuration in two configuration tables, TRANSFORM_TABLE and TRANSFORM_COLUMN. Defining a transformation involves configuration in both tables, with the first table defining which source and destination tables are involved, and the second defining the columns involved in the transformation and the behavior of the data for those columns. We will explain the various options available in both tables and the various pre-defined transformation types.

To define a transformation, you will first define the source table and target table that applies to a particular transformation. The source and target tables, along with a unique identifier (the transform_id column) are defined in TRANSFORM_TABLE . In addition, you will specify the source_node_group_id and target_node_group_id to which the transform will apply, along with whether the transform should occur on the Extract step or the Load step (transform_point).

Transform Id

Unique identifier of a specific transform.

Group Link

The group link defining which direction the transform will process.

Transform Point

Where this transform will occur. The options include:

Table 9. Transform Points

EXTRACT

The transform will execute while data is being extracted from the source. This means the transform will have access to the source’s database.

LOAD

The transform will execute while data is being loaded into the target. This means the transform will have access to the target’s database.

Column Policy

Indicates whether unspecified columns are passed through or if all columns must be explicitly defined. The options include:

SPECIFIED

Indicates that only the transform columns that are defined will be the ones that end up as part of the transformation.

IMPLIED

Indicates that if not specified, then columns from the source are passed through to the target. This is useful if you just want to map a table from one name to another or from one schema to another. It is also useful if you want to transform a table, but also want to pass it through. You would define an implied transform from the source to the target and would not have to configure each column.


Source Catalog

Name of the catalog of the configured source table. This should only be set if Use Source Catalog/Schema or Target Catalog are set on the Router.

Source Schema

Name of the schema for the configured source table. This should only be set if Use Source Catalog/Schema or Target Schema are set on the Router.

Source Table

The name of the source table that will be transformed.

Target Catalog

Optional name for the catalog the target table is in. Only use this if the target table is not in the default catalog.

Target Schema

Optional name of the schema the target table is in. Only use this if the target table is not in the default schema.

Target Table

The name of the target table.

Update First

This option overrides the default behavior for an Insert operation. Instead of attempting the Insert first, SymmetricDS will always perform an Update first and then fall back to an Insert if that fails. Note that, by default, fall back logic always applies for Inserts and Updates. Here, all you are specifying is whether to always do an Update first, which can have performance benefits under certain situations you may run into (see Operation Change).

Delete Action

An action to take upon delete of a row.

Table 10. Delete Actions

DEL_ROW

The delete results in a delete of the row as specified by the pk columns defined in the transformation configuration.

UPDATE_COL

The delete results in an update operation (see Operation Change) on the target which updates the specific rows and columns based on the defined transformation.

NONE

The delete results in no target changes.

Update Action

An action to take upon update of a row.

UPD_ROW

The update performs normally.

INS_ROW

The update is transformed into an insert instead.

DEL_ROW

The update is transformed into a delete instead.

NONE

The update is ignored and no changes are made.

Transform Order

For a single source operation that is mapped to a transformation, there could be more than one target operation that takes place. You may control the order in which the target operations are applied through a configuration parameter defined for each source-target table combination. This might be important, for example, if the foreign key relationships on the target tables require you to execute the transformations in a particular order.

Example 11. Some Transform Use Cases
  • Copy a column from a source table to two (or more) target table columns,

  • Merge columns from two or more source tables into a single row in a target table,

  • Insert constants in columns in target tables based on source data synchronizations,

  • Insert multiple rows of data into a single target table based on one change in a source table,

  • Apply a Bean Shell script to achieve a custom transform when loading into the target database.

You must define columns for the transformation that are sufficient to fill in any primary key or other required data in the target table.
Example 12. Transform Example
insert into SYM_TRANSFORM_TABLE (
        transform_id, source_node_group_id, target_node_group_id, transform_point, source_table_name,
        target_table_name, update_action, delete_action, transform_order, column_policy, update_first,
        last_update_by, last_update_time, create_time
) values (
        'itemSellingPriceTransform', 'store', 'corp', 'EXTRACT', 'ITEM_SELLING_PRICE',
        'ITEM_SELLING_PRICE', 'UPD_ROW', 'DEL_ROW', 1, 'IMPLIED', 1,
        'Documentation', current_timestamp, current_timestamp
);

4.10.1. Operation Change

By default, the "source operation" or "source DML type" (i.e., an insert, a delete, or an update) that initiated the transform execution will be the same operation applied to the target. There are two ways you can override this behavior.

Table 11. Operation Changes
Source Operation: INSERT
Target Operation: UPDATE, followed by INSERT if the update was unsuccessful
Setting: Update First

Source Operation: DELETE
Target Operation: UPDATE
Setting: Delete Action is set to UPDATE_COL

Update First
insert into SYM_TRANSFORM_TABLE (
        transform_id, source_node_group_id, target_node_group_id, transform_point, source_table_name,
        target_table_name, update_action, delete_action, transform_order, column_policy, update_first,
        last_update_time, create_time
) values (
        'update-first', 'store', 'corp', 'EXTRACT', 'ITEM_SELLING_PRICE',
        'ITEM_SELLING_PRICE', 'UPD_ROW', 'DEL_ROW', 1, 'IMPLIED', 1,
        current_timestamp, current_timestamp
);
Delete Action
insert into SYM_TRANSFORM_TABLE (
        transform_id, source_node_group_id, target_node_group_id, transform_point, source_table_name,
        target_table_name, update_action, delete_action, transform_order, column_policy, update_first,
        last_update_time, create_time
) values (
        'delete-action-update-col', 'store', 'corp', 'EXTRACT', 'ITEM_SELLING_PRICE',
        'ITEM_SELLING_PRICE', 'UPD_ROW', 'UPDATE_COL', 2, 'IMPLIED', 0,
        current_timestamp, current_timestamp
);

4.10.2. Columns

Transforms are not complete until the columns involved in the transformation have been defined. Typically there will be several columns defined for each transform, each of which will define a source column and a target column.

PK

Indicates that this mapping is used to define the "primary key" for identifying the target row(s) (which may or may not be the true primary key of the target table). This is used to define the "where" clause when an Update or Delete on the target is occurring.

Unless the column policy is "IMPLIED" at least one row marked as a pk should be present for each transform_id.
Source

The source column name to be transformed.

Target

The target column name to be transformed.

Transform On

Defines whether this entry applies to source operations of Insert, Update, Delete, or All.

Table 12. Transform On Supported Values

I

Insert

U

Update

D

Delete

*

All

Type

The name of a specific type of transform; the default type is "copy". See Transform Types for more information.

Expression

An expression that is specific to the type of transform that is configured in transform_type. See Transform Types for more information.

Order

In the event there is more than one column to transform, this defines the relative order in which the transformations are applied.

4.10.3. Transform Types

There are several pre-defined transform types available in SymmetricDS. Additional ones can be defined by creating and configuring an extension point which implements the IColumnTransform interface. The pre-defined transform types include the following:

Copy Transform

This transformation type copies the source column value to the target column. This is the default behavior.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'ITEM_ID', 'ITEM_ID', 1,
        'copy', '', 1, current_timestamp, 'Documentation',
        current_timestamp
);
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'STORE_ID', 'STORE_ID', 1,
        'copy', '', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Remove Transform

This transformation type excludes the source column. This transform type is only valid for a table transformation type of 'IMPLIED' where all the columns from the source are automatically copied to the target.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', '', 'COST', 1,
        'remove', '', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Constant Transform

This transformation type allows you to map a constant value to the given target column. The constant itself is placed in transform expression.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'PRICE', 'PRICE', 0,
        'const', '10', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Variable Transform

This transformation type allows you to map a built-in dynamic variable to the given target column. The variable name is placed in transform expression. The following variables are available:

Table 13. Variables

system_date

current system date

system_timestamp

current system date and time using default timezone

system_timestamp_utc

current system date and time using UTC timezone

source_node_id

node id of the source

target_node_id

node id of the target

null

null value

old_column_value

column’s old value prior to the DML operation.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'STORE_ID', 'STORE_ID', 0,
        'variable', 'source_node_id', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Additive Transform

This transformation type is used for numeric data. It computes the change between the old and new values on the source and then adds the change to the existing value in the target column. That is, target = target + multiplier * (source_new - source_old), where multiplier is a constant found in the transform expression (default is 1 if not specified).

Example 13. Additive Transform Example

If the source column changed from a 2 to a 4, the target column is currently 10, and the multiplier is 3, the effect of the transform will be to change the target column to a value of 16 ( 10+3*(4-2) ⇒ 16 ).

In the case of deletes, the new column value is considered 0 for the purposes of the calculation.
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'PRICE', 'PRICE', 0,
        'additive', '3', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Substring Transform

This transformation computes a substring of the source column data and uses the substring as the target column value. The transform expression can be a single integer ( n , the beginning index), or a pair of comma-separated integers ( n,m - the beginning and ending index). The transform behaves as the Java substring function would using the specified values in transform expression.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'STORE_ID', 'STORE_ID', 0,
        'substring', '0,5', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Left Transform

This transform copies the left most number of characters specified.
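A minimal sketch, assuming 'left' is the registered transform type name and that the transform expression holds the number of characters to keep:

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'STORE_ID', 'STORE_ID', 0,
        'left', '5', 1, current_timestamp, 'Documentation',
        current_timestamp
);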

BLeft Transform

This transform copies the left most number of bytes specified.

Lookup Transform

This transformation determines the target column value by using a query, contained in transform expression, to look up the value in another table. The query must return a single row, and the first column of the query is used as the value. Your query references source column names by prefixing them with a colon (e.g., :MY_COLUMN). Additionally, you can reference old values with :OLD_COLUMN and previously transformed columns (see transform order) with :TRM_COLUMN.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'lookup', 'select max(price) from sale_return_line_item
        where item_id = :ITEM_ID', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Multiply Transform

This transformation allows for the creation of multiple rows in the target table based on the transform expression. This transform type can only be used on a primary key column. The transform expression is a SQL statement, similar to the lookup transform, except it can return multiple rows that result in multiple rows for the target table. The first column of the query is used as the value for the target column. The query can reference source column names by prefixing them with a colon (e.g., :MY_COLUMN).
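A sketch of a multiply transform, assuming 'multiply' is the registered type name; the query is illustrative and returns one target row per store that carries the item:

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'STORE_ID', 'STORE_ID', 1,
        'multiply', 'select store_id from store_item where item_id = :ITEM_ID', 1,
        current_timestamp, 'Documentation', current_timestamp
);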

BeanShell Script Transform

This transformation allows you to provide a BeanShell script in the transform expression and executes the script at the time of transformation. Beanshell transforms can return either a String value or an instance of NewAndOldValue. Some variables are provided to the script:

Table 14. Variables

COLUMN_NAME

The variable name is the source column name in uppercase of the row being changed (replace COLUMN_NAME with your column)

currentValue

The value of the current source column

oldValue

The old value of the source column for an updated row

sqlTemplate

a org.jumpmind.db.sql.ISqlTemplate object for querying or updating the database

channelId

a reference to the channel on which the transformation is happening

sourceNode

a org.jumpmind.symmetric.model.Node object that represents the node from where the data came

targetNode

a org.jumpmind.symmetric.model.Node object that represents the node where the data is being loaded.

Example 14. Transform Expression Example Returning a String
if (currentValue > oldValue) {
	return currentValue * .9;
} else {
	return PRICE;
}
Example 15. Transform Expression Example Returning a NewAndOldValue object
if (currentValue != null && currentValue.length() == 0) {
	return new org.jumpmind.symmetric.io.data.transform.NewAndOldValue(null, oldValue);
} else {
	return currentValue;
}
Example 16. Transform Expression Example Accessing Old/New Values for the Additional Column 'path'
String newFilePath = PATH;
String oldFilePath = null;
if (transformedData.getOldSourceValues() != null) {
    oldFilePath = transformedData.getOldSourceValues().get("path");
}
if (oldFilePath == null) {
    return newFilePath;
} else {
    return oldFilePath;
}
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'bsh', 'if (currentValue > oldValue) { return currentValue * .9 } else { return PRICE }', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Identity Transform

This transformation allows you to insert into an identity column by letting the database compute a new identity, instead of loading an explicit value from a source database. This transform is needed on databases like SQL-Server and Sybase, which have an INSERT_IDENTITY option that is normally ON for normal data sync. By using the identity transform, the INSERT_IDENTITY is set to OFF, so the next value is generated by the database.

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'ITEM_ID', 'ITEM_ID', 0,
        'identity', '', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Mathematical Transform

This transformation allows you to perform mathematical equations in the transform expression. Some variables are provided to the script:

#{COLUMN_NAME}

A variable for a source column in the row, where the variable name is the column name in uppercase (replace COLUMN_NAME with your column name).

#{currentValue}

The value of the current source column

#{oldValue}

The old value of the source column for an updated row.

Example 17. Transform Expression Example
#{currentValue} - #{oldValue} * #{PRICE}
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'math', '#{currentValue} - #{oldValue} * #{PRICE}', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Copy If Changed

This transformation will copy the value to the target column if the source value has changed. More specifically, the copy will occur if the old value of the source does not equal the new value.

Table 15. Target Expression Options

IgnoreColumn

If old and new values are equal, the COLUMN will be ignored

{empty string}

If old and new values are equal, the ROW will be ignored

When old and new values are equal, ignore just the column
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'copyIfChanged', 'IgnoreColumn', 1, current_timestamp, 'Documentation',
        current_timestamp
);
When old and new values are equal, ignore the entire row
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'copyIfChanged', '', 2, current_timestamp, 'Documentation',
        current_timestamp
);
Value Map Transform

This transformation allows for simple value substitutions through use of the transform expression. The transform expression should consist of a space separated list of value pairs of the format sourceValue=TargetValue. The column value is used to locate the correct sourceValue, and the transform will change the value into the corresponding targetValue. A sourceValue of * can be used to represent a default target value in the event that the sourceValue is not found. Otherwise, if no default value is found, the result will be null.

Example 18. Value Map Examples
With a transform expression of s1=t1 s2=t2 s3=t3 *=t4:

  • a source value of s1 results in a target value of t1

  • a source value of s2 results in a target value of t2

  • a source value of s3 results in a target value of t3

  • a source value of s4 results in a target value of t4 (the * default)

  • a source value of s5 results in a target value of t4 (the * default)

  • a source value of null results in a target value of t4 (the * default)

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'valueMap', 's1=t1 s2=t2 s3=t3 *=t4', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Clarion Date Time

Convert a Clarion date column with optional time column into a timestamp. Clarion dates are stored as the number of days since December 28, 1800, while Clarion times are stored as hundredths of a second since midnight, plus one. Use a source column of the Clarion date and a target column of the timestamp. If the Clarion time exists in a separate column it can optionally be provided through the transform expression to be included in the target timestamp column.

Columns To Rows

Convert column values from a single source row into a row per column value at the target. Two column mappings are needed to complete the work:

columnsToRowsKey

Maps which source column is used

column1=key1,column2=key2
columnsToRowsValue

Maps the value

changesOnly=true

Convert only rows when the old and new values have changed

ignoreNulls=true

Convert only rows that are not null

Example 19. Example

"fieldid" mapped as "columnsToRowsKey" with expression of "user1=1,user2=2" and column "color" mapped as "columnsToRowsValue" would convert a row with columns named "user1" and "user2" containing values "red" and "blue" into two rows with columns "fieldid" and "color" containing a row of "1" and "red" and a row of "2" and "blue".

isEmpty Transform

This transformation checks to see if a string is null or zero length. If it is empty the replacement value will be used. If no value is provided null will be used as a default replacement for empty values.

isBlank Transform

This transformation checks to see if a string is null or zero length after trimming white spaces. If it is blank the replacement value will be used. If no value is provided null will be used as a default replacement for blank values.

Null Value Transform

This transformation checks to see if the source value is null and if so replaces it with the provided value.
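A sketch of an isEmpty transform that substitutes a default when the source value is empty, assuming 'isEmpty' is the registered type name and that the transform expression holds the replacement value:

insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'itemSellingPriceTransform', '*', 'COST', 'COST', 0,
        'isEmpty', '0', 1, current_timestamp, 'Documentation',
        current_timestamp
);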

4.10.4. Virtual Columns

Transforms provide the ability to create "virtual columns" which can pass data between nodes for use by other SymmetricDS processes.

Use cases for virtual columns
  1. Extract transform adds virtual column to be processed by a target load transform.

  2. Extract transform adds virtual column to be processed by a target load filter.

  3. Extract transform adds virtual column to be processed by a source router.

Example 20. Example of an extract transform passing a virtual column to a target load transform
Create two transforms, one for extract and one for target using different group links
insert into SYM_TRANSFORM_TABLE (
        transform_id, source_node_group_id, target_node_group_id, transform_point, source_table_name,
        target_table_name, update_action, delete_action, transform_order, column_policy, update_first,
        last_update_by, last_update_time, create_time
) values (
        'extractStoreItemSellingPriceTransform', 'store', 'corp', 'EXTRACT', 'ITEM_SELLING_PRICE',
        'ITEM_SELLING_PRICE', 'UPD_ROW', 'DEL_ROW', 1, 'IMPLIED', 0,
        'Documentation', current_timestamp, current_timestamp
);
insert into SYM_TRANSFORM_TABLE (
        transform_id, source_node_group_id, target_node_group_id, transform_point, source_table_name,
        target_table_name, update_action, delete_action, transform_order, column_policy, update_first,
        last_update_by, last_update_time, create_time
) values (
        'loadCorpItemSellingPriceTransform', 'corp', 'store', 'LOAD', 'ITEM_SELLING_PRICE',
        'ITEM_SELLING_PRICE', 'UPD_ROW', 'DEL_ROW', 1, 'IMPLIED', 0,
        'Documentation', current_timestamp, current_timestamp
);
Create lookup transform for the extract transform to create a new virtual column to be sent to target.
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'extractStoreItemSellingPriceTransform', '*', 'VIRTUAL_COL', 'COST', 0,
        'lookup', 'select max(price) from sale_return_line_item
        where item_id = :ITEM_ID', 1, current_timestamp, 'Documentation',
        current_timestamp
);
Create copy transform for the load transform to populate the cost column from the virtual column that was sent over.
insert into SYM_TRANSFORM_COLUMN (
        transform_id, include_on, target_column_name, source_column_name, pk,
        transform_type, transform_expression, transform_order, last_update_time,
        last_update_by, create_time
) values (
        'loadCorpItemSellingPriceTransform', '*', 'COST', 'VIRTUAL_COL', 0,
        'copy', '', 1, current_timestamp, 'Documentation',
        current_timestamp
);

4.11. Load Filters

Load Filters are a way to take a specific action when a row of data is loaded by SymmetricDS at a destination database node.

Load filters run for each row of data being loaded.
Filter Id

The unique identifier for the load filter

Group Link

The group link for which the load filter will be applied.

Type

The type of load filter. BeanShell ('BSH'), Java ('Java'), and SQL ('SQL') are supported.

Target Table

The table on the target for which the load filter will execute when changes occur.

Use the wildcard * to specify all tables configured through the group link. Partial table names in conjunction with a wildcard are NOT supported. If the wildcard is used it should be the only value.
Filter Order

The order in which load filters should execute if there are multiple scripts pertaining to the same source and target data.

Filter On Update

Determines whether the load filter takes action (executes) on a database update statement.

Filter On Insert

Determines whether the load filter takes action (executes) on a database insert statement.

Filter On Delete

Determines whether the load filter takes action (executes) on a database delete statement.

Fail On Error

Whether the batch should fail if the filter fails.

Target Catalog

The name of the target catalog for which you would like to watch for changes.

Target Schema

The name of the target schema for which you would like to watch for changes.

4.11.1. Load Filter Scripts

Load filters are based on the execution of a script. A script can be set to execute at six different points, and a script can be provided for one or more of these execution points.

Return Values
  • Return true to load the row of data.

  • Return false to not load the row of data.

Available Load Filter Scripts
Before Write Script

The script to execute before the database write occurs.

After Write Script

The script to execute after the database write occurs.

Batch Complete Script

The script to execute after the entire batch completes.

Batch Commit Script

The script to execute after the entire batch is committed.

Batch Rollback Script

The script to execute if the batch rolls back.

Handle Error Script

A script to execute if data cannot be processed.

Table 16. Variables available within scripts

Variable         BSH  SQL  JAVA  Description
engine           X               The Symmetric engine object.
COLUMN_NAME      X    X          The source values for the row being inserted, updated or deleted.
OLD_COLUMN_NAME  X    X          The old values for the row being inserted, updated or deleted.
context          X         X     The data context object for the data being inserted, updated or deleted.
table            X         X     The table object for the table being inserted, updated or deleted.
data             X         X     The CsvData object for the data change.
error            X         X     java.lang.Exception

Example 21. Example of simple load filter
insert into sym_load_filter
        (LOAD_FILTER_ID, LOAD_FILTER_TYPE, SOURCE_NODE_GROUP_ID,
        TARGET_NODE_GROUP_ID, TARGET_CATALOG_NAME, TARGET_SCHEMA_NAME,
        TARGET_TABLE_NAME, FILTER_ON_UPDATE, FILTER_ON_INSERT, FILTER_ON_DELETE,
        BEFORE_WRITE_SCRIPT, AFTER_WRITE_SCRIPT, BATCH_COMPLETE_SCRIPT,
        BATCH_COMMIT_SCRIPT, BATCH_ROLLBACK_SCRIPT, HANDLE_ERROR_SCRIPT,
        CREATE_TIME, LAST_UPDATE_BY, LAST_UPDATE_TIME, LOAD_FILTER_ORDER,
        FAIL_ON_ERROR) values
        ('SampleFilter','BSH','Client','Server',NULL,NULL,
        'ITEM_SELLING_PRICE',1,1,1,'
        if (OLD_COST > COST) {
                // row will not be loaded
                return false;
        }
        // row will be loaded
        return true;
        ',
        null,null,null,null,null,
        sysdate,'Documentation',sysdate,1,1);
Example 22. Example load filter to send email on error
insert into sym_load_filter
        (LOAD_FILTER_ID, LOAD_FILTER_TYPE, SOURCE_NODE_GROUP_ID,
        TARGET_NODE_GROUP_ID, TARGET_CATALOG_NAME, TARGET_SCHEMA_NAME,
        TARGET_TABLE_NAME, FILTER_ON_UPDATE, FILTER_ON_INSERT, FILTER_ON_DELETE,
        BEFORE_WRITE_SCRIPT, AFTER_WRITE_SCRIPT, BATCH_COMPLETE_SCRIPT,
        BATCH_COMMIT_SCRIPT, BATCH_ROLLBACK_SCRIPT, HANDLE_ERROR_SCRIPT,
        CREATE_TIME, LAST_UPDATE_BY, LAST_UPDATE_TIME, LOAD_FILTER_ORDER,
        FAIL_ON_ERROR) values
        ('EmailErrorFilter','BSH','Client','Server',NULL,NULL,
        '*',1,1,1,null,
        null,null,null,null,'
        authListener = new javax.mail.Authenticator() {
        protected javax.mail.PasswordAuthentication getPasswordAuthentication() {
            return new javax.mail.PasswordAuthentication(engine.getParameterService().getString("mail.smtp.username"),
               engine.getParameterService().getString("mail.smtp.password"));
          }
        };

        if (bsh.shared.mailMap == void) {
          bsh.shared.mailMap = new HashMap();
        }

        String batchId = context.getBatch().getSourceNodeBatchId();
        String targetNodeId = context.getBatch().getTargetNodeId();
        if (!bsh.shared.mailMap.containsKey(batchId)) {
          bsh.shared.mailMap.put(batchId, Boolean.TRUE);
          javax.mail.Session session = javax.mail.Session.getInstance
            (engine.getParameterService().getAllParameters(), authListener);
          javax.mail.internet.MimeMessage msg = new
            javax.mail.internet.MimeMessage(session);
          msg.setFrom(new javax.mail.internet.InternetAddress
            (engine.getParameterService().getString("mail.smtp.from")));
          msg.setRecipients(javax.mail.Message.RecipientType.TO,
            engine.getParameterService().getString("mail.smtp.to"));
          msg.setSubject("SymmetricDS - batch " + batchId + " is in error at node " + targetNodeId);
          msg.setSentDate(new java.util.Date());
          msg.setText(org.apache.commons.lang.exception.ExceptionUtils.
            getFullStackTrace(error));
          javax.mail.Transport.send(msg);

        }',
        sysdate,'Documentation',sysdate,1,1);

4.11.2. Custom Load Filters

Custom load filters can be created by implementing the IDatabaseWriterFilter interface; see IDatabaseWriterFilter for more information.

4.12. Grouplets

Grouplets are user-defined subgroups of nodes, identified by each node’s external id. They are used to enable or disable synchronization at a finer-grained level than the node group. Grouplets are typically used for piloting, or for rolling out partial synchronization functionality to a smaller set of nodes in a large network of nodes.

Group Id

Identifier used through the system to identify a given Grouplet.

Link Policy

Policy in which to link the Grouplet, inclusive or exclusive.

Description

Description of the Grouplet that is available through the console.
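A grouplet itself can be defined by inserting into the GROUPLET table. The following is a minimal sketch that assumes columns named grouplet_id, grouplet_link_policy, and description, with 'I' (inclusive) or 'E' (exclusive) as the link policy values; the grouplet id shown is only an example, and the column names and policy codes should be verified against the schema for your version.

insert into SYM_GROUPLET (grouplet_id, grouplet_link_policy, description,
        create_time, last_update_by, last_update_time)
values ('pilot-stores', 'I', 'Stores piloting the new synchronization configuration',
        current_timestamp, 'Documentation', current_timestamp);

Nodes are then attached to the grouplet by their external id, and the grouplet is associated with the table routing entries it should affect.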

4.13. Extensions

Extensions are custom code written to a plug-in interface, which allows them to run inside the engine and change its default behavior. Saving extension code in the configuration has the advantage of dynamically running without deployment or restarting. Configured extensions are available to other nodes and move between environments when configuration is exported and imported.

Extension Id

Identifier for a unique extension entry.

Extension Type

Type of extension, either written in Java or BeanShell. Java extensions are compiled to bytecode on first use and may be compiled to native code by the Just-In-Time (JIT) compiler, giving them the best performance. BeanShell extensions are parsed on first use and interpreted at runtime, but they are easier to write because of loose typing and short-cuts with syntax.

Table 17. Options for Extension Type

Java

Indicates that Java code is provided in the extension text.

BSH

Indicates that BeanShell code is provided in the extension text. Built-in variables are available for engine, sqlTemplate, and log.

Interface Name

The full class name for the interface implemented by the extension, including the package name. Only needed for extension type of BSH.

Node Group Id

The node group where this extension will be active and run.

Enabled

Whether or not the extension should be run.

Extension Order

The order to register extensions when multiple extensions for the same interface exist.

Extension Text

The code for the extension that will be compiled or interpreted at runtime.

Example 23. BSH extension that adds a new transform for masking characters

Add a new transform type called "mask" that replaces all characters in a string with an asterisk except the last number of characters specified by the user in the expression. This BeanShell extension uses the ISingleValueColumnTransform interface and applies only to the "corp" node group.

insert into sym_extension
   (extension_id, extension_type, interface_name, node_group_id, enabled,
    extension_order, extension_text)
values
   ('mask', 'bsh',
    'org.jumpmind.symmetric.io.data.transform.ISingleValueColumnTransform',
    'corp', 1, 1, '
    import org.apache.commons.lang.StringUtils;

    isExtractColumnTransform() {
        return true;
    }

    isLoadColumnTransform() {
        return true;
    }

    transform(platform, context, column, data, sourceValues, newValue, oldValue) {
        if (StringUtils.isNotBlank(newValue)) {
            String expression = column.getTransformExpression();
            if (StringUtils.isNotBlank(expression)) {
                count = newValue.length() - Integer.parseInt(expression.trim());
                return StringUtils.repeat("*", count) + newValue.substring(count);
            }
        }
        return newValue;
    }
   ');
Example 24. Java IReloadListener extension that disables foreign keys before a load and enables them after the load.
insert into SYM_EXTENSION (EXTENSION_ID, EXTENSION_TYPE, INTERFACE_NAME, NODE_GROUP_ID, ENABLED, EXTENSION_ORDER, EXTENSION_TEXT, CREATE_TIME, LAST_UPDATE_BY, LAST_UPDATE_TIME) values ('disable ref integrity','java','org.jumpmind.symmetric.load.IReloadListener','ALL',1,1,'import org.jumpmind.db.sql.ISqlTransaction;
import org.jumpmind.symmetric.ISymmetricEngine;
import org.jumpmind.symmetric.ext.ISymmetricEngineAware;
import org.jumpmind.symmetric.load.IReloadListener;
import org.jumpmind.symmetric.model.Node;

public class MyReloadListener implements IReloadListener, ISymmetricEngineAware {

    ISymmetricEngine engine;

    @Override
    public void setSymmetricEngine(ISymmetricEngine engine) {
        this.engine = engine;
    }

    @Override
    public void beforeReload(ISqlTransaction transaction, Node node, long loadId) {
        engine.getDataService().insertSqlEvent(transaction, node, "SET REFERENTIAL_INTEGRITY FALSE", true, loadId, "initial load");
    }

    @Override
    public void afterReload(ISqlTransaction transaction, Node node, long loadId) {
        engine.getDataService().insertSqlEvent(transaction, node, "SET REFERENTIAL_INTEGRITY TRUE", true, loadId, "initial load");
    }

}
',current_timestamp,'some user',current_timestamp);
Extensions Tips and Tricks
For BeanShell, implement only the methods needed from an interface, then write a special method of "invoke(method, args) {}" that will be called for any unimplemented methods.
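As a sketch of this tip, the insert below registers a BeanShell load filter against the IDatabaseWriterFilter interface mentioned in the Load Filters section, implementing only beforeWrite and letting invoke(method, args) absorb calls to every other interface method. The package name, the beforeWrite signature, and the example behavior are assumptions and should be verified against the API documentation for your version.

insert into sym_extension
   (extension_id, extension_type, interface_name, node_group_id, enabled,
    extension_order, extension_text)
values
   ('logWrites', 'bsh',
    'org.jumpmind.symmetric.io.data.writer.IDatabaseWriterFilter',
    'corp', 1, 1, '
    // Only the method we care about is implemented explicitly.
    beforeWrite(context, table, data) {
        log.info("About to write a row to " + table.getName());
        return true;
    }

    // Called for any interface method that is not implemented above.
    invoke(method, args) {
    }
   ');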

4.14. Parameters

Parameters can be used to help tune and configure a SymmetricDS installation. Parameters can be set for an individual node or for all nodes in your network.

See Parameter List for a complete list of parameters.
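For example, a parameter can be overridden for all nodes in a group, or pinned to a single node by external id, by inserting into the PARAMETER table. The sketch below assumes the columns external_id, node_group_id, param_key, and param_value, with 'ALL' used to match every node or group.

-- Override a purge setting for every node in the corp group
insert into SYM_PARAMETER (external_id, node_group_id, param_key, param_value,
        create_time, last_update_by, last_update_time)
values ('ALL', 'corp', 'purge.retention.minutes', '7200',
        current_timestamp, 'Documentation', current_timestamp);

-- Override a setting for one specific node only
insert into SYM_PARAMETER (external_id, node_group_id, param_key, param_value,
        create_time, last_update_by, last_update_time)
values ('store-001', 'store', 'pull.thread.per.server.count', '4',
        current_timestamp, 'Documentation', current_timestamp);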

4.15. Mail Server

A mail server can be configured for sending email notifications.

Target Nodes

The node group ID that will use this configuration.

Hostname

The hostname or IP address of the mail server to contact for sending mail.

Transport

The transport mechanism is either SMTP (Simple Mail Transfer Protocol) or SMTPS (encrypted with SSL).

Port

The default port for SMTP is 25, while the default port for SMTPS is 465.

Use StartTLS

After connecting over SMTP, the TLS protocol is used to encrypt content.

Use Authentication

The mail server requires a login and password before email can be sent.

User

The user login to use for authentication.

Password

The login password to use for authentication.

4.16. Monitors

A monitor watches some part of the system for a problem, checking to see if the monitored value exceeds a threshold. (To be notified immediately of new monitor events, configure a notification.)

Monitor ID

The monitor ID is a unique name to refer to the monitor.

Node Group ID

The node group that will run this monitor. Use "ALL" to match all groups.

External ID

The external ID of nodes that will run this monitor. Use "ALL" to match all nodes.

Monitor Type

The monitor type is one of several built-in or custom types that run a specific check and return a numeric value that can be compared to a threshold value.

Type Description

cpu

Percentage from 0 to 100 of CPU usage for the server process.

disk

Percentage from 0 to 100 of disk usage (tmp folder staging area) available to the server process.

memory

Percentage from 0 to 100 of memory usage (tenured heap pool) available to the server process.

batchError

Number of incoming and outgoing batches in error.

batchUnsent

Number of outgoing batches waiting to be sent.

dataUnrouted

Number of change capture rows that are waiting to be batched and sent.

dataGaps

Number of active data gaps that are being checked during routing for data to commit.

nodesOffline

The number of nodes that are offline based on the last heartbeat time. The console.report.as.offline.minutes parameter controls how many minutes before a node is considered offline.

Threshold

When this threshold value is reached or exceeded, an event is recorded.

Run Period

The time in seconds of how often to run this monitor. The monitor job runs on a period also, so the monitor can only run as often as the monitor job.

Run Count

The number of times to run the monitor before calculating an average value to compare against the threshold.

Severity Level

The importance of this monitor event when it exceeds the threshold.

Enabled

Whether or not this monitor is enabled to run.
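Monitors can also be configured by inserting into the MONITOR table. The sketch below sets up a batch error check that runs every five minutes on all nodes; the column names are assumed to mirror the fields above (monitor_id, node_group_id, external_id, type, threshold, run_period, run_count, severity_level, enabled), and the numeric severity level shown is only an example, so verify both against the schema for your version.

insert into SYM_MONITOR (monitor_id, node_group_id, external_id, type,
        threshold, run_period, run_count, severity_level, enabled,
        create_time, last_update_by, last_update_time)
values ('watch-batch-errors', 'ALL', 'ALL', 'batchError',
        1, 300, 1, 100, 1,
        current_timestamp, 'Documentation', current_timestamp);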

4.17. Notifications

A notification sends a message to the user when a monitor event records a system problem. First configure a monitor to watch the system and record events with a specific severity level. Then, configure a notification to match the severity level and write to the log or send an email.

Notification ID

The notification ID is a unique name to refer to the notification.

Node Group ID

The node group that will run this notification. Use "ALL" to match all groups.

External ID

The external ID of nodes that will run this notification. Use "ALL" to match all nodes.

Notification Type

The notification type is either a built-in or custom type that is given the list of monitor events to send.

Type Description

log

The monitor events are written to the log using the same severity level.

email

The monitor events are sent in an email to a list of recipients. Use the expression for the comma-separated list of email addresses.

Expression

Additional information to configure the notification type.

Severity Level

Find monitor events that occur at this severity level or above.

Enabled

Whether or not this notification is enabled to run.
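Notifications can be configured the same way. The sketch below writes an email notification that matches monitor events at or above a given severity; the column names are assumed to mirror the fields above (notification_id, node_group_id, external_id, severity_level, type, expression, enabled), and the recipient addresses and severity value are only examples.

insert into SYM_NOTIFICATION (notification_id, node_group_id, external_id,
        severity_level, type, expression, enabled,
        create_time, last_update_by, last_update_time)
values ('email-operations', 'ALL', 'ALL',
        100, 'email', 'dba@example.com,ops@example.com', 1,
        current_timestamp, 'Documentation', current_timestamp);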

5. Manage

This section describes how to manage and monitor SymmetricDS.

5.1. Nodes

5.1.1. Add Node

Multiple nodes can be hosted in a single SymmetricDS instance. SymmetricDS will start a node for each properties file it finds in the engines directory.

Additional nodes can be added to the same SymmetricDS instance that the master node is running in or they can be added to a different SymmetricDS instance. Either way, you create additional nodes by creating a Node Properties File with the registration.url set to the sync.url of the master node. When the SymmetricDS instance is restarted the new node will attempt to register with the master node.

For the new node to fully become a part of the synchronization network Registration must be opened.

5.1.2. Load Data

A load is the process of seeding tables at a target node with data from a source node. Instead of capturing changes, data is selected from the source table using a SQL statement and then streamed to the target node.

Initial loads, reverse initial loads, and table reloads can utilize the TABLE_RELOAD_REQUEST to request a load with a variety of options.

Initial Load (all tables)

Insert a row into TABLE_RELOAD_REQUEST containing the value 'ALL' for both the trigger_id and router_id.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, last_update_time)
     values ('store-001', 'corp-000', 'ALL', 'ALL', current_timestamp, current_timestamp);
Partial Load

Insert a row into TABLE_RELOAD_REQUEST for each trigger router combination to load.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, last_update_time)
     values ('store-001', 'corp-000', 'item_selling_price', 'corp_2_store', current_timestamp, current_timestamp);

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, last_update_time)
     values ('store-001', 'corp-000', 'item', 'corp_2_store', current_timestamp, current_timestamp);
Reverse Initial Load (all tables)

Insert a row into TABLE_RELOAD_REQUEST with the proper source and target nodes for the direction of the load.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, last_update_time)
     values ('corp-000', 'store-001', 'ALL', 'ALL', current_timestamp, current_timestamp);
Load data and create target tables

Insert a row into TABLE_RELOAD_REQUEST and set the create_table to 1 to send a table creation prior to the load running.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, create_table, last_update_time)
     values ('corp-000', 'store-001', 'ALL', 'ALL', current_timestamp, 1, current_timestamp);
Load data and delete from target tables

Insert a row into TABLE_RELOAD_REQUEST and set the delete_first to 1 to delete all data in the target table prior to the load running.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, delete_first, last_update_time)
     values ('corp-000', 'store-001', 'ALL', 'ALL', current_timestamp, 1, current_timestamp);
Load data for a specific table with partial data

Insert a row into TABLE_RELOAD_REQUEST and set the reload_select to the where clause to run while extracting data. There are 3 variables available for replacement.

  • $(groupId)

  • $(nodeId)

  • $(externalId)

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, reload_select, last_update_time)
     values ('store-001', 'corp-000', 'item_selling_price', 'corp_2_store', current_timestamp, 'store_id=$(externalId)', current_timestamp);
Load table with custom SQL run before the load executes.

Insert a row into TABLE_RELOAD_REQUEST and set the before_custom_sql to run before the load runs. The %s variable is available as replacement for the table name.

insert into SYM_TABLE_RELOAD_REQUEST (target_node_id, source_node_id, trigger_id, router_id, create_time, before_custom_sql, last_update_time)
     values ('store-001', 'corp-000', 'ALL', 'ALL', current_timestamp, 'truncate table %s', current_timestamp);

5.1.3. Control

Stopping a Node

Installed nodes are started automatically when the SymmetricDS server is started. An individual node instance can be stopped while other nodes continue to run.

From the command line, you can use JMX to stop a node. The following is an example; replace <engine name> with the name of the engine as found in the Node Properties File.

bin/jmx --bean org.jumpmind.symmetric.<engine name>:name=Node --method stop

Uninstalling a Node

Uninstalling a node will remove all SymmetricDS database artifacts and delete the engine’s property file.

This cannot be undone, so uninstall with caution.

From the command line you can use the symadmin utility to uninstall a node.

bin/symadmin --engine <engine name> uninstall

5.1.4. Registration

In order for a node to synchronize with other nodes it must be registered. When a node is registered, it downloads its SymmetricDS configuration as well as references to the nodes that it should sync with.

A node is considered unregistered if it does not have a NODE_IDENTITY row. When a node is unregistered, it will use the registration.url defined in the Node Properties File to request registration. The registration.url of the new node is the sync.url of the node that is being registered with.

Before a node is allowed to register, it must have an open registration. If there is no open registration, then a REGISTRATION_REQUEST is recorded.

You can open registration from the command line with the following command:

bin/symadmin --engine <engine name> open-registration <node group> <external id>

The <node group> and <external id> should match the group.id and external.id in the registering node’s Node Properties File.

Node registration is stored in the NODE and NODE_SECURITY tables. Nodes are only allowed to register if rows exist for the registering node and the registration_enabled flag is set to 1.

If the auto.registration SymmetricDS parameter is set to true, then when a node attempts to register, the node will automatically be registered.

SymmetricDS allows you to have multiple nodes with the same external_id. In order to enable this you must set external.id.is.unique.enabled to false.

5.1.5. Initial Loads

Loading data for 3.8 and above has been modified, see Load Data.

When a load is requested it will either set the initial_load_enabled or the reverse_initial_load_enabled flag on the appropriate NODE_SECURITY row.

When the Route Job runs next, it will create batches that represent the initial load. Batches will be created on the reload channel for each table that is defined by Table Triggers and linked by Table Routing in the direction that the load was requested. The default reload channel is the "reload" channel. At the same time reload batches are inserted, all previously pending batches for the node are marked as successfully sent.

Each table defined by Table Triggers and linked by Table Routing is represented by a reload OUTGOING_BATCH. The batches are inserted in the defined order. If the initial_load_order is the same, then SymmetricDS tries to automatically determine the order in which the tables need to be loaded based on foreign key dependencies. A negative value for initial_load_order in Table Routing will result in no reload batch being inserted.

If there are cyclical constraints, then foreign keys might need to be turned off or the initial load will need to be manually configured based on knowledge of how the data is structured.

A SQL statement is run against each table to get the data load that will be streamed to the target node. The selected data is filtered through the configured router for the table being loaded. If the data set is going to be large, then SQL criteria can optionally be provided in initial_load_select to pare down the data that is selected out of the database.

Note that if the parent node that a node is registering with is not a registration server node (as can happen when using REGISTRATION_REDIRECT or when using multiple tiers) the parent node’s NODE_SECURITY entry must exist at the parent node and have a non-null value for column initial_load_time. Nodes can’t be registered to a non-registration-server node without this value being set one way or another (i.e., manually, or as a result of an initial load occurring at the parent node).
Partial Initial Loads

An efficient way to select a subset of data from a table for an initial load is to provide an initial_load_select clause in Table Routing. This clause, if present, is applied as a where clause to the SQL used to select the data to be loaded. The clause may use "t" as an alias for the table being loaded, if needed. The $(externalId) token can be used for subsetting the data in the where clause.

In cases where routing is done using a feature like Subselect Router, an initial_load_select clause matching the subselect’s criteria would be a more efficient approach. Some routers will check to see if the initial_load_select clause is provided and, if so, they will not execute, assuming that the more optimal path is to use the initial_load_select statement.

One example of the use of an initial load select would be if you wished to only load data created more recently than the start of year 2011. Say, for example, the column created_time contains the creation date. Your initial_load_select would read created_time > ts {'2011-01-01 00:00:00.0000'} (using whatever timestamp format works for your database). This then gets applied as a where clause when selecting data from the table.
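Using the store/corp sample configuration from earlier examples, the criteria could be put in place with an update such as the following sketch; the trigger and router ids are only examples, and the timestamp literal should use whatever syntax your database accepts.

update sym_trigger_router
   set initial_load_select = 'created_time > ''2011-01-01 00:00:00''',
       last_update_time = current_timestamp
 where trigger_id = 'item_selling_price'
   and router_id = 'corp_2_store';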

When providing an initial_load_select be sure to test out the criteria against production data in a query browser. Do an explain plan to make sure you are properly using indexes.
Initial Load Extract In Background

By default, initial loads for a table are broken into multiple batches. SymmetricDS will pre-extract initial load batches versus having them extracted when the batch is pulled or pushed. There are two ways to tell SymmetricDS the number of batches to create for a given table. The first is to specify a positive integer in the initial_load_batch_count column on Table Routing. This number will dictate the number of batches created for the initial load of the given table. The second way is to specify 0 for initial_load_batch_count on Table Routing and specify a max_batch_size on the reload channel for Channels. When 0 is specified for initial_load_batch_count, SymmetricDS will execute a count(*) query on the table during the extract process and pre-create N batches based on the total number of records found in the table divided by the max_batch_size on the reload channel.

By setting initial.load.use.extract.job.enabled to false, all data for a given table will be loaded in a single batch, regardless of the max batch size parameter on the reload channel. That is, for a table with one million rows, all rows for that table will be extracted and sent to the destination node in a single batch. For large tables, this can result in a batch that can take a long time to extract and load.

Reverse Initial Loads

Normal initial loads load data from the parent node to a client node. Occasionally, there may be need to do a one-time initial load of data in the "reverse" direction. A reverse initial load is started by setting the reverse_initial_load_enabled flag on NODE_SECURITY.

Other Initial Load Settings
Initial Load Parameters

There are several parameters that can be used to modify the behavior of an initial load.

auto.reload

A load is queued up to a node automatically when it registers.

auto.reload.reverse

A reverse initial load is queued up for a node automatically when it registers.

initial.load.delete.first / initial.load.delete.first.sql

By default, an initial load will not delete existing rows from a target table before loading the data. If a delete is desired, the parameter initial.load.delete.first can be set to true. If true, the command found in initial.load.delete.first.sql will be run on each table prior to loading the data. The default value for initial.load.delete.first.sql is

delete from %s

Note that individual reload batches are created that represent the deletes in the reverse order that load batches are created. All delete batches are inserted first. The initial.load.delete.first.sql can be overridden at the TRIGGER_ROUTER level by entering an initial_load_delete_stmt.

initial.load.create.first

By default, an initial load will not create the table on the target if it doesn’t already exist. If the desired behavior is to create the table on the target if it is not present, set the parameter initial.load.create.first to true. SymmetricDS will attempt to create the table and indexes on the target database before doing the initial load. Note that individual create batches are created to represent each of the table creates.

Sometimes when creating tables across different database platforms default values do not translate. You can turn off the use of default values during the table create by setting create.table.without.defaults.

5.1.6. Send

Events other than data changes can be sent to nodes in the synchronization network. The following can also be sent to nodes:

SQL Scripts

SQL can be sent to be executed on a target node

BSH Scripts

Beanshell scripts can be sent to be executed on a target node

Table Schema

The table schema at the source node can be replicated to the target node individually

Table Data

There may be times where you find you need to re-send or re-synchronize data when the change itself was not captured. This could be needed, for example, if the data changes occurred prior to SymmetricDS placing triggers on the data tables themselves, or if the data at the destination was accidentally deleted or for some other reason.

Be careful when re-sending data. Be sure you are only sending the rows you intend to send and, more importantly, be sure to re-send the data in a way that won’t cause foreign key constraint issues at the destination. In other words, if more than one table is involved, be sure to send any tables which are referred to by other tables by foreign keys first. Otherwise, the channel’s synchronization will block because SymmetricDS is unable to insert or update the row because the foreign key relationship refers to a non-existent row in the destination!

You can manually insert "reload" events into the DATA table that represent the table/s to reload. These reload events are created in the source database.

To create a reload event, you create a DATA row, using the following values. Any column not specified is not required.

Column

Value

data_id

null

table_name

name of table to be sent

event_type

'R', for reload

row_data

a "where" clause (minus the word 'where') which defines the subset of rows from the table to be sent. To send all rows, one can use 1=1 for this value.

trigger_hist_id

use the id of the most recent entry (i.e., max(trigger_hist_id) ) in TRIGGER_HIST for the trigger-router combination for your table and router.

channel_id

the channel in which the table is routed

create_time

current_timestamp

node_list

A comma separated list of node_ids to route to

The following is an example insert statement:

 insert into sym_data (node_list, table_name, event_type, row_data,
                       trigger_hist_id, channel_id, create_time) (
    select '00001', t.source_table_name, 'R', 'tran_id=''MISSING-ID''',
            h.trigger_hist_id, t.channel_id, current_timestamp
        from sym_trigger t inner join sym_trigger_router tr on
            t.trigger_id=tr.trigger_id inner join sym_trigger_hist h on
            h.trigger_hist_id=(select max(trigger_hist_id) from sym_trigger_hist
                where trigger_id=t.trigger_id)
    where channel_id='sale_transaction' and
        tr.router_id like 'store_corp_identity' and
        (t.source_table_name like 'sale_%')
    order by tr.initial_load_order asc);

This insert statement generates N rows, one for each configured table that starts with sale_. It uses the most recent trigger history id for the corresponding table. It takes advantage of the initial load order for each trigger-router to create the rows in the correct order (the order in which the tables would have been initial loaded).

5.2. Jobs

Most work done by SymmetricDS is initiated by jobs. Jobs are tasks that are started and scheduled by a job manager. Jobs are enabled by the start.<name>.job parameter.

Most jobs are enabled by default. The frequency at which a job runs is controlled by one of two parameters:

  • job.<name>.period.time.ms

  • job.<name>.cron

If a valid cron property exists in the configuration, then it will be used to schedule the job. Otherwise, the job manager will attempt to use the period.time.ms property.

The frequency of jobs can be configured in either the Node Properties File or in the PARAMETER table. When managed in PARAMETER table the frequency properties can be changed in the master node and when the updated settings sync to the other nodes in the system the job manager will restart the jobs at the new frequency settings.

SymmetricDS utilizes Spring’s CRON support, which includes seconds as the first parameter. This differs from the typical Unix-based implementation, where the first parameter is usually minutes. For example, */15 * * * * * means every 15 seconds, not every 15 minutes. See Spring’s documentation for more details.
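For example, a job can be slowed down or moved to a cron schedule by setting these parameters in the PARAMETER table so the change syncs to other nodes. The sketch below assumes the job names 'pull' and 'push' fit the job.<name> pattern described above.

-- Run the pull job every five minutes using the period parameter
insert into SYM_PARAMETER (external_id, node_group_id, param_key, param_value,
        create_time, last_update_by, last_update_time)
values ('ALL', 'store', 'job.pull.period.time.ms', '300000',
        current_timestamp, 'Documentation', current_timestamp);

-- Run the push job on a Spring cron schedule (seconds field first): every five minutes
insert into SYM_PARAMETER (external_id, node_group_id, param_key, param_value,
        create_time, last_update_by, last_update_time)
values ('ALL', 'store', 'job.push.cron', '0 0/5 * * * *',
        current_timestamp, 'Documentation', current_timestamp);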

Some jobs cannot be run in parallel against a single node. These jobs use the LOCK table to get an exclusive semaphore to run the job. This table is only used if the cluster.lock.enabled is set to true.

5.2.1. Route Job

The Route Job is responsible for creating outgoing batches of captured data that are targeted at specific nodes.

The job processes Channels, one at a time, reading up to Max Data To Route data rows which have not been routed.

The data is assigned to outgoing batches based on the Batch Algorithm defined for the channel. Note that, for the default and transactional algorithms, Max Data To Route rows may be exceeded depending on the transaction boundaries.

An outgoing batch is initially created with a status of "RT". Data is assigned to the batch by inserting into DATA_EVENT. When a batch is complete, the batch is committed and the status is changed to "NE".

The route job will respect the Max Batch Size as configured in Channels. If the max batch size is reached before the end of a captured database transaction and the batch algorithm is set to something other than nontransactional the batch may exceed the specified max size.

The route job delegates to a router to decide which nodes need to receive the data. The correct router is looked up by referencing the captured trigger_hist_id in the DATA table and using the Table Routing configuration.

After outgoing batches have been created by the Route Job, they need to be transported to the target node.

Data Gaps

The DATA to route is selected based on the values in the DATA_GAP table. For efficiency, DATA_GAP tracks gaps in the data ids in DATA table that have not yet been processed.

A gap in DATA can occur during routing because concurrently running transactions have not yet committed. Gaps can also be caused by rolled back transactions.

Most gaps are only temporary and fill in at some point after routing; they are picked up by the next routing run.

This table completely defines the entire range of data that can be routed at any point in time. For a brand new instance of SymmetricDS, this table is empty and SymmetricDS creates a gap starting from data id of zero and ending with a very large number (defined by routing.largest.gap.size).

At the start of a route job, the list of valid gaps (gaps with status of 'GP') is collected, and each gap is evaluated in turn. If a gap is sufficiently old (as defined by routing.stale.dataid.gap.time.ms), SymmetricDS assumes that a transaction has been rolled back and deletes the gap.

If the gap is not stale, then DATA_EVENT is searched for data ids present in the gap. If one or more data ids is found in DATA_EVENT, then the current gap is deleted, and new gap(s) are created to represent the data ids still missing in the gap’s range. This process is done for all gaps. If the very last gap contained data, a new gap starting from the highest data id and ending at (highest data id + routing.largest.gap.size) is then created.

This results in an updated gap list that can be used to select DATA for routing.

5.2.2. Push Job

The Push Job is responsible for assigning nodes that need to be pushed to individual threads. See Push Threads for more details.

The job sends Outgoing Batches to the target node using an HTTP PUT. By default an HTTP PUT buffers data at the client. If large batches are going to be sent using the push job, then consider turning on http.push.stream.output.enabled.

The push job is considered to be slightly more efficient than the Pull Job because it only needs to make a network connection if there are batches available to send.

In order to be more efficient, the push job sends an HTTP HEAD to request a reservation at the target node. If the target node responds and accepts the request, then the job issues the HTTP PUT with the data payload in Data Format.

5.2.3. Pull Job

The Pull Job is responsible for assigning nodes that need to be pulled to individual threads. See Pull Threads for more details.

The job expects to receive Incoming Batches from a source node using an HTTP GET.

5.2.4. Purge Outgoing Job

The Purge Outgoing Job is responsible for purging outgoing data that has successfully been loaded at the target and is older than purge.retention.minutes.

This job purges the tables that capture and batch outgoing data, including DATA, DATA_EVENT, and OUTGOING_BATCH.

5.2.5. Purge Incoming Job

The Purge Incoming Job is responsible for purging the INCOMING_BATCH table.

5.2.6. Statistics Job

The Statistics Job flushes captured statistics to the statistics tables. It also purges those tables based on the purge.stats.retention.minutes parameter.

5.2.7. Sync Triggers Job

The Sync Triggers Job runs when a node is started and on the prescribed job schedule. The job checks for missing SymmetricDS database triggers and creates them. It also rebuilds SymmetricDS database triggers whose configuration has changed or whose underlying table structure has changed.

5.2.8. Heartbeat Job

The Heartbeat Job updates the node’s own NODE_HOST row with a new heartbeat_time, which is then synchronized to its created_at_node_id node to indicate that the node is online and healthy.

5.2.9. Watchdog Job

The Watchdog Job looks for nodes that have been offline for offline.node.detection.period.minutes and disables them.

5.2.10. Stage Management Job

The Stage Management Job purges the staging area according to the stream.to.file.ttl.ms parameter.

5.2.11. Refresh Cache Job

The Refresh Cache Job checks the last_update_time on each cached configuration resource and determines if it needs to refresh the cached items. This job is mostly relevant when a cluster is deployed.

5.2.12. File Sync Tracker Job

The File System Tracker job is responsible for monitoring and recording the events of files being created, modified, or deleted. It records the current state of files to the FILE_SNAPSHOT table.

See File Synchronization for more details.

5.2.13. File Sync Pull Job

The File Sync Pull Job is responsible for assigning nodes that need to be pulled to individual threads.

See File Synchronization and Pull Threads for more details.

5.2.14. File Sync Push Job

The File Sync Push Job is responsible for assigning nodes that need to be pushed to individual threads.

See File Synchronization and Push Threads for more details.

5.2.15. Initial Load Extract Job

The Initial Load Extract Job processes EXTRACT_REQUESTs. See Initial Load Extract In Background for more details.

5.3. Installed Triggers

SymmetricDS installs database triggers that capture changes and record them in the DATA table. A record of the triggers that were installed and what columns are being captured is stored in the TRIGGER_HIST table. When data is captured in DATA it references the TRIGGER_HIST record that represented the trigger at the time data was captured.

This is necessary because if a trigger is rebuilt after columns are added or removed and data that was captured by the old trigger has not yet been delivered, we need a record of what columns were in play at the time the data had been captured.

The TRIGGER_HIST table records the reason a trigger was rebuilt. The following reasons are possible:

N

New trigger that has not been created before

S

Schema changes in the table were detected

C

Configuration changes in Trigger

T

Trigger was missing

A configuration entry in Trigger without any history in Trigger Hist results in a new trigger being created (N). The Trigger Hist stores a hash of the underlying table, so any alteration to the table causes the trigger to be rebuilt (S). When the last_update_time is changed on the Trigger entry, the configuration change causes the trigger to be rebuilt (C). If an entry in Trigger Hist is missing the corresponding database trigger, the trigger is created (T).

5.4. Outgoing Batches

Outgoing batches are delivered to the target node when the source node pushes or when the target node pulls.

A single push or pull connection is called a synchronization.

For one synchronization, each enabled channel will be processed. Channels are processed in the order defined by the Processing Order setting on the channel with two exceptions:

  • If there are reload channels available to be sent and the reload channels are not in error, then only reload channels will be sent

  • If a channel is in error it will be moved to the bottom of the list

When outgoing batches are selected for a node and a channel, the maximum number of batches that are extracted per synchronization is controlled by the Max Batch To Send setting on the channel.

There is also a setting that controls the max number of bytes to send in one synchronization. If SymmetricDS has extracted more than the number of bytes configured by the transport.max.bytes.to.sync parameter, then it will finish extracting the current batch and then finish synchronization so the client has a chance to process and acknowledge the "big" batch. This may happen before the configured Max Batch To Send has been reached.

When extracting a batch, data is first extracted to the Staging Area and then sent across the network from the Staging Area. The staging area is used to minimize the amount of time a database connection is being used when streaming over slower networks. The use of the staging area can be turned off by setting the stream.to.file.enabled parameter to false.

5.4.1. Extract Frequency By Channel

The pull and push frequency cannot be adjusted by channel. If you want to adjust the frequency that outgoing batches for a specific channel are sent, you have two options:

  1. Batches are extracted by channel at an interval controlled by the extract_period_millis in the Channels settings. The last_extract_time is always recorded, by channel, on the NODE_CHANNEL_CTL table for the host node’s id. When the Pull and Push Job run, if the extract period has not passed according to the last extract time, then the channel will be skipped for this run. If the extract_period_millis is set to zero, data extraction will happen every time the jobs run.

  2. SymmetricDS provides the ability to configure windows of time when synchronization is allowed. This is done using the NODE_GROUP_CHANNEL_WND table. A list of allowed time windows can be specified for a node group and a channel. If one or more windows exist, then data will only be extracted and transported if the time of day falls within the window of time specified. The configured times are always for the target node’s local time. If the start_time is greater than the end_time, then the window crosses over to the next day.
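As a sketch of the second option, the insert below defines a window on the sale_transaction channel for the store group so that batches on that channel are only sent between 8 PM and 6 AM, target-node local time (a start_time greater than the end_time makes the window span midnight). The node_group_id, channel_id, and enabled column names, and the time literal format, are assumptions to verify against your schema.

insert into SYM_NODE_GROUP_CHANNEL_WND
        (node_group_id, channel_id, start_time, end_time, enabled)
values ('store', 'sale_transaction', '20:00:00', '06:00:00', 1);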

5.4.2. Outgoing Sync Status

The status of outgoing synchronization can be queried at the source database.

The following query will show outgoing synchronization failures by node:

select count(*), node_id from sym_outgoing_batch
  where error_flag=1 group by node_id;

The following query will show the number of data rows that have not been delivered to target nodes:

select sum(data_event_count), node_id from sym_outgoing_batch
  where status != 'OK' group by node_id;

The following queries summed together give an approximation of the number of rows that have not been routed:

select sum(end_id-start_id) from sym_data_gap
  where start_id < (select max(start_id) from sym_data_gap);

select count(*) from sym_data
  where data_id >= (select max(start_id) from sym_data_gap);

5.4.3. Outgoing Batch Errors

By design, whenever SymmetricDS encounters an issue with synchronization, the batch containing the error is marked as being in an error state, and all subsequent batches on the same channel to the same node are not synchronized until the batch error is resolved.

SymmetricDS will retry the batch in error until the situation creating the error is resolved (or the data for the batch itself is changed). If the error is caused by network or database failures, then the error might eventually resolve itself when the network or database failures are resolved.

Analyzing and resolving issues can take place on the outgoing or incoming side. The techniques for analysis are slightly different in the two cases, however, due to the fact that the node with outgoing batch data also has the data and data events associated with the batch in the database. On the incoming node, however, all that is available is the incoming batch header and data present in an incoming error table.

Analyzing the Issue

The first step in analyzing the cause of a failed batch is to locate information about the data in the batch.

To locate batches in error, run the following SQL query:

select * from sym_outgoing_batch where error_flag=1;

Several useful pieces of information are available from this query:

  • The batch number of the failed batch, available in column BATCH_ID.

  • The node to which the batch is being sent, available in column NODE_ID.

  • The channel to which the batch belongs, available in column CHANNEL_ID. All subsequent batches on this channel to this node will be held until the error condition is resolved.

  • The specific data id in the batch which is causing the failure, available in column FAILED_DATA_ID.

  • Any SQL message, SQL State, and SQL Codes being returned during the synchronization attempt, available in columns SQL_MESSAGE, SQL_STATE, and SQL_CODE, respectively.

Using the error_flag on the batch table, as shown above, is more reliable than using the status column. The status column can change from 'ER' to a different status temporarily as the batch is retried.
The query above will also show you any recent batches that were originally in error and were changed to be manually skipped. See the end of Outgoing Batches for more details.

To get a full picture of the batch, you can query for information representing the complete list of all data changes associated with the failed batch by joining DATA and DATA_EVENT, such as:

select * from sym_data where data_id in
   (select data_id from sym_data_event where batch_id='XXXXXX');

where XXXXXX is the batch id of the failing batch.

This query returns a wealth of information about each data change in a batch, including:

  • The table involved in each data change, available in column TABLE_NAME,

  • The event type (Update [U], Insert [I], or Delete [D]), available in column EVENT_TYPE,

  • A comma separated list of the new data and (optionally) the old data, available in columns ROW_DATA and OLD_DATA, respectively.

  • The primary key data, available in column PK_DATA

  • The channel id, trigger history information, transaction id if available, and other information.

More importantly, if you narrow your query to just the failed data id you can determine the exact data change that is causing the failure:

select * from sym_data where data_id in
    (select failed_data_id from sym_outgoing_batch where batch_id='XXXXX'
    and node_id='YYYYY');

where XXXXXX is the batch id and YYYYY is the node id of the batch that is failing.

The queries above usually yield enough information to be able to determine why a particular batch is failing.

Common reasons a batch might fail include:

  • The schema at the destination has a column that is not nullable yet the source has the column defined as nullable and a data change was sent with the column as null.

  • A foreign key constraint at the destination is preventing an insertion or update, which could be caused from data being deleted at the destination or the foreign key constraint is not in place at the source.

  • The data size of a column on the destination is smaller than the data size in the source, and data that is too large for the destination has been synced.

Resolving the Issue

Once you have decided upon the cause of the issue, you’ll have to decide the best course of action to fix the issue. If, for example, the problem is due to a database schema mismatch, one possible solution would be to alter the destination database in such a way that the SQL error no longer occurs. Whatever approach you take to remedy the issue, once you have made the change, on the next push or pull SymmetricDS will retry the batch and the channel’s data will start flowing again.

If you have instead decided that the batch itself is wrong, or does not need to be synchronized, or you wish to remove a particular data change from a batch, you do have the option of changing the data associated with the batch directly.

Be cautious when using the following two approaches to resolve synchronization issues. By far, the best approach to solving a synchronization error is to resolve what is truly causing the error at the destination database. Skipping a batch or removing a data id as discussed below should be your solution of last resort, since doing so results in differences between the source and destination databases.

Now that you’ve read the warning, if you still want to change the batch data itself, you do have several options, including:

  • Causing SymmetricDS to skip the batch completely. This is accomplished by setting the batch’s status to 'OK', as in:

    update sym_outgoing_batch set status='OK' where batch_id='XXXXXX'

    where XXXXXX is the failing batch. On the next pull or push, SymmetricDS will skip this batch since it now thinks the batch has already been synchronized. Note that you can still distinguish between successful batches and ones that you’ve artificially marked as 'OK', since the error_flag column on the failed batch will still be set to '1' (in error).

  • Removing the failing data id from the batch by deleting the corresponding row in DATA_EVENT. Eliminating the data id from the list of data ids in the batch will cause future synchronization attempts of the batch to no longer include that particular data change as part of the batch. For example:

    delete from sym_data_event where batch_id='XXXXXX' and data_id='YYYYYY'
    where XXXXXX is the failing batch and YYYYYY is the data id to no longer be included in the batch.

    After modifying the batch you will have to clear the Staging Area manually or wait for the staged version of the batch to timeout and clear itself.

5.5. Incoming Batches

Incoming batches are delivered to the target node when the source node pushes or when the target node pulls.

Incoming batches are written to the Staging Area first and then loaded. The use of the staging area can be turned off by setting the stream.to.file.enabled parameter to false.

5.5.1. Incoming Sync Status

The status of incoming synchronization can be queried at the target database.

The following query will show incoming synchronization failures by node:

select count(*), node_id from sym_incoming_batch
  where error_flag=1 group by node_id;

Client nodes update their heartbeat_time in the NODE_HOST table, and that information is synchronized to the server. You can therefore monitor the NODE_HOST table at the server to find client nodes that are offline. Note that at times there could be more than one NODE_HOST row per node_id. This could be the case if the nodes are clustered or the host_name changes.

The following query will give you nodes that have not synchronized in the last 24 hours. Note that the SQL might vary slightly for some databases as some of the supported databases do not support current_timestamp.

select node_id, host_name from sym_node_host
  where heartbeat_time < current_timestamp-1;

5.5.2. Incoming Batch Errors

When a batch fails to load it is marked with an Error status.

Analyzing the Issue

Analyzing an incoming batch is slightly different than analyzing an outgoing batch.

For incoming batches, you will rely on two tables, INCOMING_BATCH and INCOMING_ERROR.

The first step in analyzing the cause of an incoming failed batch is to locate information about the batch, starting with INCOMING_BATCH. To locate batches in error, use:

select * from sym_incoming_batch where error_flag=1;

Several useful pieces of information are available from this query:

  • The batch number of the failed batch, available in column BATCH_ID. Note that this is the batch number of the outgoing batch on the outgoing node.

  • The node the batch is being sent from, available in column NODE_ID.

  • The channel to which the batch belongs, available in column CHANNEL_ID. All subsequent batches on this channel from this node will be held until the error condition is resolved.

  • The data_id that was being processed when the batch failed, available in column FAILED_DATA_ID.

  • Any SQL message, SQL State, and SQL Codes being returned during the synchronization attempt, available in columns SQL_MESSAGE, SQL_STATE, and SQL_CODE, respectively.

For incoming batches, we do not have data and data event entries in the database we can query. We do, however, have a table, INCOMING_ERROR, which provides some information about the batch.

select * from sym_incoming_error
where batch_id='XXXXXX' and node_id='YYYYY';

where XXXXXX is the batch id and YYYYY is the node id of the failing batch.

This query returns a wealth of information about each data change in a batch, including:

  • The table involved in each data change, available in column TARGET_TABLE_NAME,

  • The event type (Update [U], Insert [I], or Delete [D]), available in column EVENT_TYPE,

  • A comma separated list of the new data and (optionally) the old data, available in columns ROW_DATA and OLD_DATA, respectively,

  • The column names of the table, available in column COLUMN_NAMES,

  • The primary key column names of the table, available in column PK_COLUMN_NAMES,

Resolving the Issue

For batches in error, from the incoming side you’ll also have to decide the best course of action to fix the issue.

Incoming batch errors that are in conflict can be fixed by taking advantage of two columns in INCOMING_ERROR which are examined each time batches are processed. The first column, resolve_data, if filled in, will be used in place of row_data. The second column, resolve_ignore, if set, will cause this particular data item to be ignored and batch processing to continue. These are the same two columns used when a manual conflict resolution strategy is chosen, as discussed in Conflicts.

5.6. Staging Area

SymmetricDS creates temporary extraction and data load files with the CSV payload of a synchronization when the value of the stream.to.file.threshold.bytes SymmetricDS property has been reached. Before reaching the threshold, files are streamed to/from memory. The default threshold value is 0 bytes. This feature may be turned off by setting the stream.to.file.enabled property to false.

SymmetricDS creates these temporary files in the directory specified by the java.io.tmpdir Java System property.

The location of the temporary directory may be changed by setting the Java System property passed into the Java program at startup. For example,

-Djava.io.tmpdir=/home/.symmetricds/tmp

5.7. Pull Threads

Both the Pull Job and the File Sync Pull Job can be configured to pull multiple nodes in parallel. In order to take advantage of this, the pull.thread.per.server.count or file.pull.thread.per.server.count should be adjusted (from the default value of 1) to the number of concurrent pulls you want to occur per period on each SymmetricDS instance.

Pull activity is recorded in the NODE_COMMUNICATION table. This table is also used as a semaphore to lock pull activity across multiple servers in a cluster.

5.8. Push Threads

Both the Push Job and the File Sync Push Job can be configured to push multiple nodes in parallel. In order to take advantage of this, the push.thread.per.server.count or file.push.thread.per.server.count should be adjusted (from the default value of 1) to the number of concurrent pushes you want to occur per period on each SymmetricDS instance.

Push activity is recorded in the NODE_COMMUNICATION table. This table is also used as a semaphore to lock push activity across multiple servers in a cluster.

5.9. Monitors

When a Monitor is configured, it is run periodically to check the current value of a system metric and compare it to a threshold value. Different monitor types can check the CPU usage, disk usage, memory usage, batch errors, outstanding batches, unrouted data, and number of data gaps. Custom monitor types can be created using Extensions that use the IMonitorType interface. When the value returned from the check meets or exceeds the threshold value, a MONITOR_EVENT is recorded. The MONITOR_EVENT table is synchronized on the heartbeat channel, which allows a central server to see events from remote nodes, but this behavior can be disabled by setting the monitor.events.capture.enabled parameter to false.

To be immediately notified of a monitor event, use Notifications to match on the severity level. Different notification types can send a message by writing to the log or sending an email. Custom notification types can be created using Extensions that use the INotificationType interface. In order to send email, the Mail Server should be configured.

5.10. Logging

The standalone SymmetricDS installation uses Log4J for logging. The configuration file is conf/log4j.xml. The log4j.xml file has hints as to what logging can be enabled for useful, finer-grained logging.

There is a command line option to turn on preconfigured debugging levels. When the --debug option is used, conf/debug-log4j.xml is used instead of log4j.xml.

SymmetricDS proxies all of its logging through SLF4J. When deploying to an application server or if Log4J is not being leveraged, then the general rules for SLF4J logging apply.

6. Advanced Topics

This chapter focuses on a variety of topics, including deployment options, jobs, clustering, encryption, synchronization control, and configuration of SymmetricDS.

6.1. Advanced Synchronization

6.1.1. Disabling Synchronization

All data loading may be disabled by setting the dataloader.enable property to false. This has the effect of not allowing incoming synchronizations, while allowing outgoing synchronizations. All data extractions may be disabled by setting the dataextractor.enable property to false. These properties can be controlled by inserting into the master node’s PARAMETER table. These properties affect every channel with the exception of the 'config' channel.
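For example, to disable all data loading on every node, a row like the following could be inserted into the master node's parameter table (this follows the same sym_parameter pattern shown elsewhere in this guide):

INSERT INTO sym_parameter
(external_id, node_group_id, param_key, param_value, create_time, last_update_by, last_update_time) VALUES
('ALL', 'ALL', 'dataloader.enable', 'false', current_timestamp, 'userid', current_timestamp);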

6.1.2. Bi-Directional Synchronization

SymmetricDS allows tables to be synchronized bi-directionally. Note that an outgoing synchronization does not process changes during an incoming synchronization on the same node unless the trigger was created with the sync_on_incoming_batch flag set. If the sync_on_incoming_batch flag is set, then update loops are prevented by a feature that is available in most database dialects. More specifically, during an incoming synchronization the source node_id is put into a database session variable that is available to the database trigger. Data events are not generated if the target node_id on an outgoing synchronization is equal to the source node_id.
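As a sketch, the flag is simply a column on the trigger definition; assuming a trigger with a hypothetical id of 'item', it could be enabled with the following statement. Trigger changes are picked up the next time triggers are synchronized.

update sym_trigger
   set sync_on_incoming_batch = 1,
       last_update_time = current_timestamp
 where trigger_id = 'item';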

By default, only the columns that changed will be updated in the target system.

Conflict resolution strategies can be configured for specific links and/or sets of tables.

6.1.3. Multi-Tiered Synchronization

There may be scenarios where data needs to flow through multiple tiers of nodes that are organized in a tree-like network with each tier requiring a different subset of data. For example, you may have a system where the lowest tier may be a computer or device located in a store. Those devices may connect to a server located physically at that store. Then the store server may communicate with a corporate server for example. In this case, the three tiers would be device, store, and corporate. Each tier is typically represented by a node group. Each node in the tier would belong to the node group representing that tier.

A node can only pull and push data to other nodes that are represented in the node’s NODE table and in cases where that node’s sync_enabled column is set to 1. Because of this, a tree-like hierarchy of nodes can be created by having only a subset of nodes belonging to the same node group represented at the different branches of the tree.

If auto registration is turned off, then this setup must occur manually by opening registration for the desired nodes at the desired parent node and by configuring each node’s registration.url to be the parent node’s URL. The parent node is always tracked by the setting of the parent’s node_id in the created_at_node_id column of the new node. When a node registers and downloads its configuration it is always provided the configuration for nodes that might register with the node itself based on the Node Group Links defined in the parent node.

Registration Redirect

When deploying a multi-tiered system it may be advantageous to have only one registration server, even though the parent node of a registering node could be any of a number of nodes in the system. In SymmetricDS the parent node is always the node that a child registers with. The REGISTRATION_REDIRECT table allows a single node, usually the root server in the network, to redirect registering nodes to their true parents. It does so based on a mapping, stored in the table, from the external id (registrant_external_id) to the parent’s node id (registration_node_id).

For example, if it is desired to have a series of regional servers that workstations at retail stores get assigned to based on their external_id, the store number, then you might insert into REGISTRATION_REDIRECT the store number as the registrant_external_id and the node_id of the assigned region as the registration_node_id. When a workstation at the store registers, the root server sends an HTTP redirect to the sync_url of the node that matches the registration_node_id.
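For example, to redirect a store with a hypothetical external id of '00101' to a regional server whose node id is 'region01', a row like the following could be inserted (assuming the default sym_ table prefix):

insert into sym_registration_redirect
(registrant_external_id, registration_node_id)
values ('00101', 'region01');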

Please see Initial Loads for important details around initial loads and registration when using registration redirect.

6.2. Offline Synchronization

6.2.1. Setup an Offline Node

Configuring a node as offline will still allow changes to be captured and batched for replication. However, the push and/or pull jobs that are used to interact with this node will not use the standard http or https protocols to communicate with other nodes for changes. Instead, the local file system will be used for replication. It is up to the user to transport batch (*.csv) files to and from the node based on incoming or outgoing changes.

  • Turn on the offline push and pull jobs.

INSERT INTO sym_parameter
(external_id, node_group_id, param_key, param_value, create_time, last_update_by, last_update_time) VALUES
('ALL', 'ALL', 'start.offline.pull.job', 'true', current_timestamp, 'userid', current_timestamp);

INSERT INTO sym_parameter
(external_id, node_group_id, param_key, param_value, create_time, last_update_by, last_update_time) VALUES
('ALL', 'ALL', 'start.offline.push.job', 'true', current_timestamp, 'userid', current_timestamp);
  • Turn on the offline.node parameter for the node that should be offline.

INSERT INTO sym_parameter
(external_id, node_group_id, param_key, param_value, create_time, last_update_by, last_update_time) VALUES
('001', 'STORE', 'node.offline', 'true', current_timestamp, 'userid', current_timestamp);
  • Setting these parameters immediately affects the behavior of the push and pull jobs, so outgoing batches intended for the offline node are now written as files. Other nodes are unaffected and will continue to synchronize normally. All outstanding batches for this node are immediately written to files. As new changes occur going forward, any batches for this node will also be written as files. In this example, two batches of data were waiting to sync, so they are written to files.

  • At the offline store node, the parameter immediately affects the behavior of the push and pull jobs, so outgoing batches intended for CORP are now written as files. Other nodes are unaffected. All outstanding batches for CORP are immediately written to files. As new changes occur going forward, any batches for CORP will also be written as files. In this example, two batches of data are written.

offline sync node1 before
  • Finally, move the batch files to their respective incoming folder. After moving the files, the folders will contain the files depicted below.

offline sync

6.2.2. Turn offline node online again

To configure the node online again, simply remove the parameter entries from step 2 above. As an additional step to save resources, the offline jobs from step 1 can also be stopped if no nodes are operating in offline mode.

6.3. Encrypted Passwords

The db.user and db.password properties will accept encrypted text, which protects against casual observation. The text is prefixed with enc: to indicate that it is encrypted. To encrypt text, use the following command:

symadmin -e {engine name} encrypt-text text-to-encrypt

or

symadmin -p {properties file} encrypt-text text-to-encrypt
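The encrypted output can then be pasted into the engine properties file with the enc: prefix, for example (the ciphertext shown is only a placeholder):

# placeholder ciphertext produced by the encrypt-text command
db.user=enc:EaSQfiGmrFQxZR0placeholder
db.password=enc:V2rml8sEq3peWTnplaceholder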

The text is encrypted using a secret key named "sym.secret" that is retrieved from a keystore file. By default, the keystore is located in security/keystore. The location and filename of the keystore can be overridden by setting the "sym.keystore.file" system property. If the secret key is not found, the system will generate and install a secret key for use with Triple DES cipher.

Generate a new secret key for encryption using the keytool command that comes with the JRE. If there is an existing key in the keystore, first remove it:

keytool -keystore keystore -storepass changeit -storetype jceks \
   -alias sym.secret -delete

Then generate a secret key, specifying a cipher algorithm and key size. Commonly used algorithms that are supported include aes, blowfish, desede, and rc4.

keytool -keystore keystore -storepass changeit -storetype jceks \
   -alias sym.secret -genseckey -keyalg aes -keysize 128

If using an alternative provider, place the provider JAR file in the SymmetricDS lib folder. The provider class name should be installed in the JRE security properties or specified on the command line. To install in the JRE, edit the JRE lib/security/java.security file and set a security.provider.i property for the provider class name. Or, the provider can be specified on the command line instead. Both keytool and sym accept command line arguments for the provider class name. For example, using the Bouncy Castle provider, the command line options would look like:

keytool -keystore keystore -storepass changeit -storetype jceks \
   -alias sym.secret -genseckey -keyalg idea -keysize 56 \
   -providerClass org.bouncycastle.jce.provider.BouncyCastleProvider \
   -providerPath ..\lib\bcprov-ext.jar
symadmin -providerClass org.bouncycastle.jce.provider.BouncyCastleProvider -e secret

To customize the encryption, write a Java class that implements the ISecurityService or extends the default SecurityService, and place the class on the classpath in either lib or web/WEB-INF/lib folders. Then, in the symmetric.properties specify your class name for the security service.

security.service.class.name=org.jumpmind.security.SecurityService

Remember to specify your properties file when encrypting passwords, so it will use your custom ISecurityService.

symadmin -p symmetric.properties -e secret

6.4. Secure Transport

By specifying the "https" protocol for a URL, SymmetricDS will communicate over Secure Sockets Layer (SSL) for an encrypted transport. The following properties need to be set with "https" in the URL:

sync.url

This is the URL of the current node, so if you want to force other nodes to communicate over SSL with this node, you specify "https" in the URL.

registration.url

This is the URL where the node will connect for registration when it first starts up. To protect the registration with SSL, you specify "https" in the URL.

For incoming HTTPS connections, SymmetricDS depends on the webserver where it is deployed, so the webserver must be configured for HTTPS. As a standalone deployment, the "sym" launcher command provides options for enabling HTTPS support.

6.4.1. Sym Launcher

The "sym" launch command uses Jetty as an embedded web server. Using command line options, the web server can be told to listen for HTTP, HTTPS, or both.

sym --port 8080 --server
sym --secure-port 8443 --secure-server
sym --port 8080 --secure-port 8443 --mixed-server

6.4.2. Tomcat

If you deploy SymmetricDS to Apache Tomcat, it can be secured by editing the TOMCAT_HOME/conf/server.xml configuration file. There is already a line that can be uncommented and changed to the following:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
  maxThreads="150" scheme="https" secure="true"
  clientAuth="false" sslProtocol="TLS"
  keystoreFile="/symmetric-ds-1.x.x/security/keystore" />

6.4.3. Keystores

When SymmetricDS connects to a URL with HTTPS, Java checks the validity of the certificate using the built-in trusted keystore located at JRE_HOME/lib/security/cacerts. The "sym" launcher command overrides the trusted keystore to use its own trusted keystore instead, which is located at security/cacerts. This keystore contains the certificate aliased as "sym" for use in testing and easing deployments. The trusted keystore can be overridden by specifying the javax.net.ssl.trustStore system property.

When SymmetricDS is run as a secure server with the "sym" launcher, it accepts incoming requests using the key installed in the keystore located at security/keystore. The default key is provided for convenience of testing, but should be re-generated for security.

6.4.4. Generating Keys

To generate new keys and install a server certificate, use the following steps:

  • Open a command prompt and navigate to the security subdirectory of your SymmetricDS installation on the server to which communication will be secured (typically the "root" or "central office" server).

  • Delete the old key pair and certificate.

keytool -keystore keystore -delete -alias sym
If you receive a message like "keytool error: java.io.IOException: Invalid keystore format," try adding a parameter of "-storetype jceks". Or, if you receive a message like, "keytool error: java.lang.Exception: Alias <sym> does not exist" - then the alias does not exist and you can skip this step.
keytool -keystore cacerts -delete -alias sym
See above for possible errors from this command.
Enter keystore password:  changeit
  • Generate a new key pair. Note that the first name/last name (the "CN") must match the fully qualified hostname the client will be using to communicate with the server.

keytool -keystore keystore -alias sym -genkey -keyalg RSA -validity 10950
Enter keystore password:  changeit
What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  SymmetricDS
What is the name of your organization?
  [Unknown]:  JumpMind
What is the name of your City or Locality?
  [Unknown]:
What is the name of your State or Province?
  [Unknown]:
What is the two-letter country code for this unit?
  [Unknown]:
Is CN=localhost, OU=SymmetricDS, O=JumpMind, L=Unknown, ST=Unknown, C=Unknown
correct?
  [no]:  yes

Enter key password for <sym>
        (RETURN if same as keystore password):
  • Export the certificate from the private keystore.

keytool -keystore keystore -export -alias sym -rfc -file sym.cer
  • Install the certificate in the trusted keystore.

keytool -keystore cacerts -import -alias sym -file sym.cer
  • Copy the cacerts file that is generated by this process to the security directory of each client’s SymmetricDS installation.

6.4.5. Importing Signed Certificates from PKCS 12 files

You would use the following command to import a p12 certificate into the SymmetricDS keystore:

keytool -delete -alias sym -noprompt -keystore keystore -storetype jceks -storepass changeit

keytool -importkeystore -deststorepass changeit -destkeypass changeit -destkeystore keystore -storetype jceks -srckeystore {yourcert.p12} -srcstoretype PKCS12 -srcstorepass {pkcs12 password} -srcalias {pkcs12 alias} -destalias sym

6.4.6. Changing Keystore Password

The keystore and each key entry is protected with a password that defaults to "changeit". To change the password, use the following steps:

  • Open a command prompt and navigate to the security subdirectory of your SymmetricDS installation.

  • Use the keytool command to enter the old and new password for the keystore and each key entry.

keytool -keystore keystore -storetype jceks -storepasswd
keytool -keystore keystore -storetype jceks -alias sym -keypasswd
keytool -keystore keystore -storetype jceks -alias sym.secret -keypasswd
  • (Optional) Obfuscate the password to prevent casual observation from the configuration files. An obfuscated password starts with "obf:" while a cleartext password does not.

symadmin obfuscate-text changeit
  • Edit bin/setenv (or bin\setenv.bat on Windows) and conf/sym_service.conf files to find a similar line as below to change the password.

-Djavax.net.ssl.keyStorePassword=changeit

6.5. Java Management Extensions

Monitoring and administrative operations can be performed using Java Management Extensions (JMX). SymmetricDS uses MX4J to expose JMX attributes and operations that can be accessed from the built-in web console, Java’s jconsole, or an application server. By default, the web management console can be opened from the following address:

http://localhost:31416/

In order to use jconsole, you must enable JMX remote management in the JVM. You can edit the startup scripts to set the following system parameters.

-Dcom.sun.management.jmxremote.port=31417
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false

More details about enabling JMX for JConsole can be found in the JConsole documentation.

Using the Java jconsole command, SymmetricDS is listed as a local process named SymmetricLauncher. In jconsole, SymmetricDS appears under the MBeans tab under the name defined by the engine.name property. The default value is SymmetricDS.

The management interfaces under SymmetricDS are organized as follows:

Node

administrative operations

Parameters

access to properties set through the parameter service

6.6. JMS Publishing

With the proper configuration, SymmetricDS can publish XML messages of captured data changes to JMS during routing or transactionally while loading synchronized data into a target database. The following explains how to publish to JMS during synchronization to the target database.

The XmlPublisherDatabaseWriterFilter is an IDatabaseWriterFilter that may be configured to publish specific tables as an XML message to a JMS provider. See Extensions for information on how to configure an extension point. If the publish to JMS fails, the batch will be marked in error, the loaded data for the batch will be rolled back, and the batch will be retried during the next synchronization run.

The following is an example extension point configuration that will publish four tables in XML with a root tag of 'sale'. Each XML message is grouped by batch and by matching values of the columns identified by the groupByColumnNames property.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <bean id="configuration-publishingFilter"
      class="org.jumpmind.symmetric.integrate.XmlPublisherDatabaseWriterFilter">
        <property name="xmlTagNameToUseForGroup" value="sale"/>
        <property name="tableNamesToPublishAsGroup">
            <list>
               <value>SALE_TX</value>
               <value>SALE_LINE_ITEM</value>
               <value>SALE_TAX</value>
               <value>SALE_TOTAL</value>
            </list>
        </property>
        <property name="groupByColumnNames">
            <list>
               <value>STORE_ID</value>
               <value>BUSINESS_DAY</value>
               <value>WORKSTATION_ID</value>
               <value>TRANSACTION_ID</value>
            </list>
        </property>
        <property name="publisher">
           <bean class="org.jumpmind.symmetric.integrate.SimpleJmsPublisher">
               <property name="jmsTemplate" ref="definedSpringJmsTemplate"/>
           </bean>
        </property>
    </bean>
</beans>

The publisher property on the XmlPublisherDatabaseWriterFilter takes an interface of type IPublisher. The implementation demonstrated here is an implementation that publishes to JMS using Spring’s JMS template. Other implementations of IPublisher could easily publish the XML to other targets like an HTTP server, the file system or secure copy it to another server.

The above configuration will publish XML similar to the following:

<?xml version="1.0" encoding="UTF-8"?>
<sale xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  id="0012010-01-220031234" nodeid="00001" time="1264187704155">
  <row entity="SALE_TX" dml="I">
    <data key="STORE_ID">001</data>
    <data key="BUSINESS_DAY">2010-01-22</data>
    <data key="WORKSTATION_ID">003</data>
    <data key="TRANSACTION_ID">1234</data>
    <data key="CASHIER_ID">010110</data>
  </row>
  <row entity="SALE_LINE_ITEM" dml="I">
    <data key="STORE_ID">001</data>
    <data key="BUSINESS_DAY">2010-01-22</data>
    <data key="WORKSTATION_ID">003</data>
    <data key="TRANSACTION_ID">1234</data>
    <data key="SKU">9999999</data>
    <data key="PRICE">10.00</data>
    <data key="DESC" xsi:nil="true"/>
  </row>
  <row entity="SALE_LINE_ITEM" dml="I">
    <data key="STORE_ID">001</data>
    <data key="BUSINESS_DAY">2010-01-22</data>
    <data key="WORKSTATION_ID">003</data>
    <data key="TRANSACTION_ID">1234</data>
    <data key="SKU">9999999</data>
    <data key="PRICE">10.00</data>
    <data key="DESC" xsi:nil="true"/>
  </row>
  <row entity="SALE_TAX" dml="I">
    <data key="STORE_ID">001</data>
    <data key="BUSINESS_DAY">2010-01-22</data>
    <data key="WORKSTATION_ID">003</data>
    <data key="TRANSACTION_ID">1234</data>
    <data key="AMOUNT">1.33</data>
  </row>
  <row entity="SALE_TOTAL" dml="I">
    <data key="STORE_ID">001</data>
    <data key="BUSINESS_DAY">2010-01-22</data>
    <data key="WORKSTATION_ID">003</data>
    <data key="TRANSACTION_ID">1234</data>
    <data key="AMOUNT">21.33</data>
  </row>
</sale>

To publish JMS messages during routing, the same pattern is valid, with the exception that the extension point would be the XmlPublisherDataRouter and the router would be configured by setting the router_type of a ROUTER to the Spring bean name of the registered extension point. Of course, the router would need to be linked through TRIGGER_ROUTERs to each TRIGGER that needs to be published.

6.7. File Synchronization

SymmetricDS not only supports the synchronization of database tables, but it also supports the synchronization of files and folders from one node to another.

6.7.1. File Synchronization Overview

File synchronization features include:
  • Monitoring one or more file system directory locations for file and folder changes

  • Support synchronizing a different target directory than the source directory

  • Use of wild card expressions to “include” or “exclude” files

  • Choice of whether to recurse into subfolders of monitored directories

  • Use of existing SymmetricDS routers to subset target nodes based on file and directory metadata

  • Ability to specify whether files will be synchronized on creation, deletion, and/or modification

  • Ability to specify the frequency with which file systems are monitored for changes

  • Ability to extend file synchronization through scripts that run before or after a file is copied to its source location

  • Support for bidirectional file synchronization

  • Like database synchronization, file synchronization is configured in a series of database tables. The configuration was designed to be similar to database synchronization in order to maintain consistency and to give database synchronization users a sense of familiarity.

For database synchronization, SymmetricDS uses Table Triggers to configure which tables will capture data for synchronization and Routers to designate which nodes will be the source of data changes and which nodes will receive the data changes. Table Routing links triggers to routers.

Likewise, for file synchronization, SymmetricDS uses File Triggers to designate which base directories will be monitored. Each entry in File Triggers designates one base directory to monitor for changes on the source system. The columns on File Triggers provide additional settings for choosing specific files in the base directory that will be monitored, and whether to recurse into subdirectories, etc. File triggers are linked to routers using File Routing. The file trigger router not only links the source and the target node groups, but it also optionally provides the ability to override the base directory name at the target. File Routing also provides a flag that indicates if the target node should be seeded with the files from the source node during SymmetricDS’s initial load process.

File synchronization does require a database for runtime information about the synchronization scenario. File Triggers also need to be linked to an appropriate router, just as table triggers are, in order to complete the setup.
The H2 database works well as a small, lightweight database to support file synchronization runtime information if you do not have a relational database readily available to support file sync.

6.7.2. How File Synchronization Works

Not only is file synchronization configured similar to database synchronization, but it also operates in a very similar way. The file system is monitored for changes via a background job that tracks the file system changes (this parallels the use of triggers to monitor for changes when synchronizing database changes). When a change is detected it is written to the FILE_SNAPSHOT table. The file snapshot table represents the most recent known state of the monitored files. The file snapshot table has a SymmetricDS database trigger automatically installed on it so that when it is updated the changes are captured by SymmetricDS on an internal channel named filesync.

The changes to FILE_SNAPSHOT are then routed and batched by a file-synchronization-specific router that delegates to the configured router based on the File Routing configuration. The file sync router can make routing decisions based on the column data of the snapshot table, columns which contain attributes of the file like the name, path, size, and last modified time. Both old and new file snapshot data are also available. The router can, for example, parse the path or name of the file and use it as the node id to route to.

Batches of file snapshot changes are stored on the filesync channel in OUTGOING_BATCH. The existing SymmetricDS pull and push jobs ignore the filesync channel. Instead, they are processed by file-synchronization-specific push and pull jobs.

When transferring data, the file sync push and pull jobs build a zip file dynamically based on the batched snapshot data. The zip file contains a directory per batch. The directory name is the batch_id. A sync.bsh Bean Shell script is generated and placed in the root of each batch directory. The Bean Shell script contains the commands to copy or delete files at their file destination from an extracted zip in the staging directory on the target node. The zip file is downloaded in the case of a pull, or, in the case of a push, is uploaded as an HTTP multi-part attachment. Outgoing zip files are written and transferred from the outgoing staging directory. Incoming zip files are staged in the filesync_incoming staging directory by source node id. The filesync_incoming/{node_id} staging directory is cleared out before each subsequent delivery of files.

The acknowledgement of a batch happens the same way it is acknowledged in database synchronization. The client responds with an acknowledgement as part of the response during a file push or pull.

7. Developer

This chapter focuses on a variety of ways for developers to build upon and extend some of the existing features found within SymmetricDS.

7.1. Extension Points

SymmetricDS has a pluggable architecture that can be extended. A Java class that implements the appropriate extension point interface can implement custom logic and change the behavior of SymmetricDS to suit special needs. All supported extension points extend the IExtensionPoint interface. The available extension points are documented in the following sections.

When SymmetricDS starts up, the ExtensionPointManager searches a Spring Framework context for classes that implement the IExtensionPoint interface, then creates and registers the class with the appropriate SymmetricDS component.

Extensions should be configured in the conf/symmetric-extensions.xml file as Spring beans. The jar file that contains the extension should be placed in the web/WEB-INF/lib directory.

If an extension point needs access to SymmetricDS services or needs to connect to the database it may implement the ISymmetricEngineAware interface in order to get a handle to the ISymmetricEngine.

The INodeGroupExtensionPoint interface may be optionally implemented to indicate that a registered extension point should only be registered with specific node groups.

/**
 * Only apply this extension point to the 'root' node group.
 */
 public String[] getNodeGroupIdsToApplyTo() {
     return new String[] { "root" };
 }
IParameterFilter

Parameter values can be specified in code using a parameter filter. Note that there can be only one parameter filter per engine instance. The IParameterFilter replaces the deprecated IRuntimeConfig from prior releases.

public class MyParameterFilter
    implements IParameterFilter, INodeGroupExtensionPoint {

    /**
     * Only apply this filter to stores
     */
    public String[] getNodeGroupIdsToApplyTo() {
        return new String[] { "store" };
    }

    public String filterParameter(String key, String value) {
        // look up a store number from an already existing properties file.
        if (key.equals(ParameterConstants.EXTERNAL_ID)) {
            return StoreProperties.getStoreProperties().
              getProperty(StoreProperties.STORE_NUMBER);
        }
        return value;
    }

    public boolean isAutoRegister() {
        return true;
    }

}
IDatabaseWriterFilter

Data can be filtered or manipulated before it is loaded into the target database. A filter can change the data in a column, save it somewhere else or do something else with the data entirely. It can also specify by the return value of the function call that the data loader should continue on and load the data (by returning true) or ignore it (by returning false). One possible use of the filter, for example, might be to route credit card data to a secure database and blank it out as it loads into a less-restricted reporting database.

A DataContext is passed to each of the callback methods. A new context is created for each synchronization. The context provides a mechanism to share data during the load of a batch between different rows of data that are committed in a single database transaction.

The filter also provides callback methods for the batch lifecycle. The DatabaseWriterFilterAdapter may be used if not all methods are required.

A class implementing the IDatabaseWriterFilter interface is injected onto the DataLoaderService in order to receive callbacks when data is inserted, updated, or deleted.

public class MyFilter extends DatabaseWriterFilterAdapter {

    @Override
    public boolean beforeWrite(DataContext context, Table table, CsvData data) {
        if (table.getName().equalsIgnoreCase("CREDIT_CARD_TENDER")
                && data.getDataEventType().equals(DataEventType.INSERT)) {
            String[] parsedData = data.getParsedData(CsvData.ROW_DATA);
            // blank out credit card number
            parsedData[table.getColumnIndex("CREDIT_CARD_NUMBER")] = null;
        }
        return true;
    }
}

The filter class should be specified in conf/symmetric-extensions.xml as follows.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <bean id="myFilter" class="com.mydomain.MyFilter"/>

</beans>
IDatabaseWriterErrorHandler

Implement this extension point to override how errors are handled. You can use this extension point to ignore rows that produce foreign key errors.

IDataLoaderFactory

Implement this extension point to provide a different implementation of the org.jumpmind.symmetric.io.data.IDataWriter that is used by the SymmetricDS data loader. Data loaders are configured for a channel. After this extension point is registered it can be activated for a CHANNEL by indicating the data loader name in the data_loader_type column.
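For example, assuming a data loader extension registered under the hypothetical name 'myBulkLoader', it could be activated for a hypothetical 'sales' channel by updating the channel’s data_loader_type column:

update sym_channel
   set data_loader_type = 'myBulkLoader'
 where channel_id = 'sales';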

SymmetricDS has two out of the box extensions of IDataLoaderFactory already implemented in its PostgresBulkDataLoaderFactory and OracleBulkDataLoaderFactory classes. These extension points implement bulk data loading capabilities for Oracle, Postgres and Greenplum dialects. See Appendix C. Database Notes for details.

Another possible use of this extension point is to route data to a NOSQL data sink.

IAcknowledgeEventListener

Implement this extension point to receive callback events when a batch is acknowledged. The callback for this listener happens at the point of extraction.

IReloadListener

Implement this extension point to listen in and take action before or after a reload is requested for a Node. The callback for this listener happens at the point of extraction.

ISyncUrlExtension

This extension point is used to select an appropriate URL based on the URI provided in the sync_url column of sym_node.

To use this extension point configure the sync_url for a node with the protocol of ext://beanName. The beanName is the name you give the extension point in the extension xml file.

IColumnTransform

This extension point allows custom column transformations to be created. There are a handful of out-of-the-box implementations. If any of these do not meet the column transformation needs of the application, then a custom transform can be created and registered. It can be activated by referencing the column transform’s name in the transform_type column of TRANSFORM_COLUMN.

INodeIdCreator

This extension point allows SymmetricDS users to implement their own algorithms for how node ids and passwords are generated or selected during the registration process. There may be only one node creator per SymmetricDS instance (Please note that the node creator extension has replaced the node generator extension).

ITriggerCreationListener

Implement this extension point to get status callbacks during trigger creation.

IBatchAlgorithm

Implement this extension point and set the name of the Spring bean on the batch_algorithm column of the Channel table to use. This extension point gives fine grained control over how a channel is batched.
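For example, assuming a custom batch algorithm registered as a Spring bean named 'myBatchAlgorithm', it could be applied to a hypothetical 'sales' channel as follows:

update sym_channel
   set batch_algorithm = 'myBatchAlgorithm'
 where channel_id = 'sales';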

IDataRouter

Implement this extension point and set the name of the Spring bean on the router_type column of the Router table to use. This extension point gives the ability to programmatically decide which nodes data should be routed to.
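For example, assuming a custom router registered as a Spring bean named 'myDataRouter', a hypothetical router row could reference it like this:

update sym_router
   set router_type = 'myDataRouter'
 where router_id = 'corp-to-store';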

IHeartbeatListener

Implement this extension point to get callbacks during the heartbeat job.

IOfflineClientListener

Implement this extension point to get callbacks for offline events on client nodes.

IOfflineServerListener

Implement this extension point to get callbacks for offline events detected on a server node during monitoring of client nodes.

INodePasswordFilter

Implement this extension point to intercept the saving and rendering of the node password.

7.2. Embedding in Android

SymmetricDS has its web-enabled, fault-tolerant, database synchronization software available on the Android mobile computing platform. The Android client follows all of the same concepts and brings to Android all of the same core SymmetricDS features as the full-featured, Java-based SymmetricDS client. The Android client is a little bit different in that it is not a stand-alone application, but is designed to be referenced as a library to run in-process with an Android application requiring synchronization for its SQLite database.

By using SymmetricDS, mobile application development is simplified, in that the mobile application developer can now focus solely on interacting with their local SQLite database. SymmetricDS takes care of capturing and moving data changes to and from a centralized database when the network is available.

The same core libraries that are used for the SymmetricDS server are also used for Android. SymmetricDS’s overall footprint is reduced by eliminating a number of external dependencies in order to fit better on an Android device. The database access layer is abstracted so that an Android-specific database access layer can be used. This allows SymmetricDS to be efficient in accessing the SQLite database on the Android device.

In order to convey how to use the SymmetricDS Android libraries, the example below will show how to integrate SymmetricDS into the NotePad sample application that comes with the Android ADK.

The NotePad sample application is a very simple task list application that persists notes to a SQLite database table called Notes. Eclipse 3.7.2 and Android ADK 20.0.3 were used for this example. The NOTES table definition is below.

CREATE TABLE NOTES (
_ID VARCHAR(50) PRIMARY KEY,
TITLE TEXT,
NOTE TEXT,
CREATED VARCHAR(50),
MODIFIED VARCHAR(50)
);

Create the NotePad project. You do this by adding a new Android Sample Project. Select the NotePad project.

sync android 1
Figure 2. New Sample NotePad Project

SymmetricDS for Android comes as a zip file of Java archives (jar files) that are required by the SymmetricDS client at runtime. This zip file (symmetric-ds-3.4.7-android.zip) can be downloaded from the SymmetricDS.org website. The first step to using SymmetricDS in an Android application is to unzip the jar files into a location where the project will recognize them. The latest Android SDK and the Eclipse ADK require that these jar files be put into a libs directory under the Android application project.

sync android 2
Figure 3. New Sample NotePad Project

Unzip the symmetric-ds-x.x.x-android.zip file to the NotePad project directory. Refresh the NotePad project in Eclipse. You should end up with a libs directory that is automatically added to the Android Dependencies.

sync android 3
Figure 4. Jar Files Added to Libs

The Android version of the SymmetricDS engine is a Java class that can be instantiated directly or wired into an application via a provided Android service. Whether you are using the service or the engine directly you need to provide a few required startup parameters to the engine:

SQLiteOpenHelper

It is best (but not required) if the SQLiteOpenHelper is shared with the application that will be sharing the SQLite database. This core Android Java class provides software synchronization around the access to the database and minimizes locking errors.

registrationUrl

This is the URL of where the centralized SymmetricDS instance is hosted.

externalId

This is the identifier that can be used by the centralized SymmetricDS server to identify whether this instance should get data changes that happen on the server. It could be the serial number of the device, an account username, or some other business concept like store number.

nodeGroupId

This is the group id for the mobile device in the synchronization configuration. For example, if the nodeGroupId is 'handheld', then the SymmetricDS configuration might have a group called 'handheld' and a group called 'corp' where 'handheld' is configured to push and pull data from 'corp.'

properties

Optionally tweak the settings for SymmetricDS.

In order to integrate SymmetricDS into the NotePad application, the Android-specific SymmetricService will be used, and we need to tell the Android application this by adding the service to the AndroidManifest.xml file. Add the following snippet to the Manifest as the last entry under the <application> tag.

<service android:name="org.jumpmind.symmetric.android.SymmetricService"
android:enabled="true" >
    <intent-filter>
                  <action android:name="org.jumpmind.symmetric.android.
                  SymmetricService" />
          </intent-filter>
</service>

The other change required in the Manifest is to give the application permission to use the Internet. Add this as the first entry in the AndroidManifest.xml right before the <application> tag.

<uses-permission android:name="android.permission.INTERNET"></uses-permission>

The only additional change needed is the call to start the service in the application. The service needs to be started manually because we need to give the application a chance to provide configuration information to the service.

In NotePadProvider.java add the following code snippet in the onCreate method.

sync android 4
Figure 5. NotePadProvider.java
final String HELPER_KEY = "NotePadHelperKey";

// Register the database helper, so it can be shared with the SymmetricService
SQLiteOpenHelperRegistry.register(HELPER_KEY, mOpenHelper);

Intent intent = new Intent(getContext(), SymmetricService.class);

// Notify the service of the database helper key
intent.putExtra(SymmetricService.INTENTKEY_SQLITEOPENHELPER_REGISTRY_KEY,
HELPER_KEY);
intent.putExtra(SymmetricService.INTENTKEY_REGISTRATION_URL,
"http://10.0.2.2:31415/sync/server");
intent.putExtra(SymmetricService.INTENTKEY_EXTERNAL_ID,
"android-simulator");
intent.putExtra(SymmetricService.INTENTKEY_NODE_GROUP_ID, "client");
intent.putExtra(SymmetricService.INTENTKEY_START_IN_BACKGROUND,
true);
Properties properties = new Properties();
intent.putExtra(SymmetricService.INTENTKEY_PROPERTIES, properties);

getContext().startService(intent);

This code snippet shows how the SQLiteOpenHelper is shared. The application’s SQLiteOpenHelper is registered in a static registry provided by the SymmetricDS Android library. When the service is started, the key used to store the helper is passed to the service so that the service may pull the helper back out of the registry.

The various parameters needed by SymmetricDS are being set in the Intent which will be used by the SymmetricService to start the engine.

Next, set up an Android Emulator. This can be done by opening the Android Virtual Device Manager. Click New and follow the steps. The higher the Emulator’s API, the better.

Run your NotePad project by pressing Run on NotePadProvider.java in Eclipse. When prompted, select the emulator you just created. Monitor the Console in Eclipse. Let the NotePad.apk install on the emulator. Now watch the LogCat and wait as it attempts to register with your SymmetricDS Master Node.

7.3. Embedding in C/C++

A minimal implementation of the SymmetricDS client is written in C, which includes a shared library named "libsymclient" and a command line executable named "sym" for synchronizing a database. It currently only supports the SQLite database. The SymmetricDS C library and client are built from the following projects:

symmetric-client-clib

This project contains most of the code and builds the libsymclient C library. It depends on libcurl, libsqlite3, and libcsv.

symmetric-client-clib-test

This project links against the C library to run unit tests. It also depends on the CUnit library.

symmetric-client-native

This project links against the C library to build the sym executable.

The binaries are built using Eclipse CDT (C/C++ Development Tooling), which is an Integrated Development Environment based on the Eclipse platform. A distribution of Eclipse CDT can be downloaded or an existing Eclipse installation can be updated to include the CDT. (See https://eclipse.org/cdt/ for information and downloads.) In the future, the projects above will switch to a general build system like Autotools for automating builds, but for now Eclipse is required.

The "sym" executable can be run from the command line and expects the "libsymclient.so" library to be installed on the system. If running from the project directories during development, the path to the library can be specified with the LD_LIBRARY_PATH environment variable on Linux, the DYLD_LIBRARY_PATH on Mac OS X, or PATH on Windows. The executable will look for a "symmetric.properties" file containing startup parameters in the user’s home directory or in the current directory:

LD_LIBRARY_PATH=../../symmetric-client-clib/Debug ./sym

It will also accept an argument of the path and filename of the properties file to use:

LD_LIBRARY_PATH=../../symmetric-client-clib/Debug ./sym /path/to/client.properties

The client uses Startup Parameters to connect to a database, identify itself, and register with a server to request synchronization. Here is an example client.properties file:

db.url=sqlite:file:test.db
group.id=store
external.id=003
registration.url=http://localhost:31415/sync/corp-000

The symmetric-client-native project is an example of how to use the SymEngine API provided by the C library. The C library uses an object-oriented pattern and follows the same naming conventions as the Java project. All symbol names in the C library are prefixed with "Sym". Each Java class is represented in C with a struct that contains member data and pointers to member functions. Here is an example C program that runs the SymmetricDS engine:

#include "libsymclient.h"

int main(int argCount, char **argValues) {

    // Startup and runtime parameters
    SymProperties *prop = SymProperties_new(NULL);
    prop->put(prop, SYM_PARAMETER_DB_URL, "sqlite:file:data.db");
    prop->put(prop, SYM_PARAMETER_GROUP_ID, "store");
    prop->put(prop, SYM_PARAMETER_EXTERNAL_ID, "003");
    prop->put(prop, SYM_PARAMETER_REGISTRATION_URL, "http://localhost:31415/sync/corp-000");

    // Uncomment to read parameters from a file instead
    //SymProperties *prop = SymProperties_newWithFile(NULL, fileName);

    SymEngine *engine = SymEngine_new(NULL, prop);
    // Connects to database, creates config/runtime tables and triggers
    engine->start(engine);

    // Pull changes from remote nodes
    engine->pull(engine);

    // Create batches of captured changes
    engine->route(engine);

    // Push changes to remote nodes
    engine->push(engine);

    // Create a heartbeat batch with current host information
    engine->heartbeat(engine, 0);

    // Purge old batch data that has successfully synced
    engine->purge(engine);

    // Clean up
    engine->stop(engine);
    engine->destroy(engine);
    prop->destroy(prop);

    return 0;
}

8. By Example

This chapter focuses on using examples for a variety of use cases with SymmetricDS.

8.1. Kafka Integration

Use SymmetricDS to capture changes in your database and publish the changes to a Kafka message queue.

8.1.1. Kafka Setup

If you already have a Kafka server running, proceed to step 2. Otherwise, download and follow the quick start guide provided by Kafka.

https://kafka.apache.org/quickstart

If you’re using the quick start, you can run through steps 1-5 and finish by setting up a consumer. This will allow you to see the messages that arrive on your Kafka queue from SymmetricDS.

8.1.2. SymmetricDS Setup

  • Create a new extension point for the IDatabaseWriterFilter implementation. For this example we will set up a Java-based implementation that will write either CSV or JSON data to the Kafka queue for the "client" target node group.

Java based extension points require a full JDK (not just the JRE).
Run the following SQL to create the Kafka extension.
insert into SYM_EXTENSION (EXTENSION_ID, EXTENSION_TYPE, INTERFACE_NAME, NODE_GROUP_ID, ENABLED, EXTENSION_ORDER, EXTENSION_TEXT, CREATE_TIME, LAST_UPDATE_BY, LAST_UPDATE_TIME) values ('KafkaDataWriter','java','org.jumpmind.symmetric.io.data.writer.IDatabaseWriterFilter','client',1,1,'
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import org.apache.commons.io.FileUtils;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.jumpmind.db.model.Table;
import org.jumpmind.symmetric.io.data.CsvData;
import org.jumpmind.symmetric.io.data.DataContext;
import org.jumpmind.symmetric.io.data.DataEventType;
import org.jumpmind.symmetric.io.data.writer.IDatabaseWriterFilter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaWriterFilter implements IDatabaseWriterFilter {
    protected final String KAKFA_TEXT_CACHE = "KAKFA_TEXT_CACHE" + this.hashCode();

    private final Logger log = LoggerFactory.getLogger(getClass());

    public boolean beforeWrite(DataContext context, Table table, CsvData data) {
        if (table.getName().toUpperCase().startsWith("SYM_")) {
        return true;
        }
        else {
            log.info("Processing table " + table + " for Kafka");

            String[] rowData = data.getParsedData(CsvData.ROW_DATA);
            if (data.getDataEventType() == DataEventType.DELETE) {
                rowData = data.getParsedData(CsvData.OLD_DATA);
            }

            StringBuffer kafkaText = new StringBuffer();
            if (context.get(KAKFA_TEXT_CACHE) != null) {
                kafkaText = (StringBuffer) context.get(KAKFA_TEXT_CACHE);
            }

            boolean useJson = false;

            if (useJson) {
                kafkaText.append("{\"")
                    .append(table.getName())
                    .append("\": {")
                    .append("\"eventType\": \"" + data.getDataEventType() + "\",")
                    .append("\"data\": { ");
                for (int i = 0; i < table.getColumnNames().length; i++) {
                    kafkaText.append("\"" + table.getColumnNames()[i] + "\": \"" + rowData[i]);
                    if (i + 1 < table.getColumnNames().length) {
                        kafkaText.append("\",");
                    }
                }
                kafkaText.append(" } } }");
            }
            else {
                kafkaText.append("\nTABLE")
                    .append(",")
                    .append(table.getName())
                    .append(",")
                    .append("EVENT")
                    .append(",")
                    .append(data.getDataEventType())
                    .append(",");

                for (int i = 0; i < table.getColumnNames().length; i++) {
                    kafkaText.append(table.getColumnNames()[i])
                        .append(",")
                        .append(rowData[i]);
                    if (i + 1 < table.getColumnNames().length) {
                        kafkaText.append(",");
                    }
                }
            }
            context.put(KAKFA_TEXT_CACHE, kafkaText);
        }
        return false;
    }

    public void afterWrite(DataContext context, Table table, CsvData data) {
    }

    public boolean handlesMissingTable(DataContext context, Table table) {
        return true;
    }

    public void earlyCommit(DataContext context) {
    }

    public void batchComplete(DataContext context) {
        if (!context.getBatch().getChannelId().equals("heartbeat") && !context.getBatch().getChannelId().equals("config")) {
            String batchFileName = "batch-" + context.getBatch().getSourceNodeId() + "-" + context.getBatch().getBatchId();
            log.info("Processing batch " + batchFileName + " for Kafka");
            try {
                File batchesDir = new File("batches");
                if (!batchesDir.exists()) {
                    batchesDir.mkdir();
                }
                File batchFile = new File(batchesDir.getAbsoluteFile() + "/" + batchFileName);

                if (context.get(KAKFA_TEXT_CACHE) != null) {
                    String kafkaText =  ((StringBuffer) context.get(KAKFA_TEXT_CACHE)).toString();
                    // write the batch payload (not the cache key) to the batch file
                    FileUtils.writeStringToFile(batchFile, kafkaText);
                    sendKafkaMessage(kafkaText);
                } else {
                    log.info("No text found to write to kafka queue");
                }
            }
            catch (Exception e) {
                log.warn("Unable to write batch to Kafka " + batchFileName, e);
                e.printStackTrace();
            }
        }
    }

    public void batchCommitted(DataContext context) {
    }

    public void batchRolledback(DataContext context) {
    }

    public void sendKafkaMessage(String kafkaText) {
        Map<String,Object> configs = new HashMap<String, Object>();

        configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configs.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        configs.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        configs.put(ProducerConfig.CLIENT_ID_CONFIG, "symmetricds-producer");

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(configs);

        producer.send(new ProducerRecord<String, String>("test", kafkaText));
        log.debug("Data to be sent to Kafka-" + kafkaText);

        producer.close();
    }
}

',{ts '2017-01-09 10:58:17.981'},'admin',{ts '2017-01-09 13:04:37.490'});
  • The default kafka server and port are set to localhost:9092 with a client id of "symmetricds-producer". You will need to adjust these variables in the sendKafkaMessage function to match your Kafka setup if they are different.

configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
configs.put(ProducerConfig.CLIENT_ID_CONFIG, "symmetricds-producer");
  • JSON or CSV output can be adjusted at line 40 of the script. The script defaults to CSV. By setting this variable at line 40 to true, JSON will be sent to the queue instead. Additional implementations for XML or other formats could be added here if necessary.

// Line 40
boolean useJson = false;
  • Testing. You are now ready to test your Kafka messaging. Make a change to a table that is configured to replicate to the target node group used in step 1 of this example. This example was set up for the 'client' node group, so any changes that are designed to replicate to the client node group will run through this extension point and should be sent to your Kafka queue.

Appendix A: Data Model

What follows is the complete SymmetricDS data model. Note that all tables are prepended with a configurable prefix so that multiple instances of SymmetricDS may coexist in the same database. The default prefix is sym_.

SymmetricDS configuration is entered by the user into the data model to control the behavior of what data is synchronized to which nodes.

data model config
Figure 6. Configuration Data Model

At runtime, the configuration is used to capture data changes and route them to nodes. The data changes are placed together in a single unit called a batch that can be loaded by another node. Outgoing batches are delivered to nodes and acknowledged. Incoming batches are received and loaded. History is recorded for batch status changes and statistics.

data model runtime
Figure 7. Runtime Data Model

A.1. CHANNEL

This table represents a category of data that can be synchronized independently of other channels. Channels allow control over the type of data flowing and prevent one type of synchronization from contending with another.

Table 18. CHANNEL

Name

Type

Size

Default

Keys

Not Null

Description

CHANNEL_ID

VARCHAR

128

PK

X

A unique identifier, usually named something meaningful, like 'sales' or 'inventory'.

PROCESSING_ORDER

INTEGER

1

X

Order of sequence to process channel data.

MAX_BATCH_SIZE

INTEGER

1000

X

The maximum number of Data Events to process within a batch for this channel.

MAX_BATCH_TO_SEND

INTEGER

60

X

The maximum number of batches to send during a 'synchronization' between two nodes. A 'synchronization' is equivalent to a push or a pull. If there are 12 batches ready to be sent for a channel and max_batch_to_send is equal to 10, then only the first 10 batches will be sent.

MAX_DATA_TO_ROUTE

INTEGER

100000

X

The maximum number of data rows to route for a channel at a time.

EXTRACT_PERIOD_MILLIS

INTEGER

0

X

The minimum number of milliseconds allowed between attempts to extract data targeted at a node_id.

ENABLED

TINYINT

1

1

X

Indicates whether channel is enabled or not.

USE_OLD_DATA_TO_ROUTE

TINYINT

1

1

X

Indicates whether to read the old data during routing.

USE_ROW_DATA_TO_ROUTE

TINYINT

1

1

X

Indicates whether to read the row data during routing.

USE_PK_DATA_TO_ROUTE

TINYINT

1

1

X

Indicates whether to read the pk data during routing.

RELOAD_FLAG

TINYINT

1

0

X

Indicates that this channel is used for reloads.

FILE_SYNC_FLAG

TINYINT

1

0

X

Indicates that this channel is used for file sync.

CONTAINS_BIG_LOB

TINYINT

1

0

X

Provides SymmetricDS a hint on how to treat captured data. Currently only supported by Oracle, Interbase, and Firebird. If set to '0', selects for routing and data extraction will be more efficient, and LOBs will be truncated at 4k in the trigger text; there is then a 4k limit on the total size of a row and on the size of a LOB column. Note that when switching this value back and forth, triggers need to be forced to regenerate.

BATCH_ALGORITHM

VARCHAR

50

default

X

The algorithm to use when batching data on this channel. Possible values are: 'default', 'transactional', and 'nontransactional'

DATA_LOADER_TYPE

VARCHAR

50

default

X

Identify the type of data loader this channel should use. Allows for the default dataloader to be swapped out via configuration for more efficient platform specific data loaders.

DESCRIPTION

VARCHAR

255

Description on the type of data carried in this channel.

QUEUE

VARCHAR

25

default

X

User-provided queue name for the channel to operate on. Creates multi-threaded channels. Defaults to the 'default' thread.

MAX_NETWORK_KBPS

DECIMAL

10,3

0.000

X

The maximum network transfer rate in kilobytes per second. Zero or negative means unlimited. Channels running in serial or parallel can have an effect on how much bandwidth can be used and when a channel will be processed. This is currently only implemented when staging is enabled.

DATA_EVENT_ACTION

CHAR

1

For a node group link with a data event action of B (both), select how to send changes to the target node group. (P = Push, W = Wait for Pull)

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
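
As a rough sketch, a channel can be defined by inserting a row into this table (shown with the default sym_ prefix; the channel id, processing order, and description are placeholders, and omitted columns fall back to the defaults listed above):

insert into sym_channel
  (channel_id, processing_order, max_batch_size, enabled, description)
values
  ('sales', 10, 1000, 1, 'Sales transaction data');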

A.2. CONFLICT

Defines how conflicts in row data should be handled during the load process.

Table 19. CONFLICT

Name

Type

Size

Default

Keys

Not Null

Description

CONFLICT_ID

VARCHAR

50

PK

X

Unique identifier for a specific conflict detection setting.

SOURCE_NODE_GROUP_ID

VARCHAR

50

FK

X

The source node group to which this setting will be applied. References a node group link.

TARGET_NODE_GROUP_ID

VARCHAR

50

FK

X

The target node group to which this setting will be applied. References a node group link.

TARGET_CHANNEL_ID

VARCHAR

128

Optional channel that this setting will be applied to.

TARGET_CATALOG_NAME

VARCHAR

255

Optional database catalog that the target table belongs to. Only use this if the target table is not in the default catalog.

TARGET_SCHEMA_NAME

VARCHAR

255

Optional database schema that the target table belongs to. Only use this if the target table is not in the default schema.

TARGET_TABLE_NAME

VARCHAR

255

Optional database table that this setting will apply to. If left blank, the setting will be for any table in the channel (if set) and in the specified node group link.

DETECT_TYPE

VARCHAR

128

X

Indicates the strategy to use for detecting conflicts during a dml action. The possible values are: use_pk_data (manual, fallback, ignore), use_changed_data (manual, fallback, ignore), use_old_data (manual, fallback, ignore), use_timestamp (newer_wins), use_version (newer_wins)

DETECT_EXPRESSION

LONGVARCHAR

An expression that provides additional information about the detection mechanism. If the detection mechanism is use_timestamp or use_version then this expression will be the name of the timestamp or version column.

RESOLVE_TYPE

VARCHAR

128

X

Indicates the strategy for resolving update conflicts. The possible values differ based on the detect_type that is specified.

PING_BACK

VARCHAR

128

X

Indicates the strategy for sending resolved conflicts back to the source system. Possible values are: OFF, SINGLE_ROW, and REMAINING_ROWS.

RESOLVE_CHANGES_ONLY

TINYINT

1

0

Indicates that when applying changes during an update, only the columns that have changed should be applied; otherwise, all columns are updated. This only applies to updates.

RESOLVE_ROW_ONLY

TINYINT

1

0

Indicates that an action should take place for the entire batch if possible. This applies to a resolve type of 'ignore'. If a row is in conflict and the resolve type is 'ignore', then the entire batch will be ignored.

CREATE_TIME

TIMESTAMP

X

The date and time when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

The date and time when a user last updated this entry.
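
As an illustrative sketch only (the conflict id, node group ids, and timestamp column name are placeholders), a newest-wins detector based on a last-update timestamp column could be configured like this, assuming the default sym_ prefix:

insert into sym_conflict
  (conflict_id, source_node_group_id, target_node_group_id,
   detect_type, detect_expression, resolve_type, ping_back,
   create_time, last_update_time)
values
  ('corp-2-store-ts', 'corp', 'store',
   'use_timestamp', 'last_update_time', 'newer_wins', 'OFF',
   current_timestamp, current_timestamp);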

A.3. CONTEXT

Context variables used by runtime services on a single node

Table 20. CONTEXT

Name

Type

Size

Default

Keys

Not Null

Description

NAME

VARCHAR

80

PK

X

The name of the context variable.

CONTEXT_VALUE

LONGVARCHAR

The value of the context variable.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when this entry was last updated.

A.4. DATA

The captured data change that occurred to a row in the database. Entries in data are created by database triggers.

Table 21. DATA

Name

Type

Size

Default

Keys

Not Null

Description

DATA_ID

BIGINT

PK

X

Unique identifier for a data.

TABLE_NAME

VARCHAR

255

X

The name of the table in which a change occurred that this entry records.

EVENT_TYPE

CHAR

1

X

The type of event captured by this entry. For triggers, this is the change that occurred, which is 'I' for insert, 'U' for update, or 'D' for delete. Other events include: 'R' for reloading the entire table (or subset of the table) to the node; 'S' for running dynamic SQL at the node, which is used for adhoc administration.

ROW_DATA

LONGVARCHAR

The captured data change from the synchronized table. The column values are stored in comma-separated values (CSV) format.

PK_DATA

LONGVARCHAR

The primary key values of the captured data change from the synchronized table. This data is captured for updates and deletes. The primary key values are stored in comma-separated values (CSV) format.

OLD_DATA

LONGVARCHAR

The captured data values prior to the update. The column values are stored in CSV format.

TRIGGER_HIST_ID

INTEGER

X

The foreign key to the trigger_hist entry that contains the primary key and column names for the table being synchronized.

CHANNEL_ID

VARCHAR

128

The channel that this data belongs to, such as 'prices'

TRANSACTION_ID

VARCHAR

255

An optional transaction identifier that links multiple data changes together as the same transaction.

SOURCE_NODE_ID

VARCHAR

50

If the data was inserted by a SymmetricDS data loader, then the id of the source node is recorded so that data is not re-routed back to it.

EXTERNAL_DATA

VARCHAR

50

A field that can be populated by a trigger that uses the EXTERNAL_SELECT

NODE_LIST

VARCHAR

255

A field that can be populated with a comma separated subset of node ids which will be the only nodes available to the router

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

A.5. DATA_EVENT

Each row represents the mapping between a data change that was captured and the batch that contains it. Entries in data_event are created as part of the routing process.

Table 22. DATA_EVENT

Name

Type

Size

Default

Keys

Not Null

Description

DATA_ID

BIGINT

PK

X

Id of the data to be routed.

BATCH_ID

BIGINT

PK

X

Id of the batch containing the data.

ROUTER_ID

VARCHAR

50

PK

X

Id of the router that routed this data_event.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

A.6. DATA_GAP

Used only when routing.data.reader.type is set to 'gap.' Table that tracks gaps in the data table so that they may be processed efficiently, if data shows up. Gaps can show up in the data table if a database transaction is rolled back.

Table 23. DATA_GAP

Name

Type

Size

Default

Keys

Not Null

Description

START_ID

BIGINT

PK

X

The first missing data_id from the data table where a gap is detected. This could be the last data_id inserted plus one.

END_ID

BIGINT

PK

X

The last missing data_id from the data table where a gap is detected. If the start_id is the last data_id inserted plus one, then this field is filled in with a -1.

STATUS

CHAR

2

GP, SK, or FL. GP means there is a detected gap. FL means that the gap has been filled. SK means that the gap has been skipped either because the gap expired or because no database transaction was detected which means that no data will be committed to fill in the gap.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_HOSTNAME

VARCHAR

255

The host who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.7. EXTENSION

Dynamic extensions stored in the database that plug-in to the running engine and receive callbacks according to their interface.

Table 24. EXTENSION

Name

Type

Size

Default

Keys

Not Null

Description

EXTENSION_ID

VARCHAR

50

PK

X

The unique id of the extension.

EXTENSION_TYPE

VARCHAR

10

X

The type of the extension. Types are 'java' and 'bsh'

INTERFACE_NAME

VARCHAR

255

Name of interface, required for 'bsh' only.

NODE_GROUP_ID

VARCHAR

50

X

Target the extension at a specific node group id. To target all groups, use the value of 'ALL'.

ENABLED

TINYINT

1

1

X

Whether or not the extension is enabled.

EXTENSION_ORDER

INTEGER

1

X

Specifies the order in which to install extensions when multiple extensions implement the same interface.

EXTENSION_TEXT

LONGVARCHAR

The script or code of the extension.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.

A.8. EXTRACT_REQUEST

This table is used internally to request the extract of initial loads asynchronously when the initial load extract job is enabled.

Table 25. EXTRACT_REQUEST

Name

Type

Size

Default

Keys

Not Null

Description

REQUEST_ID

BIGINT

PK

X

Unique identifier for a request.

NODE_ID

VARCHAR

50

X

The node_id of the batch being loaded.

QUEUE

VARCHAR

128

The channel queue name of the batch being loaded.

STATUS

CHAR

2

NE, OK

START_BATCH_ID

BIGINT

X

A load can be split across multiple batches. This is the first of N batches the load will be split across.

END_BATCH_ID

BIGINT

X

This is the last of N batches the load will be split across.

TRIGGER_ID

VARCHAR

128

X

Unique identifier for a trigger associated with the extract request.

ROUTER_ID

VARCHAR

50

X

Unique description of the router associated with the extract request.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a process last updated this entry.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

A.9. FILE_INCOMING

As files are loaded from another node, the file and source node are captured here so that file sync can prevent ping backs in bidirectional file synchronization.

Table 26. FILE_INCOMING

Name

Type

Size

Default

Keys

Not Null

Description

RELATIVE_DIR

VARCHAR

255

PK

X

The path to the file starting at the base_dir and excluding the file name itself.

FILE_NAME

VARCHAR

128

PK

X

The name of the file that has been loaded.

LAST_EVENT_TYPE

CHAR

1

X

The type of event that caused the file to be loaded from another node. 'C' is for create, 'M' is for modified, and 'D' is for deleted.

NODE_ID

VARCHAR

50

X

The node_id of the source of the batch being loaded.

FILE_MODIFIED_TIME

BIGINT

The last modified time of the file at the time the file was loaded.

A.10. FILE_SNAPSHOT

Table used to capture file changes. Updates to the table are captured and routed according to the configured file trigger routers.

Table 27. FILE_SNAPSHOT

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_ID

VARCHAR

128

PK

X

The id of the trigger that caused this snapshot to be taken.

ROUTER_ID

VARCHAR

50

PK

X

The id of the router that caused this snapshot to be taken.

RELATIVE_DIR

VARCHAR

255

PK

X

The path to the file starting at the base_dir

FILE_NAME

VARCHAR

128

PK

X

The name of the file that changed.

CHANNEL_ID

VARCHAR

128

filesync

X

The channel_id of the channel that data changes will flow through.

RELOAD_CHANNEL_ID

VARCHAR

128

filesync_reload

X

The channel_id of the channel that will be used for reloads.

LAST_EVENT_TYPE

CHAR

1

X

The type of event captured by this entry. 'C' is for create, 'M' is for modified, and 'D' is for deleted.

CRC32_CHECKSUM

BIGINT

File checksum. Can be used to determine if file content has changed.

FILE_SIZE

BIGINT

The size in bytes of the file at the time this change was detected.

FILE_MODIFIED_TIME

BIGINT

The last modified time of the file at the time this change was detected.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

A.11. FILE_TRIGGER

This table defines files or sets of files for which changes will be captured for file synchronization

Table 28. FILE_TRIGGER

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_ID

VARCHAR

128

PK

X

Unique identifier for a trigger.

CHANNEL_ID

VARCHAR

128

filesync

X

The channel_id of the channel that data changes will flow through.

RELOAD_CHANNEL_ID

VARCHAR

128

filesync_reload

X

The channel_id of the channel that will be used for reloads.

BASE_DIR

VARCHAR

255

X

The base directory on the client that will be synchronized.

RECURSE

TINYINT

1

1

X

Whether to synchronize child directories.

INCLUDES_FILES

VARCHAR

255

Wildcard-enabled, comma-separated list of files to include in synchronization.

EXCLUDES_FILES

VARCHAR

255

Wildcard-enabled, comma-separated list of files to exclude from synchronization.

SYNC_ON_CREATE

TINYINT

1

1

X

Whether to capture and send files when they are created.

SYNC_ON_MODIFIED

TINYINT

1

1

X

Whether to capture and send files when they are modified.

SYNC_ON_DELETE

TINYINT

1

1

X

Whether to capture and remove files when they are deleted.

SYNC_ON_CTL_FILE

TINYINT

1

0

X

Combined with sync_on_create, determines whether to capture and send files when a matching control file exists. The control file is a file of the same name with a '.ctl' extension appended to the end.

DELETE_AFTER_SYNC

TINYINT

1

0

X

Determines whether to delete the file after it has synced successfully.

BEFORE_COPY_SCRIPT

LONGVARCHAR

A bsh script that is run right before the file copy.

AFTER_COPY_SCRIPT

LONGVARCHAR

A bsh script that is run right after the file copy.

CREATE_TIME

TIMESTAMP

X

Timestamp of when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp of when a user last updated this entry.
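
A minimal sketch of a file trigger, assuming the default sym_ prefix; the trigger id, base directory, and wildcard patterns are placeholders:

insert into sym_file_trigger
  (trigger_id, channel_id, reload_channel_id, base_dir, recurse,
   includes_files, sync_on_create, sync_on_modified, sync_on_delete,
   create_time, last_update_time)
values
  ('app_config_files', 'filesync', 'filesync_reload', '/opt/app/config', 1,
   '*.properties,*.xml', 1, 1, 1,
   current_timestamp, current_timestamp);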

A.12. FILE_TRIGGER_ROUTER

Maps a file trigger to a router.

Table 29. FILE_TRIGGER_ROUTER

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_ID

VARCHAR

128

PK FK

X

The id of a file trigger.

ROUTER_ID

VARCHAR

50

PK FK

X

The id of a router.

ENABLED

TINYINT

1

1

X

Indicates whether this file trigger router is enabled or not.

INITIAL_LOAD_ENABLED

TINYINT

1

1

X

Indicates whether this file trigger should be included in the initial load.

TARGET_BASE_DIR

VARCHAR

255

The base directory on the destination that files will be synchronized to.

CONFLICT_STRATEGY

VARCHAR

128

source_wins

X

The strategy to employ when a file has been modified at both the client and the server. Possible values are: source_wins, target_wins, manual

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.13. GROUPLET

This table defines named groups to which nodes can belong based on their external id. Grouplets are used to designate that synchronization should only affect an explicit subset of nodes in a node group.

Table 30. GROUPLET

Name

Type

Size

Default

Keys

Not Null

Description

GROUPLET_ID

VARCHAR

50

PK

X

Unique identifier for the grouplet.

GROUPLET_LINK_POLICY

CHAR

1

I

X

Specifies whether the external ids in grouplet_link are included in the grouplet or excluded from it. In the case of exclusion, the grouplet starts with all external ids and removes the excluded ones listed. Use 'I' for inclusive and 'E' for exclusive.

DESCRIPTION

VARCHAR

255

A description of this grouplet.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.14. GROUPLET_LINK

This table defines the nodes that belong to a grouplet based on their external id.

Table 31. GROUPLET_LINK

Name

Type

Size

Default

Keys

Not Null

Description

GROUPLET_ID

VARCHAR

50

PK FK

X

Unique identifier for the grouplet.

EXTERNAL_ID

VARCHAR

255

PK

X

Provides a means to select the nodes that belong to a grouplet.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.15. INCOMING_BATCH

The incoming_batch is used for tracking the status of loading an outgoing_batch from another node. Data is loaded and committed at the batch level. The status of the incoming_batch is either successful (OK) or error (ER).

Table 32. INCOMING_BATCH

Name

Type

Size

Default

Keys

Not Null

Description

BATCH_ID

BIGINT

50

PK

X

The id of the outgoing_batch that is being loaded.

NODE_ID

VARCHAR

50

PK

X

The node_id of the source of the batch being loaded.

CHANNEL_ID

VARCHAR

128

The channel_id of the batch being loaded.

STATUS

CHAR

2

The current status of the batch can be loading (LD), successfully loaded (OK), in error (ER) or skipped (SK)

ERROR_FLAG

TINYINT

1

0

A flag that indicates that this batch was in error during the last synchronization attempt.

NETWORK_MILLIS

BIGINT

0

X

The number of milliseconds spent transferring this batch across the network.

FILTER_MILLIS

BIGINT

0

X

The number of milliseconds spent in filters processing data.

DATABASE_MILLIS

BIGINT

0

X

The number of milliseconds spent loading the data into the target database.

FAILED_ROW_NUMBER

BIGINT

0

X

The number of the data event that failed, as read from the CSV.

FAILED_LINE_NUMBER

BIGINT

0

X

The current line number in the CSV for this batch that failed.

BYTE_COUNT

BIGINT

0

X

The number of bytes that were sent as part of this batch.

STATEMENT_COUNT

BIGINT

0

X

The number of statements run to load this batch.

FALLBACK_INSERT_COUNT

BIGINT

0

X

The number of times an update was turned into an insert because the data was not already in the target database.

FALLBACK_UPDATE_COUNT

BIGINT

0

X

The number of times an insert was turned into an update because a data row already existed in the target database.

IGNORE_COUNT

BIGINT

0

X

The number of times a batch was ignored.

IGNORE_ROW_COUNT

BIGINT

0

X

The number of times a row was ignored.

MISSING_DELETE_COUNT

BIGINT

0

X

The number of times a delete did not affect the database because the row was already deleted.

SKIP_COUNT

BIGINT

0

X

The number of times a batch was sent and skipped because it had already been loaded according to incoming_batch.

SQL_STATE

VARCHAR

10

For a status of error (ER), this is the XOPEN or SQL 99 SQL State.

SQL_CODE

INTEGER

0

X

For a status of error (ER), this is the error code from the database that is specific to the vendor.

SQL_MESSAGE

LONGVARCHAR

For a status of error (ER), this is the error message that describes the error.

LAST_UPDATE_HOSTNAME

VARCHAR

255

The host name of the process that last did work on this batch.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a process last updated this entry.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

SUMMARY

VARCHAR

255

A high level summary of what is included in a batch, often a list of table names.

A.16. INCOMING_ERROR

The captured data change that is in error for a batch. The user can tell the system what to do by updating the resolve columns. Entries in data_error are created when an incoming batch encounters an error.

Table 33. INCOMING_ERROR

Name

Type

Size

Default

Keys

Not Null

Description

BATCH_ID

BIGINT

50

PK

X

The id of the outgoing_batch that is being loaded.

NODE_ID

VARCHAR

50

PK

X

The node_id of the source of the batch being loaded. A node_id of -1 means that the batch was 'unrouted'.

FAILED_ROW_NUMBER

BIGINT

PK

X

The row number in the batch that encountered an error when loading.

FAILED_LINE_NUMBER

BIGINT

0

X

The current line number in the CSV for this batch that failed.

TARGET_CATALOG_NAME

VARCHAR

255

The catalog name for the table being loaded.

TARGET_SCHEMA_NAME

VARCHAR

255

The schema name for the table being loaded.

TARGET_TABLE_NAME

VARCHAR

255

X

The table name for the table being loaded.

EVENT_TYPE

CHAR

1

X

The type of event captured by this entry. For triggers, this is the change that occurred, which is 'I' for insert, 'U' for update, or 'D' for delete. Other events include: 'R' for reloading the entire table (or subset of the table) to the node; 'S' for running dynamic SQL at the node, which is used for adhoc administration.

BINARY_ENCODING

VARCHAR

10

HEX

X

The type of encoding the source system used for encoding binary data.

COLUMN_NAMES

LONGVARCHAR

X

The column names defined on the table. The column names are stored in comma-separated values (CSV) format.

PK_COLUMN_NAMES

LONGVARCHAR

X

The primary key column names defined on the table. The column names are stored in comma-separated values (CSV) format.

ROW_DATA

LONGVARCHAR

The row data from the batch as captured from the source. The column values are stored in comma-separated values (CSV) format.

OLD_DATA

LONGVARCHAR

The old row data prior to update from the batch as captured from the source. The column values are stored in CSV format.

CUR_DATA

LONGVARCHAR

The current row data that caused the error to occur. The column values are stored in CSV format.

RESOLVE_DATA

LONGVARCHAR

The captured data change from the user that is used instead of row_data. This is useful when resolving a conflict manually by specifying the data that should be loaded.

RESOLVE_IGNORE

TINYINT

1

0

Indication from the user that the row_data should be ignored and the batch can continue loading with the next row.

CONFLICT_ID

VARCHAR

50

Unique identifier for the conflict detection setting that caused the error

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.17. LOAD_FILTER

A table that allows you to dynamically define filters using bsh.

Table 34. LOAD_FILTER

Name

Type

Size

Default

Keys

Not Null

Description

LOAD_FILTER_ID

VARCHAR

50

PK

X

The id of the load filter.

LOAD_FILTER_TYPE

VARCHAR

10

X

The type of load filter. Possible values include: BSH, JAVA, SQL

SOURCE_NODE_GROUP_ID

VARCHAR

50

X

The source node group for the filter.

TARGET_NODE_GROUP_ID

VARCHAR

50

X

The destination node group for the filter.

TARGET_CATALOG_NAME

VARCHAR

255

Optional name for the catalog the configured table is in.

TARGET_SCHEMA_NAME

VARCHAR

255

Optional name for the schema a configured table is in.

TARGET_TABLE_NAME

VARCHAR

255

The name of the target table that will trigger the bsh filter.

FILTER_ON_UPDATE

TINYINT

1

1

X

Whether or not the filter should apply on an update.

FILTER_ON_INSERT

TINYINT

1

1

X

Whether or not the filter should apply on an insert.

FILTER_ON_DELETE

TINYINT

1

1

X

Whether or not the filter should apply on a delete.

BEFORE_WRITE_SCRIPT

LONGVARCHAR

The script to apply before the write is completed.

AFTER_WRITE_SCRIPT

LONGVARCHAR

The script to apply after the write is completed.

BATCH_COMPLETE_SCRIPT

LONGVARCHAR

The script to apply on batch complete.

BATCH_COMMIT_SCRIPT

LONGVARCHAR

The script to apply on batch commit.

BATCH_ROLLBACK_SCRIPT

LONGVARCHAR

The script to apply on batch rollback.

HANDLE_ERROR_SCRIPT

LONGVARCHAR

The script to apply when data cannot be processed.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

LOAD_FILTER_ORDER

INTEGER

1

X

Specifies the order in which to apply load filters if more than one target operation occurs.

FAIL_ON_ERROR

TINYINT

1

0

X

Whether we should fail the batch if the filter fails.
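
A minimal sketch of a BSH load filter, assuming the default sym_ prefix; the filter id, node group ids, table name, and script body are placeholders, and the variables available to the script depend on the load filter context (the script shown simply allows the row to be written):

insert into sym_load_filter
  (load_filter_id, load_filter_type, source_node_group_id, target_node_group_id,
   target_table_name, filter_on_insert, filter_on_update, filter_on_delete,
   before_write_script, load_filter_order, fail_on_error,
   create_time, last_update_time)
values
  ('item_before_write', 'BSH', 'corp', 'store',
   'item', 1, 1, 0,
   'return true;', 1, 0,
   current_timestamp, current_timestamp);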

A.18. LOCK

Contains semaphores that are set when processes run, so that only one server can run a process at a time. Enable this feature by using the cluster.lock.during.xxxx parameters.

Table 35. LOCK

Name

Type

Size

Default

Keys

Not Null

Description

LOCK_ACTION

VARCHAR

50

PK

X

The process that needs a lock.

LOCK_TYPE

VARCHAR

50

X

Type of lock that indicates different locking behavior. Types include cluster, exclusive, and shared. A cluster lock is used to allow one server to run at a time, but any process from the same server can overtake the lock, which avoids stalled processing. An exclusive lock is owned by one process, regardless of which server it is on, but another process can acquire the lock after lock_time is older than exclusive.lock.timeout.ms. A shared lock allows multiple processes to use the same lock, incrementing the shared_count, but it requires that no exclusive lock exists and it prevents an exclusive lock from being acquired.

LOCKING_SERVER_ID

VARCHAR

255

The name of the server that currently has a lock. This is typically a host name, but it can be overridden using the -Druntime.symmetric.cluster.server.id=name System property.

LOCK_TIME

TIMESTAMP

The time a lock is acquired. Use the cluster.lock.timeout.ms parameter to specify a lock timeout period.

SHARED_COUNT

INTEGER

0

X

For a lock_type of SHARED, this is the number of processes sharing the same lock. After the shared_count drops to zero, a shared lock is removed.

SHARED_ENABLE

INTEGER

0

X

For a lock_type of SHARED, this flag set to 1 indicates that more processes can share the lock. If an exclusive lock is needed, the flag is set to 0 to prevent further shared locks from accumulating.

LAST_LOCK_TIME

TIMESTAMP

Timestamp when a process last updated this entry.

LAST_LOCKING_SERVER_ID

VARCHAR

255

The server id of the process that last did work on this batch.

A.19. NODE

Representation of an instance of SymmetricDS that synchronizes data with one or more additional nodes. Each node has a unique identifier (nodeId) that is used when communicating, as well as a domain-specific identifier (externalId) that provides context within the local system.

Table 36. NODE

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

A unique identifier for a node.

NODE_GROUP_ID

VARCHAR

50

X

The node group that this node belongs to, such as 'store'.

EXTERNAL_ID

VARCHAR

255

X

A domain-specific identifier for context within the local system. For example, the retail store number.

SYNC_ENABLED

TINYINT

1

0

Indicates whether this node should be sent synchronization. Disabled nodes are ignored by the triggers, so no entries are made in data_event for the node.

SYNC_URL

VARCHAR

255

The URL to contact the node for synchronization.

SCHEMA_VERSION

VARCHAR

50

The version of the database schema this node manages. Useful for specifying synchronization by version.

SYMMETRIC_VERSION

VARCHAR

50

The version of SymmetricDS running at this node.

CONFIG_VERSION

VARCHAR

50

The version of configuration running at this node.

DATABASE_TYPE

VARCHAR

50

The database product name at this node as reported by JDBC.

DATABASE_VERSION

VARCHAR

50

The database product version at this node as reported by JDBC.

HEARTBEAT_TIME

TIMESTAMP

Deprecated. Use node_host.heartbeat_time instead.

TIMEZONE_OFFSET

VARCHAR

6

Deprecated. Use node_host.timezone_offset instead.

BATCH_TO_SEND_COUNT

INTEGER

0

The number of outgoing batches that have not yet been sent. This field is updated as part of the heartbeat job if the heartbeat.update.node.with.batch.status property is set to true.

BATCH_IN_ERROR_COUNT

INTEGER

0

The number of outgoing batches that are in error at this node. This field is updated as part of the heartbeat job if the heartbeat.update.node.with.batch.status property is set to true.

CREATED_AT_NODE_ID

VARCHAR

50

The node_id of the node where this node was created. This is typically filled automatically with the node_id found in node_identity where registration was opened for the node.

DEPLOYMENT_TYPE

VARCHAR

50

An indicator as to the type of SymmetricDS software that is running. Possible values are, but not limited to: engine, standalone, war, professional, mobile

A.20. NODE_COMMUNICATION

This table is used to coordinate communication with other nodes.

Table 37. NODE_COMMUNICATION

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

Unique identifier for a node.

QUEUE

VARCHAR

25

PK

X

The queue name to use in relation to the channel.

COMMUNICATION_TYPE

VARCHAR

10

PK

X

The type of communication that is taking place with this node. Valid values are: PULL, PUSH

LOCK_TIME

TIMESTAMP

The timestamp when this node was locked

LOCKING_SERVER_ID

VARCHAR

255

The name of the server that currently has a pull lock for the node. This is typically a host name, but it can be overridden using the -Druntime.symmetric.cluster.server.id=name System property.

LAST_LOCK_TIME

TIMESTAMP

The timestamp when this node was last locked

LAST_LOCK_MILLIS

BIGINT

0

The amount of time the last communication took.

SUCCESS_COUNT

BIGINT

0

The number of successive successful communication attempts.

FAIL_COUNT

BIGINT

0

The number of successive failed communication attempts.

SKIP_COUNT

BIGINT

0

The number of skipped communication attempts.

TOTAL_SUCCESS_COUNT

BIGINT

0

The total number of successful communication attempts with the node.

TOTAL_FAIL_COUNT

BIGINT

0

The total number of failed communication attempts with the node.

TOTAL_SUCCESS_MILLIS

BIGINT

0

The total amount of time spent during successful communication attempts with the node.

TOTAL_FAIL_MILLIS

BIGINT

0

The total amount of time spent during failed communication attempts with the node.

BATCH_TO_SEND_COUNT

BIGINT

0

The number of batches this node has queued for pull.

NODE_PRIORITY

INTEGER

0

Used to order nodes when initiating a pull operation. Can be used to move a node to the top of the list to pull from it as quickly as possible.

A.21. NODE_CHANNEL_CTL

Used to ignore or suspend a channel. A channel that is ignored will have its data_events batched and they will immediately be marked as 'OK' without sending them. A channel that is suspended is skipped when batching data_events.

Table 38. NODE_CHANNEL_CTL

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

Unique identifier for a node.

CHANNEL_ID

VARCHAR

128

PK

X

The name of the channel_id that is being controlled.

SUSPEND_ENABLED

TINYINT

1

0

Indicates if this channel is suspended, which prevents batches from being sent, although new batches can still be created.

IGNORE_ENABLED

TINYINT

1

0

Indicates if this channel is ignored, which marks batches with a status of OK as if they were actually processed.

LAST_EXTRACT_TIME

TIMESTAMP

Records the last time data was extracted for a node and channel.
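
For example, to suspend the hypothetical 'sales' channel for a single node (a sketch assuming the default sym_ prefix and a placeholder node id):

insert into sym_node_channel_ctl
  (node_id, channel_id, suspend_enabled, ignore_enabled)
values
  ('store-001', 'sales', 1, 0);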

A.22. NODE_GROUP

A category of Nodes that synchronizes data with one or more NodeGroups. A common use of NodeGroup is to describe a level in a hierarchy of data synchronization.

Table 39. NODE_GROUP

Name

Type

Size

Default

Keys

Not Null

Description

NODE_GROUP_ID

VARCHAR

50

PK

X

Unique identifier for a node group, usually named something meaningful, like 'store' or 'warehouse'.

DESCRIPTION

VARCHAR

255

A description of this node group.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
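
A minimal sketch, assuming the default sym_ prefix; the group ids and descriptions are placeholders:

insert into sym_node_group (node_group_id, description)
values ('corp', 'Central office nodes');

insert into sym_node_group (node_group_id, description)
values ('store', 'Retail store nodes');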

A.23. NODE_GROUP_CHANNEL_WND

An optional window of time for which a node group and channel will extract and send data.

Table 40. NODE_GROUP_CHANNEL_WND

Name

Type

Size

Default

Keys

Not Null

Description

NODE_GROUP_ID

VARCHAR

50

PK

X

The node_group_id that this window applies to.

CHANNEL_ID

VARCHAR

128

PK

X

The channel_id that this window applies to.

START_TIME

TIMESTAMP

PK

X

The start time for the active window.

END_TIME

TIMESTAMP

PK

X

The end time for the active window. Note that if the end_time is less than the start_time then the window crosses a day boundary.

ENABLED

TINYINT

1

0

X

Enable this window. If this is set to '0' then this window is ignored.
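
A sketch of restricting the hypothetical 'sales' channel for the 'store' group to an overnight window, assuming the default sym_ prefix; the exact literal format accepted for the time-of-day values depends on the database:

insert into sym_node_group_channel_wnd
  (node_group_id, channel_id, start_time, end_time, enabled)
values
  ('store', 'sales', '01:00:00', '05:00:00', 1);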

A.24. NODE_GROUP_LINK

A source node group sends its data updates to a target node group using a pull, push, or custom technique.

Table 41. NODE_GROUP_LINK

Name

Type

Size

Default

Keys

Not Null

Description

SOURCE_NODE_GROUP_ID

VARCHAR

50

PK FK

X

The node group where data changes should be captured.

TARGET_NODE_GROUP_ID

VARCHAR

50

PK FK

X

The node group where data changes will be sent.

DATA_EVENT_ACTION

CHAR

1

W

X

The notification scheme used to send data changes to the target node group. (P = Push, W = Wait for Pull, B = Both Push and Wait for Pull (control from channel), R = Route-Only)

SYNC_CONFIG_ENABLED

TINYINT

1

1

X

Indicates whether configuration that has changed should be synchronized to target nodes on this link.

IS_REVERSIBLE

TINYINT

1

0

X

Indicates if communication can work in reverse as specified on the channel. A reversible push link can be overridden to pull, and a reversible pull link can be overridden to push on the channel.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
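
A sketch of a common two-tier setup, assuming the default sym_ prefix and placeholder group ids: the 'corp' group waits for 'store' nodes to pull, while 'store' nodes push their changes up:

insert into sym_node_group_link
  (source_node_group_id, target_node_group_id, data_event_action)
values ('corp', 'store', 'W');

insert into sym_node_group_link
  (source_node_group_id, target_node_group_id, data_event_action)
values ('store', 'corp', 'P');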

A.25. NODE_HOST

Representation of a physical workstation or server that is hosting the SymmetricDS software. In a clustered environment there may be more than one entry per node in this table.

Table 42. NODE_HOST

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

A unique identifier for a node.

HOST_NAME

VARCHAR

60

PK

X

The host name of a workstation or server. If more than one instance of SymmetricDS runs on the same server, then this value can be a 'server id' specified by -Druntime.symmetric.cluster.server.id

IP_ADDRESS

VARCHAR

50

The ip address for the host.

OS_USER

VARCHAR

50

The user SymmetricDS is running under

OS_NAME

VARCHAR

50

The name of the OS

OS_ARCH

VARCHAR

50

The hardware architecture of the OS

OS_VERSION

VARCHAR

50

The version of the OS

AVAILABLE_PROCESSORS

INTEGER

0

The number of processors available to use.

FREE_MEMORY_BYTES

BIGINT

0

The amount of free memory available to the JVM.

TOTAL_MEMORY_BYTES

BIGINT

0

The amount of total memory available to the JVM.

MAX_MEMORY_BYTES

BIGINT

0

The max amount of memory available to the JVM.

JAVA_VERSION

VARCHAR

50

The version of java that SymmetricDS is running as.

JAVA_VENDOR

VARCHAR

255

The vendor of java that SymmetricDS is running as.

JDBC_VERSION

VARCHAR

255

The version of the JDBC driver that is being used.

SYMMETRIC_VERSION

VARCHAR

50

The version of SymmetricDS running at this node.

TIMEZONE_OFFSET

VARCHAR

6

The time zone offset in RFC822 format at the time of the last heartbeat.

HEARTBEAT_TIME

TIMESTAMP

The last timestamp when the node sent a heartbeat, which is attempted every ten minutes by default.

LAST_RESTART_TIME

TIMESTAMP

X

Timestamp when this instance was last restarted.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

A.26. NODE_HOST_CHANNEL_STATS

Table 43. NODE_HOST_CHANNEL_STATS

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

A unique identifier for a node.

HOST_NAME

VARCHAR

60

PK

X

The host name of a workstation or server. If more than one instance of SymmetricDS runs on the same server, then this value can be a 'server id' specified by -Druntime.symmetric.cluster.server.id

CHANNEL_ID

VARCHAR

128

PK

X

The channel_id of the channel that data changes will flow through.

START_TIME

TIMESTAMP

PK

X

The start time for the period which this row represents.

END_TIME

TIMESTAMP

PK

X

The end time for the period which this row represents.

DATA_ROUTED

BIGINT

0

Indicate the number of data rows that have been routed during this period.

DATA_UNROUTED

BIGINT

0

The amount of data that has not yet been routed at the time this stats row was recorded.

DATA_EVENT_INSERTED

BIGINT

0

The number of data_event rows that were inserted during this period.

DATA_EXTRACTED

BIGINT

0

The number of data rows that were extracted during this time period.

DATA_BYTES_EXTRACTED

BIGINT

0

The number of bytes that were extracted during this time period.

DATA_EXTRACTED_ERRORS

BIGINT

0

The number of errors that occurred during extraction during this time period.

DATA_BYTES_SENT

BIGINT

0

The number of bytes that were sent during this time period.

DATA_SENT

BIGINT

0

The number of rows that were sent during this time period.

DATA_SENT_ERRORS

BIGINT

0

The number of errors that occurred while sending during this time period.

DATA_LOADED

BIGINT

0

The number of rows that were loaded during this time period.

DATA_BYTES_LOADED

BIGINT

0

The number of bytes that were loaded during this time period.

DATA_LOADED_ERRORS

BIGINT

0

The number of errors that occurred while loading during this time period.

A.27. NODE_HOST_JOB_STATS

Table 44. NODE_HOST_JOB_STATS

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

A unique identifier for a node.

HOST_NAME

VARCHAR

60

PK

X

The host name of a workstation or server. If more than one instance of SymmetricDS runs on the same server, then this value can be a 'server id' specified by -Druntime.symmetric.cluster.server.id

JOB_NAME

VARCHAR

50

PK

X

The name of the job.

START_TIME

TIMESTAMP

PK

X

The start time for the period which this row represents.

END_TIME

TIMESTAMP

PK

X

The end time for the period which this row represents.

PROCESSED_COUNT

BIGINT

0

The number of items that were processed during the job run.

A.28. NODE_HOST_STATS

Table 45. NODE_HOST_STATS

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK

X

A unique identifier for a node.

HOST_NAME

VARCHAR

60

PK

X

The host name of a workstation or server. If more than one instance of SymmetricDS runs on the same server, then this value can be a 'server id' specified by -Druntime.symmetric.cluster.server.id

START_TIME

TIMESTAMP

PK

X

The start time for the period which this row represents.

END_TIME

TIMESTAMP

PK

X

The end time for the period which this row represents.

RESTARTED

BIGINT

0

X

Indicate that a restart occurred during this period.

NODES_PULLED

BIGINT

0

TOTAL_NODES_PULL_TIME

BIGINT

0

NODES_PUSHED

BIGINT

0

TOTAL_NODES_PUSH_TIME

BIGINT

0

NODES_REJECTED

BIGINT

0

NODES_REGISTERED

BIGINT

0

NODES_LOADED

BIGINT

0

NODES_DISABLED

BIGINT

0

PURGED_DATA_ROWS

BIGINT

0

PURGED_DATA_EVENT_ROWS

BIGINT

0

PURGED_BATCH_OUTGOING_ROWS

BIGINT

0

PURGED_BATCH_INCOMING_ROWS

BIGINT

0

TRIGGERS_CREATED_COUNT

BIGINT

TRIGGERS_REBUILT_COUNT

BIGINT

TRIGGERS_REMOVED_COUNT

BIGINT

A.29. NODE_IDENTITY

After registration, this table will have one row representing the identity of the node. For a root node, the row is entered by the user.

Table 46. NODE_IDENTITY

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK FK

X

Unique identifier for a node.

A.30. NODE_SECURITY

Security features like node passwords and open registration flag are stored in the node_security table.

Table 47. NODE_SECURITY

Name

Type

Size

Default

Keys

Not Null

Description

NODE_ID

VARCHAR

50

PK FK

X

Unique identifier for a node.

NODE_PASSWORD

VARCHAR

50

X

The password used by the node to prove its identity during synchronization.

REGISTRATION_ENABLED

TINYINT

1

0

Indicates whether registration is open for this node. Re-registration may be forced for a node if this is set back to '1' in a parent database for the node_id that should be re-registered.

REGISTRATION_TIME

TIMESTAMP

The timestamp when this node was last registered.

INITIAL_LOAD_ENABLED

TINYINT

1

0

Indicates whether an initial load will be sent to this node.

INITIAL_LOAD_TIME

TIMESTAMP

The timestamp when an initial load was started for this node.

INITIAL_LOAD_ID

BIGINT

A reference to the load_id in outgoing_batch for the last load that occurred.

INITIAL_LOAD_CREATE_BY

VARCHAR

255

The user that created the initial load. A null value means that the system created the batch.

REV_INITIAL_LOAD_ENABLED

TINYINT

1

0

Indicates that this node should send a reverse initial load.

REV_INITIAL_LOAD_TIME

TIMESTAMP

The timestamp when this node last sent an initial load.

REV_INITIAL_LOAD_ID

BIGINT

A reference to the load_id in outgoing_batch for the last reverse load that occurred.

REV_INITIAL_LOAD_CREATE_BY

VARCHAR

255

The user that created the reverse initial load. A null value means that the system created the batch.

CREATED_AT_NODE_ID

VARCHAR

50

The node_id of the node where this node was created. This is typically filled automatically with the node_id found in node_identity where registration was opened for the node.
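
For example, to force a node to re-register and receive a new initial load, the flags can be reset in the parent database (a sketch assuming the default sym_ prefix and a placeholder node id):

update sym_node_security
   set registration_enabled = 1,
       initial_load_enabled = 1
 where node_id = 'store-001';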

A.31. MONITOR

Defines monitors that will run periodically to look for problems in the system.

Table 48. MONITOR

Name

Type

Size

Default

Keys

Not Null

Description

MONITOR_ID

VARCHAR

128

PK

X

Unique identifier for a monitor.

NODE_GROUP_ID

VARCHAR

50

ALL

X

Target a specific node group to run this monitor. To target all groups, use a value of 'ALL'.

EXTERNAL_ID

VARCHAR

255

ALL

X

Target a specific node by its external ID to run this monitor. To target all nodes, use a value of 'ALL'.

TYPE

VARCHAR

50

X

Monitor type to execute. Built-in types are cpu, disk, memory, batchError, batchUnsent, dataGap, and dataUnrouted.

EXPRESSION

LONGVARCHAR

An expression used by the monitor to set options specific to the monitor type.

THRESHOLD

BIGINT

0

X

The minimum value returned when the monitor runs that will cause a monitor event to be recorded.

RUN_PERIOD

INTEGER

0

X

Run this monitor periodically every number of seconds.

RUN_COUNT

INTEGER

0

X

Average the value across a number of runs before checking threshold.

SEVERITY_LEVEL

INTEGER

0

X

The severity level recorded on monitor events created by this monitor.

ENABLED

TINYINT

1

0

X

Whether or not this monitor is enabled for execution.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
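
A sketch of a monitor that checks for batches in error every five minutes, assuming the default sym_ prefix; the monitor id and severity level value are placeholders:

insert into sym_monitor
  (monitor_id, node_group_id, external_id, type, threshold,
   run_period, run_count, severity_level, enabled,
   create_time, last_update_time)
values
  ('batch_error_check', 'ALL', 'ALL', 'batchError', 1,
   300, 1, 100, 1,
   current_timestamp, current_timestamp);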

A.32. MONITOR_EVENT

Records an event of when a system problem occurred.

Table 49. MONITOR_EVENT

Name

Type

Size

Default

Keys

Not Null

Description

MONITOR_ID

VARCHAR

128

PK

X

Unique identifier for a monitor that caused the event.

NODE_ID

VARCHAR

50

PK

X

Unique identifier for the node that created the event.

EVENT_TIME

TIMESTAMP

PK

X

Timestamp when the event was created.

HOST_NAME

VARCHAR

60

Host name of the node that created the event.

TYPE

VARCHAR

50

X

Monitor type that detected the value recorded.

THRESHOLD

BIGINT

0

X

Minimum value for the monitor to cause an event.

EVENT_VALUE

BIGINT

0

X

Actual value detected by the monitor.

EVENT_COUNT

INTEGER

0

X

Number of times this event has occurred and been updated.

SEVERITY_LEVEL

INTEGER

0

X

Severity level configured for the monitor.

IS_RESOLVED

TINYINT

0

X

Whether an event is resolved because its value dropped below the threshold.

IS_NOTIFIED

TINYINT

0

X

Whether a notification was run.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when the event was last updated.

A.33. NOTIFICATION

Defines what notification to send when a monitor detects a problem in the system.

Table 50. NOTIFICATION

Name

Type

Size

Default

Keys

Not Null

Description

NOTIFICATION_ID

VARCHAR

128

PK

X

Unique identifier for a notification.

NODE_GROUP_ID

VARCHAR

50

ALL

X

Target a specific node group to run this notification. To target all groups, use a value of 'ALL'.

EXTERNAL_ID

VARCHAR

255

ALL

X

Target a specific node by its external ID to run this notification. To target all nodes, use a value of 'ALL'.

SEVERITY_LEVEL

INTEGER

0

X

Look for monitor events using this severity level or above. To match all severity levels, use a value of 0.

TYPE

VARCHAR

50

X

Notification type that will send a message. Built-in types are mail and log.

EXPRESSION

LONGVARCHAR

An expression used by the notification to set options specific to the notification type.

ENABLED

TINYINT

1

0

X

Whether or not this notification is enabled.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
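
A sketch of a notification that writes monitor events of any severity to the log, assuming the default sym_ prefix; the notification id is a placeholder:

insert into sym_notification
  (notification_id, node_group_id, external_id, severity_level,
   type, enabled, create_time, last_update_time)
values
  ('log_all_events', 'ALL', 'ALL', 0,
   'log', 1, current_timestamp, current_timestamp);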

A.34. OUTGOING_BATCH

Used for tracking the sending of a collection of data to a node in the system. A new outgoing_batch is created and given a status of 'NE'. After sending the outgoing_batch to its target node, the status becomes 'SE'. The node responds with either a success status of 'OK' or an error status of 'ER'. An error while sending to the node also results in an error status of 'ER', regardless of whether the node sends that acknowledgement.

Table 51. OUTGOING_BATCH

Name

Type

Size

Default

Keys

Not Null

Description

BATCH_ID

BIGINT

PK

X

A unique id for the batch.

NODE_ID

VARCHAR

50

PK

X

The node that this batch is targeted at.

CHANNEL_ID

VARCHAR

128

The channel that this batch is part of.

STATUS

CHAR

2

The current status of a batch can be routing (RT), requested to be extracted in the background (RQ), newly created and ready for replication (NE), being queried from the database (QY), sent to a Node (SE), ready to be loaded (LD) and acknowledged as successful (OK), ignored (IG) or in error (ER).

LOAD_ID

BIGINT

An id that ties multiple batches together to identify them as being part of an initial load.

EXTRACT_JOB_FLAG

TINYINT

1

0

A flag that indicates that this batch is going to be extracted by another job.

LOAD_FLAG

TINYINT

1

0

A flag that indicates that this batch is part of an initial load.

ERROR_FLAG

TINYINT

1

0

A flag that indicates that this batch was in error during the last synchronization attempt.

COMMON_FLAG

TINYINT

1

0

A flag that indicates that the data in this batch is shared by other nodes (they will have the same batch_id). Shared batches will be extracted to a common location.

IGNORE_COUNT

BIGINT

0

X

The number of times a batch was ignored.

BYTE_COUNT

BIGINT

0

X

The number of bytes that were sent as part of this batch.

EXTRACT_COUNT

BIGINT

0

X

The number of times an attempt to extract this batch occurred.

SENT_COUNT

BIGINT

0

X

The number of times this batch was sent. A batch can be sent multiple times if an ACK is not received.

LOAD_COUNT

BIGINT

0

X

The number of times an attempt to load this batch occurred.

DATA_EVENT_COUNT

BIGINT

0

X

The number of data_events that are part of this batch.

RELOAD_EVENT_COUNT

BIGINT

0

X

The number of reload events that are part of this batch.

INSERT_EVENT_COUNT

BIGINT

0

X

The number of insert events that are part of this batch.

UPDATE_EVENT_COUNT

BIGINT

0

X

The number of update events that are part of this batch.

DELETE_EVENT_COUNT

BIGINT

0

X

The number of delete events that are part of this batch.

OTHER_EVENT_COUNT

BIGINT

0

X

The number of other event types that are part of this batch. This includes any events types that are not a reload, insert, update or delete event type.

ROUTER_MILLIS

BIGINT

0

X

The number of milliseconds spent creating this batch.

NETWORK_MILLIS

BIGINT

0

X

The number of milliseconds spent transferring this batch across the network.

FILTER_MILLIS

BIGINT

0

X

The number of milliseconds spent in filters processing data.

LOAD_MILLIS

BIGINT

0

X

The number of milliseconds spent loading the data into the target database.

EXTRACT_MILLIS

BIGINT

0

X

The number of milliseconds spent extracting the data out of the source database.

TRANSFORM_EXTRACT_MILLIS

BIGINT

0

X

Not implemented. The number of milliseconds spent transforming the data on the extract side.

TRANSFORM_LOAD_MILLIS

BIGINT

0

X

Not implemented. The number of milliseconds spent transforming the data on the load side.

TOTAL_EXTRACT_MILLIS

BIGINT

0

X

Not implemented. The total number of milliseconds spent processing a batch on the extract side.

TOTAL_LOAD_MILLIS

BIGINT

0

X

Not implemented. The total number of milliseconds spent processing a batch on the load side.

SQL_STATE

VARCHAR

10

For a status of error (ER), this is the XOPEN or SQL 99 SQL State.

SQL_CODE

INTEGER

0

X

For a status of error (ER), this is the error code from the database that is specific to the vendor.

SQL_MESSAGE

LONGVARCHAR

For a status of error (ER), this is the error message that describes the error.

FAILED_DATA_ID

BIGINT

0

X

For a status of error (ER), this is the data_id that was being processed when the batch failed.

FAILED_LINE_NUMBER

BIGINT

0

X

The current line number in the CSV for this batch that failed.

LAST_UPDATE_HOSTNAME

VARCHAR

255

The host name of the process that last did work on this batch.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a process last updated this entry.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

CREATE_BY

VARCHAR

255

The user that created the batch. A null value means that the system created the batch.

SUMMARY

VARCHAR

255

A high level summary of what is included in a batch, often a list of table names.

A.35. PARAMETER

Provides a way to manage most SymmetricDS settings in the database.

Table 52. PARAMETER

Name

Type

Size

Default

Keys

Not Null

Description

EXTERNAL_ID

VARCHAR

255

PK

X

Target the parameter at a specific external id. To target all nodes, use the value of 'ALL.'

NODE_GROUP_ID

VARCHAR

50

PK

X

Target the parameter at a specific node group id. To target all groups, use the value of 'ALL.'

PARAM_KEY

VARCHAR

80

PK

X

The name of the parameter.

PARAM_VALUE

LONGVARCHAR

The value of the parameter.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.
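
A sketch of overriding a parameter for all nodes, assuming the default sym_ prefix; the parameter key and value shown are only an example:

insert into sym_parameter
  (external_id, node_group_id, param_key, param_value)
values
  ('ALL', 'ALL', 'purge.retention.minutes', '10080');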

A.36. REGISTRATION_REDIRECT

Provides a way for a centralized registration server to redirect registering nodes to their prospective parent node in a multi-tiered deployment.

Table 53. REGISTRATION_REDIRECT

Name

Type

Size

Default

Keys

Not Null

Description

REGISTRANT_EXTERNAL_ID

VARCHAR

255

PK

X

Maps the external id of a registration request to a different parent node.

REGISTRATION_NODE_ID

VARCHAR

50

X

The node_id of the node that a registration request should be redirected to.

A.37. REGISTRATION_REQUEST

Audits when a node registers or attempts to register.

Table 54. REGISTRATION_REQUEST

Name

Type

Size

Default

Keys

Not Null

Description

NODE_GROUP_ID

VARCHAR

50

PK

X

The node group that this node belongs to, such as 'store'.

EXTERNAL_ID

VARCHAR

255

PK

X

A domain-specific identifier for context within the local system. For example, the retail store number.

STATUS

CHAR

2

X

The current status of the registration attempt. Valid statuses are NR (not registered), IG (ignored), OK (successful)

HOST_NAME

VARCHAR

60

X

The host name of a workstation or server. If more than one instance of SymmetricDS runs on the same server, then this value can be a 'server id' specified by -Druntime.symmetric.cluster.server.id

IP_ADDRESS

VARCHAR

50

X

The ip address for the host.

ATTEMPT_COUNT

INTEGER

0

The number of registration attempts.

REGISTERED_NODE_ID

VARCHAR

50

A unique identifier for a node.

ERROR_MESSAGE

LONGVARCHAR

Record any errors or warnings that occurred when attempting to register.

CREATE_TIME

TIMESTAMP

PK

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.38. ROUTER

Configure a type of router from one node group to another. Note that routers are mapped to triggers through trigger_routers.

Table 55. ROUTER

Name

Type

Size

Default

Keys

Not Null

Description

ROUTER_ID

VARCHAR

50

PK

X

Unique description of a specific router

TARGET_CATALOG_NAME

VARCHAR

255

Optional name of catalog where a target table is located. If this field is unspecified, the catalog will be either the default catalog at the target node or the source_catalog_name from the trigger, depending on how use_source_catalog_schema is set on the router. Variables are substituted for $(sourceNodeId), $(sourceExternalId), $(sourceNodeGroupId), $(targetNodeId), $(targetExternalId), $(targetNodeGroupId), and $(none).

TARGET_SCHEMA_NAME

VARCHAR

255

Optional name of schema where a target table is located. If this field is unspecified, the schema will be either the default schema at the target node or the source_schema_name from the trigger, depending on how use_source_catalog_schema is set on the router. Variables are substituted for $(sourceNodeId), $(sourceExternalId), $(sourceNodeGroupId), $(targetNodeId), $(targetExternalId), $(targetNodeGroupId), and $(none).

TARGET_TABLE_NAME

VARCHAR

255

Optional name for a target table. Only use this if the target table name is different than the source.

SOURCE_NODE_GROUP_ID

VARCHAR

50

FK

X

Routers with this node_group_id will install triggers that are mapped to this router.

TARGET_NODE_GROUP_ID

VARCHAR

50

FK

X

The node_group_id for nodes to route data to. Note that routing can be further narrowed down by the configured router_type and router_expression.

ROUTER_TYPE

VARCHAR

50

The name of a specific type of router. Out of the box routers are 'default', 'column', 'bsh', 'subselect', and 'audit'. Custom routers can be configured as extension points.

ROUTER_EXPRESSION

LONGVARCHAR

An expression that is specific to the type of router that is configured in router_type. See the documentation for each router for more details.

SYNC_ON_UPDATE

TINYINT

1

1

X

Flag that indicates that this router should route updates.

SYNC_ON_INSERT

TINYINT

1

1

X

Flag that indicates that this router should route inserts.

SYNC_ON_DELETE

TINYINT

1

1

X

Flag that indicates that this router should route deletes.

USE_SOURCE_CATALOG_SCHEMA

TINYINT

1

1

X

Whether or not to assume that the target catalog/schema name should be the same as the source catalog/schema name. The target catalog or schema name will still override if not blank.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.
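
A sketch of a column-match router that sends each row only to the store whose external id matches the row's STORE_ID column, assuming the default sym_ prefix; the router id, group ids, and column name are placeholders:

insert into sym_router
  (router_id, source_node_group_id, target_node_group_id,
   router_type, router_expression, create_time, last_update_time)
values
  ('corp_2_store_by_store_id', 'corp', 'store',
   'column', 'STORE_ID=:EXTERNAL_ID', current_timestamp, current_timestamp);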

A.39. SEQUENCE

A table that supports application level sequence numbering.

Table 56. SEQUENCE

Name

Type

Size

Default

Keys

Not Null

Description

SEQUENCE_NAME

VARCHAR

50

PK

X

Unique identifier of a specific sequence.

CURRENT_VALUE

BIGINT

0

X

The current value of the sequence.

INCREMENT_BY

INTEGER

1

X

Specify the interval between sequence numbers. This integer value can be any positive or negative integer, but it cannot be 0.

MIN_VALUE

BIGINT

1

X

Specify the minimum value of the sequence.

MAX_VALUE

BIGINT

9999999999

X

Specify the maximum value the sequence can generate.

CYCLE

TINYINT

1

0

Indicate whether the sequence should automatically cycle once a boundary is hit.

CACHE_SIZE

INTEGER

0

X

Specify the number of sequence numbers to acquire and cache when one is requested.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.40. TABLE_RELOAD_REQUEST

This table acts as a means to queue up a reload of a specific table. Either the target or the source node may insert into this table to queue up a load. If the target node inserts into the table, then the row will be synchronized to the source node and the reload events will be queued up during routing.
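
As a sketch of how a load might be queued, the following insert requests a reload of a single table. It assumes the default 'sym' table prefix; the node ids, trigger id, and router id are hypothetical and must match an existing trigger and router pair.

insert into sym_table_reload_request
  (target_node_id, source_node_id, trigger_id, router_id,
   create_time, last_update_time)
values
  ('store-001', 'corp-000', 'item', 'corp_to_store',
   current_timestamp, current_timestamp);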

Table 57. TABLE_RELOAD_REQUEST

Name

Type

Size

Default

Keys

Not Null

Description

TARGET_NODE_ID

VARCHAR

50

PK

X

Unique identifier for the node to receive the table reload.

SOURCE_NODE_ID

VARCHAR

50

PK

X

Unique identifier for the node that will be the source of the table reload.

TRIGGER_ID

VARCHAR

128

PK

X

Unique identifier for a trigger associated with the table reload. Note the trigger must be linked to the router.

ROUTER_ID

VARCHAR

50

PK

X

Unique description of the router associated with the table reload. Note the router must be linked to the trigger.

CREATE_TIME

TIMESTAMP

PK

X

Timestamp when this entry was created.

CREATE_TABLE

TINYINT

1

0

X

Flag that indicates that a table create script will be sent as part of the reload

DELETE_FIRST

TINYINT

1

0

X

Flag that indicates that the table will be deleted before loading.

RELOAD_SELECT

LONGVARCHAR

Overrides the initial load select.

BEFORE_CUSTOM_SQL

LONGVARCHAR

SQL Statement to run prior to loading the table

RELOAD_TIME

TIMESTAMP

The timestamp when the reload was started for this node.

LOAD_ID

BIGINT

An id that ties multiple batches together to identify them as being part of a load.

PROCESSED

TINYINT

1

0

X

Flag that indicates that this load was processed into batches.

CHANNEL_ID

VARCHAR

128

The channel that was specified as a source of the load.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.41. TRANSFORM_TABLE

Defines a data loader transformation which can be used to map arbitrary tables and columns to other tables and columns.
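
For illustration, the following insert sketches a transform that maps a source table to a differently named target table at load time. It assumes the default 'sym' table prefix; the transform id, node groups, and table names are hypothetical.

insert into sym_transform_table
  (transform_id, source_node_group_id, target_node_group_id, transform_point,
   source_table_name, target_table_name, column_policy,
   update_action, delete_action, transform_order)
values
  ('item_copy', 'corp', 'store', 'LOAD',
   'ITEM', 'ITEM_COPY', 'SPECIFIED',
   'UPD_ROW', 'DEL_ROW', 1);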

Table 58. TRANSFORM_TABLE

Name

Type

Size

Default

Keys

Not Null

Description

TRANSFORM_ID

VARCHAR

50

PK

X

Unique identifier of a specific transform.

SOURCE_NODE_GROUP_ID

VARCHAR

50

PK FK

X

The node group where data changes are captured.

TARGET_NODE_GROUP_ID

VARCHAR

50

PK FK

X

The node group where data changes will be sent.

TRANSFORM_POINT

VARCHAR

10

X

The point during the transport of captured data at which a transform happens. Supported values are EXTRACT or LOAD.

SOURCE_CATALOG_NAME

VARCHAR

255

Optional name for the catalog the configured table is in.

SOURCE_SCHEMA_NAME

VARCHAR

255

Optional name for the schema a configured table is in.

SOURCE_TABLE_NAME

VARCHAR

255

X

The name of the source table that will be transformed.

TARGET_CATALOG_NAME

VARCHAR

255

Optional name for the catalog a target table is in. Only use this if the target table is not in the default catalog.

TARGET_SCHEMA_NAME

VARCHAR

255

Optional name of the schema a target table is in. Only use this if the target table is not in the default schema.

TARGET_TABLE_NAME

VARCHAR

255

The name of the target table.

UPDATE_FIRST

TINYINT

1

0

If true, the target actions are attempted as updates first, regardless of whether the source operation was an insert or an update.

UPDATE_ACTION

VARCHAR

255

UPDATE_COL

X

An action to take upon update of a row. Possible values are: DEL_ROW, UPD_ROW, INS_ROW or NONE.

DELETE_ACTION

VARCHAR

10

X

An action to take upon delete of a row. Possible values are: DEL_ROW, UPD_ROW, or NONE.

TRANSFORM_ORDER

INTEGER

1

X

Specifies the order in which to apply transforms if more than one target operation occurs.

COLUMN_POLICY

VARCHAR

10

SPECIFIED

X

Specifies whether all columns need to be specified or whether they are implied. Possible values are SPECIFIED or IMPLIED.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.

A.42. TRANSFORM_COLUMN

Defines the column mappings and optional data transformation for a data loader transformation.
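
Continuing the hypothetical transform sketched above, a column mapping for its primary key column could look like the following (default 'sym' prefix assumed; the column names are illustrative).

insert into sym_transform_column
  (transform_id, include_on, target_column_name, source_column_name,
   pk, transform_type, transform_order)
values
  ('item_copy', '*', 'ITEM_ID', 'ITEM_ID',
   1, 'copy', 1);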

Table 59. TRANSFORM_COLUMN

Name

Type

Size

Default

Keys

Not Null

Description

TRANSFORM_ID

VARCHAR

50

PK

X

Unique identifier of a specific transform.

INCLUDE_ON

CHAR

1

*

PK

X

Indicates whether this mapping is included during an insert (I), update (U), or delete (D) operation at the target, based on the DML type at the source. A value of * indicates that the column should be mapped for all operations.

TARGET_COLUMN_NAME

VARCHAR

128

PK

X

Name of the target column.

SOURCE_COLUMN_NAME

VARCHAR

128

Name of the source column.

PK

TINYINT

1

0

Indicates whether this mapping defines a primary key to be used to identify the target row. At least one row must be defined as a pk for each transform_id.

TRANSFORM_TYPE

VARCHAR

50

copy

The name of a specific type of transform. Custom transformers can be configured as extension points.

TRANSFORM_EXPRESSION

LONGVARCHAR

An expression that is specific to the type of transform that is configured in transform_type. See the documentation for each transformer for more details.

TRANSFORM_ORDER

INTEGER

1

X

Specifies the order in which to apply transforms if more than one target operation occurs.

CREATE_TIME

TIMESTAMP

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

Timestamp when a user last updated this entry.

A.43. TRIGGER

Configures database triggers that capture changes in the database. Configuration of which triggers are generated for which tables is stored here. Triggers are created in a node’s database if the source_node_group_id of a router is mapped to a row in this table.
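
As a minimal sketch, the following insert configures change capture for a single table on an assumed 'default' channel; the trigger id and table name are hypothetical, and the default 'sym' table prefix is assumed.

insert into sym_trigger
  (trigger_id, source_table_name, channel_id,
   create_time, last_update_time)
values
  ('item', 'ITEM', 'default',
   current_timestamp, current_timestamp);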

Table 60. TRIGGER

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_ID

VARCHAR

128

PK

X

Unique identifier for a trigger.

SOURCE_CATALOG_NAME

VARCHAR

255

Optional name for the catalog the configured table is in. If the name includes * then a wildcard match on the catalog name will be attempted. Wildcard names can include a list of names that are comma separated. The ! symbol may be used to indicate a NOT match condition.

SOURCE_SCHEMA_NAME

VARCHAR

255

Optional name for the schema a configured table is in. If the name includes * then a wildcard match on the schema name will be attempted. Wildcard names can include a list of names that are comma separated. The ! symbol may be used to indicate a NOT match condition.

SOURCE_TABLE_NAME

VARCHAR

255

X

The name of the source table that will have a trigger installed to watch for data changes. If the name includes * then a wildcard match on the table name will be attempted. Wildcard names can include a list of names that are comma separated. The ! symbol may be used to indicate a NOT match condition.

CHANNEL_ID

VARCHAR

128

FK

X

The channel_id of the channel that data changes will flow through.

RELOAD_CHANNEL_ID

VARCHAR

128

reload

FK

X

The channel_id of the channel that will be used for reloads.

SYNC_ON_UPDATE

TINYINT

1

1

X

Whether or not to install an update trigger.

SYNC_ON_INSERT

TINYINT

1

1

X

Whether or not to install an insert trigger.

SYNC_ON_DELETE

TINYINT

1

1

X

Whether or not to install a delete trigger.

SYNC_ON_INCOMING_BATCH

TINYINT

1

0

X

Whether or not an incoming batch that loads data into this table should cause the triggers to capture data_events. Be careful turning this on, because an update loop is possible.

NAME_FOR_UPDATE_TRIGGER

VARCHAR

255

Override the default generated name for the update trigger.

NAME_FOR_INSERT_TRIGGER

VARCHAR

255

Override the default generated name for the insert trigger.

NAME_FOR_DELETE_TRIGGER

VARCHAR

255

Override the default generated name for the delete trigger.

SYNC_ON_UPDATE_CONDITION

LONGVARCHAR

Specify a condition for the update trigger firing using an expression specific to the database.

SYNC_ON_INSERT_CONDITION

LONGVARCHAR

Specify a condition for the insert trigger firing using an expression specific to the database.

SYNC_ON_DELETE_CONDITION

LONGVARCHAR

Specify a condition for the delete trigger firing using an expression specific to the database.

CUSTOM_BEFORE_UPDATE_TEXT

LONGVARCHAR

Specify update trigger text to execute before the SymmetricDS trigger text runs. If you need to modify data, use custom_on_update_text instead, so data is captured in order. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

CUSTOM_BEFORE_INSERT_TEXT

LONGVARCHAR

Specify insert trigger text to execute before the SymmetricDS trigger text runs. If you need to modify data, use custom_on_insert_text instead, so data is captured in order. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

CUSTOM_BEFORE_DELETE_TEXT

LONGVARCHAR

Specify delete trigger text to execute before the SymmetricDS trigger text runs. If you need to modify data, use custom_on_delete_text instead, so data is captured in order. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

CUSTOM_ON_UPDATE_TEXT

LONGVARCHAR

Specify update trigger text to execute after the SymmetricDS trigger text runs. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

CUSTOM_ON_INSERT_TEXT

LONGVARCHAR

Specify insert trigger text to execute after the SymmetricDS trigger text runs. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

CUSTOM_ON_DELETE_TEXT

LONGVARCHAR

Specify delete trigger text to execute after the SymmetricDS trigger text runs. This field is not applicable for H2, HSQLDB 1.x, or Apache Derby.

EXTERNAL_SELECT

LONGVARCHAR

Specify a SQL select statement that returns a single result. It will be used in the generated database trigger to populate the EXTERNAL_DATA field on the data table.

TX_ID_EXPRESSION

LONGVARCHAR

Override the default expression for the transaction identifier that groups the data changes that were committed together.

CHANNEL_EXPRESSION

LONGVARCHAR

An expression that will be used to capture the channel id in the trigger. This expression will only be used if the channel_id is set to 'dynamic.'

EXCLUDED_COLUMN_NAMES

LONGVARCHAR

Specify a comma-delimited list of columns that should not be synchronized from this table. Note that if a primary key is found in this list, it will be ignored.

INCLUDED_COLUMN_NAMES

LONGVARCHAR

Specify a comma-delimited list of columns that should only be synchronized from this table. Note that if a primary key is found in this list, it will be ignored.

SYNC_KEY_NAMES

LONGVARCHAR

Specify a comma-delimited list of columns that should be used as the key for synchronization operations. By default, if not specified, then the primary key of the table will be used.

USE_STREAM_LOBS

TINYINT

1

0

X

Specifies whether to capture lob data as the trigger is firing or to stream lob columns from the source tables using callbacks during extraction. A value of 1 indicates that lob data is streamed from the source via callback; a value of 0 indicates that lob data is captured by the trigger.

USE_CAPTURE_LOBS

TINYINT

1

0

X

Provides a hint as to whether this trigger will capture big lob data. If set to 1, every effort will be made during data capture in the trigger and during data selection for the initial load to use lob facilities to extract and store data in the database. On Oracle, this may need to be set to 1 to get around 4k concatenation errors during data capture and during initial load.

USE_CAPTURE_OLD_DATA

TINYINT

1

1

X

Set this to 1 to capture old data. Old data is used for conflict resolution and it also is used to calculate which columns have changed. The software will only update changed columns when old data is captured.

USE_HANDLE_KEY_UPDATES

TINYINT

1

0

X

Allows handling of primary key updates (SQLServer dialect only)

STREAM_ROW

TINYINT

1

0

X

Captures only the primary key when the trigger fires and creates a reload event to pull the full row during extraction.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.44. TRIGGER_HIST

A history of a table’s definition and the trigger used to capture data from the table. When a database trigger captures a data change, it references a trigger_hist entry so it is possible to know which columns the data represents. trigger_hist entries are made during the sync trigger process, which runs at each startup, each night in the syncTriggersJob, or any time the syncTriggers() JMX method is manually invoked. A new entry is made when a table definition or a trigger definition is changed, which causes a database trigger to be created or rebuilt.

Table 61. TRIGGER_HIST

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_HIST_ID

INTEGER

PK

X

Unique identifier for a trigger_hist entry

TRIGGER_ID

VARCHAR

128

X

Unique identifier for a trigger

SOURCE_TABLE_NAME

VARCHAR

255

X

The name of the source table that will have a trigger installed to watch for data changes.

SOURCE_CATALOG_NAME

VARCHAR

255

The catalog name where the source table resides.

SOURCE_SCHEMA_NAME

VARCHAR

255

The schema name where the source table resides.

NAME_FOR_UPDATE_TRIGGER

VARCHAR

255

The name used when the update trigger was created.

NAME_FOR_INSERT_TRIGGER

VARCHAR

255

The name used when the insert trigger was created.

NAME_FOR_DELETE_TRIGGER

VARCHAR

255

The name used when the delete trigger was created.

TABLE_HASH

BIGINT

0

X

A hash of the table definition, used to detect changes in the definition.

TRIGGER_ROW_HASH

BIGINT

0

X

A hash of the trigger definition. If changes are detected to the values that affect a trigger definition, then the trigger will be regenerated.

TRIGGER_TEMPLATE_HASH

BIGINT

0

X

A hash of the trigger text. If changes are detected to the values that affect the trigger text, then the trigger will be regenerated.

COLUMN_NAMES

LONGVARCHAR

X

The column names defined on the table. The column names are stored in comma-separated values (CSV) format.

PK_COLUMN_NAMES

LONGVARCHAR

X

The primary key column names defined on the table. The column names are stored in comma-separated values (CSV) format.

LAST_TRIGGER_BUILD_REASON

CHAR

1

X

The following reasons for a change are possible: new trigger that has not been created before (N); schema changes in the table were detected (S); configuration changes in the trigger (C); trigger was missing (T); trigger template changed (E); forced rebuild (F).

ERROR_MESSAGE

LONGVARCHAR

Record any errors or warnings that occurred when attempting to build the trigger.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

INACTIVE_TIME

TIMESTAMP

The date and time when a trigger was inactivated.

A.45. TRIGGER_ROUTER

Map a trigger to a router.
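
Continuing the hypothetical trigger and router examples sketched earlier, the mapping itself is a single row (default 'sym' prefix assumed):

insert into sym_trigger_router
  (trigger_id, router_id, initial_load_order,
   create_time, last_update_time)
values
  ('item', 'corp_to_store', 100,
   current_timestamp, current_timestamp);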

Table 62. TRIGGER_ROUTER

Name

Type

Size

Default

Keys

Not Null

Description

TRIGGER_ID

VARCHAR

128

PK FK

X

The id of a trigger.

ROUTER_ID

VARCHAR

50

PK FK

X

The id of a router.

ENABLED

TINYINT

1

1

X

Indicates whether this trigger router is enabled or not.

INITIAL_LOAD_ORDER

INTEGER

1

X

Order sequence of this table when an initial load is sent to a node. If this value is the same for multiple tables, then SymmetricDS will attempt to order the tables according to FK constraints. If this value is set to a negative number, then the table will be excluded from an initial load.

INITIAL_LOAD_SELECT

LONGVARCHAR

Optional expression that can be used to pare down the data selected from a table during the initial load process.

INITIAL_LOAD_DELETE_STMT

LONGVARCHAR

The expression that is used to delete data when an initial load occurs. If this field is empty, no delete will occur before the initial load. If this field is not empty, the text will be used as a sql statement and executed for the initial load delete.

INITIAL_LOAD_BATCH_COUNT

INTEGER

1

Only applicable if the initial load extract job is enabled. The number of batches to split an initial load of a table across. If 0 then a select count(*) will be used to dynamically determine the number of batches based on the max_batch_size of the reload channel.

PING_BACK_ENABLED

TINYINT

1

0

X

When enabled, the node will route data that originated from a node back to that node. This attribute is only effective if sync_on_incoming_batch is set to 1.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

A.46. TRIGGER_ROUTER_GROUPLET

This table defines which grouplets are associated with which trigger routers. The existence of a grouplet for a trigger_router enables it for nodes associated with the grouplet and at the same time disables the trigger router for all other nodes.

Table 63. TRIGGER_ROUTER_GROUPLET

Name

Type

Size

Default

Keys

Not Null

Description

GROUPLET_ID

VARCHAR

50

PK FK

X

Unique identifier for the grouplet.

TRIGGER_ID

VARCHAR

128

PK FK

X

The id of a trigger.

ROUTER_ID

VARCHAR

50

PK FK

X

The id of a router.

APPLIES_WHEN

CHAR

1

PK

X

Indicates the side that a grouplet should be applied to. Use 'T' for target, 'S' for source, and 'B' for both source and target.

CREATE_TIME

TIMESTAMP

X

Timestamp when this entry was created.

LAST_UPDATE_BY

VARCHAR

50

The user who last updated this entry.

LAST_UPDATE_TIME

TIMESTAMP

X

Timestamp when a user last updated this entry.

Appendix B: Parameter List

There are two kinds of parameters that can be used to configure the behavior of SymmetricDS: Startup Parameters and Runtime Parameters. Startup Parameters must be provided as a system property or in a property file, while Runtime Parameters can also be stored in the Parameter table in the database. Parameters are re-queried from their source at a configured interval and can also be refreshed on demand by using the JMX API.
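
As a sketch, a Startup Parameter is typically set in an engine properties file (for example, a line such as auto.registration=true), while a Runtime Parameter can also be stored in the Parameter table. The insert below is illustrative only; it assumes the Parameter table's external_id, node_group_id, param_key, and param_value columns plus the standard audit columns, the default 'sym' table prefix, and the 'ALL' value to target all nodes.

insert into sym_parameter
  (external_id, node_group_id, param_key, param_value,
   create_time, last_update_by, last_update_time)
values
  ('ALL', 'ALL', 'auto.registration', 'true',
   current_timestamp, 'admin', current_timestamp);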

The following table shows the source of parameters and the hierarchy of precedence.

Table 64. Parameter Discovery Precedence
Location Required Description

symmetric-default.properties

Y

Packaged inside symmetric-core jar file. This file has all the default settings along with descriptions.

conf/symmetric.properties

N

Changes to this file in the conf directory of a standalone install apply to all engines in the JVM.

symmetric-override.properties

N

Changes to this file, provided by the end user in the JVM’s classpath, apply to all engines in the JVM.

engines/*.properties

N

Properties for a specific engine or node that is hosted in a standalone install.

Java System Properties

N

Any SymmetricDS property can be passed in as a -D property to the runtime. It will take precedence over any properties file property.

Parameter table

N

A table which contains SymmetricDS parameters. Parameters can be targeted at a specific node group and even at a specific external id. These settings will take precedence over all of the above.

IParameterFilter

N

An extension point which allows parameters to be sourced from another location or customized. These settings will take precedence over all of the above.

B.1. Startup Parameters

Startup parameters are read once from properties files and apply only during start up. The following properties are used:

auto.config.database

If this is true, when symmetric starts up it will try to create the necessary tables.

Default: true

auto.config.registration.svr.sql.script

Provide the path to a SQL script that can be run to do initial setup of a registration server. This script will only be run on a registration server if the node_identity cannot be found.

Default:

auto.insert.registration.svr.if.not.found

If this is true, then node, group, security and identity rows will be inserted if the registration.url is blank and there is no configured node identity.

Default: true

auto.sync.config.after.upgrade

If this is true, then check if configuration should be pulled from registration server after upgrading. If the config version in the database does not match the software version, it will pull config.

Default: true

auto.update.node.values.from.properties

Update the node row in the database from the local properties during a heartbeat operation.

Default: true

cache.table.time.ms

This is the amount of time table meta data will be cached before re-reading it from the database

Default: 3600000

cluster.server.id

Set this if you want to give your server a unique name to be used to identify which server did what action. Typically useful when running in a clustered environment. This is currently used by the ClusterService when locking for a node.

Default:

db.connection.properties

These are settings that will be passed to the JDBC driver as connection properties. A suggested setting for Oracle is: db.connection.properties=oracle.net.CONNECT_TIMEOUT=300000;oracle.net.READ_TIMEOUT=300000;SetBigStringTryClob=true

Default:

db.delimited.identifier.mode

Determines whether delimited identifiers are used or normal SQL92 identifiers (which may only contain alphanumeric characters and the underscore, must start with a letter, and cannot be a reserved keyword).

Default: true

db.driver

Specify your database driver

Default: org.h2.Driver

db.init.sql

Specify a SQL statement that will be run when a database connection is created

Default:

db.jdbc.execute.batch.size

This is the default number of rows that will be sent to the database as a batch when SymmetricDS uses the JDBC batch API. Currently, only routing uses JDBC batch. The data loader does not.

Default: 100

db.jdbc.isolation.level

Override the JDBC isolation level. The isolation level is detected by platform and automatically set, but it can be overridden here. Most platforms need at least read committed level to prevent phantom reads. (0=none, 1=read uncommitted, 2=read committed, 4=repeatable read, 8=serializable)

Default:

db.jdbc.streaming.results.fetch.size

This is the default fetch size for streaming result sets.

Default: 100

db.jndi.name

Name of a JNDI data source to use instead of using SymmetricDS’s connection pool. When this is set the db.url is ignored. Using a JNDI data source is relevant when deploying to an application server.

Default:

db.metadata.ignore.case

Indicates that case should be ignored when looking up references to tables using the database’s metadata api.

Default: true

db.native.extractor

Name of class that can extract native JDBC objects and interact directly with the driver. Spring uses this to perform operations specific to database, like handling LOBs on Oracle.

Default: org.springframework.jdbc.support.nativejdbc.CommonsDbcpNativeJdbcExtractor

db.password

Specify your database password

Default:

db.pool.initial.size

The initial size of the connection pool

Default: 5

db.pool.max.active

The maximum number of connections that will be allocated in the pool. The http.concurrent.workers.max value should be half or less than half of this value.

Default: 40

db.pool.max.idle

The maximum number of connections that can remain idle in the pool, without extra ones being released

Default: 20

db.pool.max.wait.millis

This is how long a request for a connection from the datasource will wait before giving up.

Default: 30000

db.pool.min.evictable.idle.millis

This is how long a connection can be idle before it will be evicted.

Default: 120000

db.pool.min.idle

The minimum number of connections that can remain idle in the pool, without extra ones being created

Default: 5

db.read.strings.as.bytes

If set to true, forces database columns that contain character data to be read as bytes (bypassing JDBC driver character encoding) so the raw values are encoded using the system default character set (usually UTF-8). This property was added to bypass MySQL character encoding so the raw data can be converted to UTF-8 directly.

Default: false

db.sql.query.timeout.seconds

Most symmetric queries have a timeout associated with them. This is the default.

Default: 300

db.url

Specify your database URL

Default: jdbc:h2:mem:setme

db.user

Specify your database user

Default: please set me

db.validation.query

This is the query used to validate a database connection in the connection pool. It is database specific. The following are example statements for different databases:
MySQL: db.validation.query=select 1
Oracle: db.validation.query=select 1 from dual
DB2: db.validation.query=select max(1) from syscat.datatypes

Default:

db2.capture.transaction.id

Turn on the capture of transaction id for DB2 systems that support it.

Default: false

db2.zseries.version

Use to map the version string a zseries jdbc driver returns to the 'zseries' dialect

Default: DSN08015

engine.name

This is the engine name. This should be set if you have more than one engine running in the same JVM. It is used to name the JMX management bean. Please do not use underscores in this name.

Default: SymmetricDS

external.id

The external id for this SymmetricDS node. The external id is usually used as all or part of the node id.

Default: please set me

group.id

The node group id that this node belongs to

Default: please set me

hsqldb.initialize.db

If using the HsqlDbDialect, this property indicates whether SymmetricDS should set up the embedded database properties or if an external application will be doing so.

Default: true

http.concurrent.reservation.timeout.ms

This is the amount of time the host will keep a concurrent connection reservation after it has been attained by a client node while waiting for the subsequent reconnect to push.

Default: 20000

jmx.line.feed

Specify the type of line feed to use in JMX console methods. Possible values are: text or html.

Default: text

job.random.max.start.time.ms

When starting jobs, symmetric attempts to randomize the start time to spread out load. This is the maximum wait period before starting a job.

Default: 10000

mysql.bulk.load.local

Whether or not files are local to the client only, in which case the file must be sent to MySQL to load. If the client is running on the same server as MySQL, this can be set to false to have MySQL read the file directly.

Default: true

mysql.bulk.load.max.bytes.before.flush

Maximum number of bytes to write to file before running with 'LOAD DATA INFILE' to MySQL

Default: 1000000000

mysql.bulk.load.max.rows.before.flush

Maximum number of rows to write to file before running with 'LOAD DATA INFILE' to MySQL

Default: 100000

registration.url

This is the URL this node will use to register and pull its configuration. If this is the root server, then this may remain blank and the configuration should be inserted directly into the database.

Default: please set me

route.on.extract

Whether the routing job will be run automatically when a push or pull is started.

Default: false

routing.use.fast.gap.detector

Use a faster method of gap detection that uses the output of the work from router service instead of querying for it.

Default: true

security.service.class.name

The class name for the Security Service to use for encrypting and decrypting database passwords

Default: org.jumpmind.security.SecurityService

staging.dir

This is the location where the staging directory will be placed. If it isn’t set, the staging directory will be located according to java.io.tmpdir.

Default:

start.heartbeat.job

Whether the heartbeat job is enabled for this node. The heartbeat job simply inserts an event to update the heartbeat_time column on the node_host table for the current node.

Default: true

start.initial.load.extract.job

Whether the background initial load extractor job is started.

Default: true

start.monitor.job

Whether the monitor job is started.

Default: true

start.pull.job

Whether the pull job is enabled for this node.

Default: true

start.purge.job

Whether the purge job is enabled for this node.

Default: true

start.push.job

Whether the push job is enabled for this node.

Default: true

start.refresh.cache.job

Whether the refresh cache job is enabled for this node.

Default: false

start.route.job

Whether the routing job is enabled for this node.

Default: true

start.stage.management.job

Whether the stage management job is enabled for this node.

Default: true

start.stat.flush.job

Whether the statistic flush job is enabled for this node.

Default: true

start.synctriggers.job

Whether the sync triggers job is enabled for this node.

Default: true

start.watchdog.job

Whether the watchdog job is enabled for this node.

Default: true

sync.table.prefix

When symmetric tables are created and accessed, this is the prefix to use for the tables.

Default: sym

sync.url

The URL that can be used to access this SymmetricDS node. The default setting of http://$(hostName):31415/sync should be valid if the standalone launcher is used with the default settings. The tokens $(hostName) and $(ipAddress) are supported for this property.

transport.type

Specify the transport type. Supported values currently include: http, file, internal.

Default: http

web.batch.servlet.enable

Indicate whether the batch servlet (which allows specific batches to be requested) is enabled.

Default: true

B.2. Runtime Parameters

Runtime parameters are read periodically from properties files or the database. The following properties are used:

as400.cast.clob.to

Specify the database type to cast clob values to

Default: DBCLOB

auto.registration

If this is true, registration is opened automatically for nodes requesting it.

Default: false

auto.reload

If this is true, a reload is automatically sent to nodes when they register

Default: false

auto.reload.reverse

If this is true, a reload is automatically sent from a source node to all target nodes after the source node has registered.

Default: false

auto.resolve.foreign.key.violation

If this is true, when a batch receives a foreign key violation, the missing data will be automatically sent to resolve it.

Default: true

auto.start.engine

This indicates whether this node engine should be started when the instance is restarted

Default: true

auto.sync.config.at.startup

If this is true, then check if configuration should be pulled from registration server at startup. If the config version in the database does not match the software version, it will pull config.

Default: true

auto.sync.configuration

Capture and send SymmetricDS configuration changes to client nodes.

Default: true

auto.sync.configuration.on.incoming

Whether triggers should fire when changes sync into the node that this property is configured for.

Default: true

auto.sync.triggers

If this is true, triggers will be created or dropped to match configuration during the sync triggers process.

Default: true

auto.sync.triggers.after.config.change

If this is true, when a configuration change is detected during routing, symmetric will make sure all triggers in the database are up to date.

Default: true

auto.sync.triggers.after.config.loaded

If this is true, when a configuration change is detected while being loaded onto a target node, symmetric will make sure all triggers in the database are up to date.

Default: true

auto.sync.triggers.at.startup

If this is true, then run the sync triggers process at startup

Default: true

bsh.load.filter.handles.missing.tables

This parameter can be used to indicate that BeanShell load filters will handle missing tables. This is useful, for example, when you want to make global catalog or schema changes at the destination in cases where the catalog, schema, or table doesn’t exist but the BSH filter will handle it.

Default: false

bsh.transform.global.script

BeanShell script to include at the beginning of all scripts used in transforms

Default:

cache.channel.common.batches.time.ms

This is the amount of time the routing service will cache the common batch status of channels.

Default: 600000

cache.channel.default.router.time.ms

This is the amount of time the routing service will cache the default router status of channels.

Default: 600000

cache.channel.time.ms

This is the amount of time channel entries will be cached before re-reading them from the database.

Default: 600000

cache.conflict.time.ms

This is the amount of time conflict setting entries will be cached before re-reading them from the database.

Default: 600000

cache.grouplets.time.ms

This is the amount of time grouplet entries will be cached before re-reading them from the database.

Default: 600000

cache.load.filter.time.ms

This is the amount of time load filter entries will be cached before re-reading them from the database.

Default: 600000

cache.monitor.time.ms

This is the amount of time monitor entries will be cached before re-reading them from the database.

Default: 60000

cache.node.group.link.time.ms

This is the amount of time node group links entries will be cached before re-reading them from the database.

Default: 600000

cache.node.security.time.ms

This is the amount of time node security entries will be cached before re-reading them from the database.

Default: 600000

cache.node.time.ms

This is the amount of time node entries will be cached before re-reading them from the database.

Default: 600000

cache.notification.time.ms

This is the amount of time notification entries will be cached before re-reading them from the database.

Default: 60000

cache.transform.time.ms

This is the amount of time transform entries will be cached before re-reading them from the database.

Default: 600000

cache.trigger.router.time.ms

This is the amount of time trigger entries will be cached before re-reading them from the database.

Default: 600000

cluster.lock.enabled

Enables clustering of jobs.

Default: false

cluster.lock.timeout.ms

Indicate that this node is being run on a farm or cluster of servers and it needs to use the database to 'lock' out other activity when actions are taken.

Default: 7200000

compression.level

Set the compression level this node will use when compressing synchronization payloads. See java.util.zip.Deflater: NO_COMPRESSION = 0, BEST_SPEED = 1, BEST_COMPRESSION = 9, DEFAULT_COMPRESSION = -1.

Default: -1

compression.strategy

Set the compression strategy this node will use when compressing synchronization payloads. See java.util.zip.Deflater: FILTERED = 1, HUFFMAN_ONLY = 2, DEFAULT_STRATEGY = 0.

Default: 0

console.report.as.offline.minutes

Setting that defines when a Node should be considered "offline." The node offline monitor uses this setting.

Default: 1440

create.table.not.null.columns.supported

If set to true, when a table’s schema is sent to the target database it will use NOT NULL statements to match the source. If this is false, NOT NULL will not be included in the SQL.

Default: true

create.table.without.defaults

If set to true, when a table’s schema is sent to the target database default values will not be included.

Default: false

create.table.without.foreign.keys

If set to true, when a table’s schema is sent to the target database foreign keys will not be included.

Default: false

create.table.without.pk.if.source.without.pk

If set to true, when a table’s schema is sent to the target database it will not have all columns set as the primary key if the source does not have any primary keys.

Default: false

data.id.increment.by

This is the expected increment value for the data_id in the data table. This is useful if you use auto_increment_increment and auto_increment_offset in MySQL. Note that these settings require innodb_autoinc_lock_mode=0, otherwise the increment and offset are not guaranteed.

Default: 1

dataextractor.enable

Set to false to disable the extraction of all channels except the config channel.

Default: true

dataextractor.text.column.expression

Provide an expression that will be used in the trigger templates, and in the initial load and the sym_data extraction SQL, for all text-based column values (like varchar, char, nvarchar, clob, and nchar columns). The expression can be used to make scenario-based casts. For example, if the data in the database was inserted under a different character set than the default character set on Oracle, then a helpful expression might be something like this: convert($(columnName), 'AR8ISO8859P6', 'AR8MSWIN1256')

Default:

dataloader.apply.changes.only

Indicates that old data should be used to create the update statement. If old data is equal to the new data and this property is set to true, then no update statement will be run.

Default: true

dataloader.create.table.alter.to.match.db.case

Whether to alter the case of the database tables that are created by the SymmetricDS data loader to match the default case of the target database.

Default: true

dataloader.enable

Set to false to disable the loading of all channels except the config channel. This property can be used to allow all changes to be extracted without introducing other changes, in order to allow maintenance operations.

Default: true

dataloader.error.save.curval

Indicates that the current value of the row should be recorded in the incoming_error table

Default: false

dataloader.fit.to.column

Indicate that the data loader should truncate data that is bigger than the target columns can handle. This applies to text-based columns only.

Default: false

dataloader.ignore.missing.tables

Tables that are missing at the target database will be ignored. This should be set to true if you expect that in some clients a table might not exist. If set to false, the batch will fail.

Default: false

dataloader.max.rows.before.commit

This is the maximum number of rows that will be supported in a single transaction. If the database transaction row count reaches a size that is greater than this number, then the transaction will be auto committed. A value of -1 indicates that there is no size limit.

Default: 10000

dataloader.sleep.time.after.early.commit

Amount of time to sleep before continuing data load after dataloader.max.rows.before.commit rows have been loaded. This is useful to give other application threads a chance to do work before continuing to load.

Default: 5

dataloader.text.column.expression

Provide a SQL expression that will be used by the data loader in DML statements for all text-based column values (like varchar, char, nvarchar, clob, and nchar columns). The expression can be used to make scenario-based casts. For example, if the data in the database should be converted to a different character set than the default character set on Oracle, then a helpful expression might be something like this: convert($(columnName), 'AR8MSWIN1256', 'AR8ISO8859P6')

Default:

dataloader.use.primary.keys.from.source

Indicates that the database writer should use the primary keys from the source. Set this to false if you want updates and deletes to be based on the primary key as defined by the target table.

Default: true

datareload.batch.insert.transactional

Indicate whether the process of inserting data, data_events and outgoing_batches for a reload is transactional. The only reason this might be marked as false is to reduce possible contention while multiple nodes connect for reloads at the same time.

Default: true

db.master.collation

For SQL Server, work around "Implicit conversion of varchar" issues by explicitly collating varchar columns in the database trigger. Relevant when the default database collation does not match the collation of the varchar columns of a table.

Default:

db.treat.date.time.as.varchar.enabled

This is a setting that instructs the data capture and data load to treat JDBC TIME, DATE, and TIMESTAMP columns as if they were VARCHAR columns. This means that the columns will be captured and loaded in the form that the database stores them. Setting this to true on MySQL will allow datetime columns with the value of '0000-00-00 00:00:00' to be synchronized.

Default: false

dbf.router.validate.header

Determines if the *.DBF file headers should be validated when using the DBF Router

Default: true

extensions.xml

Spring xml configuration for extension points. This property enables maintaining Spring extension point configuration in the database. After changing this property a server restart is required.

Default:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:context="http://www.springframework.org/schema/context"
           xmlns:util="http://www.springframework.org/schema/util"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">
</beans>
external.id.is.unique.enabled

null

Default: true

file.pull.lock.timeout.ms

null

Default: 7200000

file.pull.period.minimum.ms

null

Default: 0

file.pull.thread.per.server.count

null

Default: 1

file.push.lock.timeout.ms

null

Default: 7200000

file.push.period.minimum.ms

null

Default: 0

file.push.thread.per.server.count

null

Default: 1

file.sync.delete.ctl.file.after.sync

If the ctl file is used to control file triggers, this will allow the system to remove the ctl file after sync but leave the source file.

Default: false

file.sync.enable

Enables File Synchronization capabilities

Default: false

file.sync.fast.scan

Fast scan will look for files that were modified since the last run of file sync tracker and commit their changes using the data loader max commit row setting. When it finds modified directories, it compares to the file snapshot to find changes. For a large file system, this is faster and more efficient than the normal tracker.

Default: false

file.sync.lock.wait.ms

How long file sync should wait in millis for the exclusive lock used by file tracker or the shared lock used by file sync push/pull jobs.

Default: 300000

file.sync.prevent.ping.back

Record each file received in the sym_incoming_file table, which is checked when syncing outgoing files to prevent a "ping back" where the same file change is sent back and forth during bi-directional sync. If you aren’t using bi-directional sync, turn this off for better performance.

Default: true

file.sync.use.crc

Calculate a checksum for each file (using CRC32), which is used to detect a file collision if the target file has a different checksum. If you don’t need to detect conflicts, turn this off for better performance.

Default: true

file.sync.use.ctl.as.file.ext

If the ctl file is used to control file triggers, this will look for a control file with the same name but with .ctl replacing the existing extension. For example, temp.txt would need a control file called temp.ctl instead of temp.txt.ctl.

Default: false

firebird.extract.varchar.row.old.pk.data

On Firebird database, the varchar sizes to use during extracting the row_data, old_data, and pk_data from sym_data. Specify the values as comma-separated for row, old, and pk respectively. By casting to varchar and using small sizes, performance of extract is improved. The entire row size must be under 64K or you will get a "block size exceeds implementation restriction" error. If you need to extract larger sizes, then enable the contains_big_lobs on the channel.

Default: 20000,20000,1000

heartbeat.sync.on.push.enabled

Specify whether to push node_host records to configured push clients. If this is true the node for this instance and the node_host rows for all children instances will be pushed to all nodes that this node is configured to push to.

Default: true

heartbeat.sync.on.push.period.sec

This is the number of seconds between when the sym_node_host table’s heartbeat_time column is updated. This property depends on the frequency of the heartbeat job. If the heartbeat job is set to run every 10 minutes and this property is set to 10 seconds, then the heartbeat will only update every 10 minutes.

Default: 0

heartbeat.sync.on.startup

When this property is set to true the heartbeat process will run at server startup. Prior to 3.4 the heartbeat always happened at startup.

Default: false

heartbeat.update.node.with.batch.status

When this is set to true, SymmetricDS will update fields in the sym_node table that indicate the number of outstanding errors and/or batches it has pending

Default: false

http.compression

Whether or not to use compression over HTTP connections. Currently, this setting only affects the push connection of the source node. Compression on a pull is enabled using a filter in the web.xml for the PullServlet. @see web.compression.disabled to enable/disable the filter

Default: true

http.concurrent.workers.max

This is the number of HTTP concurrent push/pull requests SymmetricDS will accept. This is controlled by the NodeConcurrencyFilter. The number is per servlet the filter is applied to. The db.pool.max.active value should be twice this value.

Default: 20

http.push.stream.output.enabled

The HTTP client connection, during a push, buffers the entire outgoing payload locally before sending it. Set this to true if you are getting heap space errors during a push. Note that basic auth may not work when this is turned on.

Default: true

http.push.stream.output.size

When HTTP chunking is turned on, this is the size to use for each chunk.

Default: 30720

http.timeout.ms

Sets both the connection and read timeout on the internal HttpUrlConnection

Default: 90000

https.verified.server.names

During SSL handshaking, if the URL’s hostname and the server’s identification hostname mismatch, the verification mechanism will check this comma separated list of server names to see if the cert should be accepted (see javax.net.ssl.HostnameVerifier.) Set this value equal to 'all' if all server names should be accepted. Set this value to blank if a valid SSL cert is required.

Default:

hybrid.push.pull.buffer.status.updates

This controls whether or not the ReportStatus job will buffer its status updates. When buffered, status updates will only be sent when a channel’s batch to send count goes from 0 to non-zero.

Default: true

hybrid.push.pull.enabled

Enable hybrid push/pull functionality. Allows for pull configuration, but also allows clients to report their outgoing batch status so that the nodes that have data can be prioritized for pulling.

Default: false

hybrid.push.pull.timeout.ms

When hybrid.push.pull.enabled=true, how much time in millis has to go by to pull from a node that hasn’t reported pending batches.

Default: 3600000

incoming.batches.record.ok.enabled

Indicates whether batches that have loaded successfully should be recorded in the incoming_batch table. Note that if this is set to false, then duplicate batches will NOT be skipped because SymmetricDS will have no way of knowing that a batch has already loaded. This parameter can be set to false to reduce contention on sym_incoming_batch for systems with many clients.

Default: true

incoming.batches.skip.duplicates

This instructs symmetric to attempt to skip duplicate batches that are received. Symmetric might be more efficient when recovering from error conditions if this is set to true, but you run the risk of missing data if the batch ids get reset (on one node, but not another) somehow (which is unlikely in production, but fairly likely in lab or development setups).

Default: true

initial.load.after.sql

This is SQL that will run on the client after an initial load finishes. The default delimiter for these lines is a semicolon. To override it, include a single line that starts with 'delimiter', followed by the new delimiter and then the old one. For example, a line that reads 'delimiter $;' would change subsequent SQL lines to use a delimiter of $.

Default:

initial.load.before.sql

This is SQL that will run on the client before an initial load starts. The default delimiter for these lines is a semicolon. To override it, include a single line that starts with 'delimiter', followed by the new delimiter and then the old one. For example, a line that reads 'delimiter $;' would change subsequent SQL lines to use a delimiter of $.

Default:

initial.load.block.channels

Initial load and reload events should normally block other channels to ensure each table is loaded first followed by changes captured during the initial load. Setting this to false will allow all channels to load in priority order even when reload events or an initial load is running.

Default: true

initial.load.concat.csv.in.sql.enabled

Indicates that the SQL used to extract data from a table for an initial load should concatenate the data using the same SQL expression that a trigger uses versus concatenating the data in code.

Default: false

initial.load.create.first

Set this if tables should be created prior to an initial load.

Default: false

initial.load.delete.first

Set this if tables should be purged prior to an initial load.

Default: false

initial.load.delete.first.sql

This is the SQL statement that will be used for purging a table during an initial load.

Default: delete from %s

initial.load.extract.and.send.when.staged

Indicate that the initial load batches will be sent as soon as they are staged. Used in combination with initial.load.use.extract.job.enabled=true

Default: true

initial.load.extract.thread.per.server.count

The number of threads available for concurrent extracts of initial load batches.

Default: 20

initial.load.extract.timeout.ms

The number of milliseconds to wait until the lock will be broken on an initial load extract job.

Default: 7200000

initial.load.reverse.after.sql

This is SQL that will run on the server after a reverse initial load finishes.

Default:

initial.load.reverse.before.sql

This is SQL that will run on the server before a reverse initial load starts.

Default:

initial.load.reverse.first

Indicate that if both the initial load and the reverse initial load are requested, then the reverse initial load should take place first.

Default: true

initial.load.schema.dump.command

Specify a system command that writes the structure of the database to system.out to be captured and sent to the node that is being initial loaded. Used in conjunction with initial.load.schema.load.command. An example is: pg_dump --dbname=server --schema=my_schema --schema-only --clean

Default:

initial.load.schema.load.command

Specify a system command that will take the content captured by initial.load.schema.dump.command and apply it to the database. The content is passed to the system command via system.in. An example is: psql --output=output.log --dbname=client

Default:

initial.load.use.extract.job.enabled

Indicate that the initial load extract job should be used to extract reload batches.

Default: true

initial.load.use.reload.channel

Indicate that the initial load events should be put on the reload channel. If this is set to false, each table will be put on its assigned channel during the reload.

Default: true

job.file.sync.pull.period.time.ms

null

Default: 60000

job.file.sync.push.period.time.ms

null

Default: 60000

job.file.sync.tracker.cron

null

Default: 0 0/5 * * * *

job.heartbeat.period.time.ms

This is how often the heartbeat job runs. Note that this doesn’t mean that a heartbeat is performed this often. See heartbeat.sync.on.push.period.sec to change how often the heartbeat is sync’d

Default: 900000

job.initial.load.extract.period.time.ms

This is how often the initial load extract queue job will run in the background

Default: 10000

job.monitor.period.time.ms

null

Default: 60000

job.offline.pull.period.time.ms

This is how often the offline pull job will be run to schedule offline reading of batch files from nodes.

Default: 60000

job.offline.push.period.time.ms

This is how often the offline push job will be run to schedule offline writing of batch files for nodes.

Default: 60000

job.pull.period.time.ms

This is how often the pull job will be run to schedule pulls of nodes.

Default: 60000

job.purge.first.pass

Enables a first pass purge for sym_data and sym_data_event that quickly purges the beginning of the table that precedes outstanding batches. These delete statements don’t use joins, so they run quicker.

Default: false

job.purge.first.pass.outstanding.batches.threshold

The maximum number of outstanding batches allowed for running the first pass purge. If there are too many outstanding batches, it will take too long to find their first data_id, so it shouldn’t be run.

Default: 1000

job.purge.incoming.cron

This is how often the incoming batch purge job will be run.

Default: 0 0 0 * * *

job.purge.max.num.batches.to.delete.in.tx

This is the number of batches that will be purged in one database transaction.

Default: 5000

job.purge.max.num.data.event.batches.to.delete.in.tx

This is the number of batches that will be purged from the data_event table in one database transaction.

Default: 5

job.purge.max.num.data.to.delete.in.tx

This is the number of data ids that will be purged in one database transaction.

Default: 5000

job.purge.outgoing.cron

This is how often the outgoing batch and data purge job will be run.

Default: 0 0 0 * * *

job.push.period.time.ms

This is how often the push job will be run to schedule pushes to nodes.

Default: 60000

job.refresh.cache.cron

This is when the refresh cache job will run.

Default: 0/30 * * * * *

job.report.status.cron

This is how often a client will push its status to the root server. Used in conjunction with hybrid.push.pull.

Default: 0 0/5 * * * *

job.routing.period.time.ms

This is how often the router will run in the background

Default: 10000

job.stage.management.cron

This is when the stage management job will run.

Default: 0 0 * * * *

job.stat.flush.cron

This is how often accumulated statistics will be flushed out to the database from memory.

Default: 0 0/5 * * * *

job.sync.config.cron

This is when the sync config job will run.

Default: 0 0/10 1 * * *

job.synctriggers.cron

This is when the sync triggers job will run.

Default: 0 0 0 * * *

job.watchdog.period.time.ms

null

Default: 3600000

jobs.synchronized.enable

If jobs need to be synchronized so that only one job can run at a time, set this parameter to true

Default: false

lock.timeout.ms

The amount of time a thread can hold a shared or exclusive lock before another thread can break the lock. The timeout is a safeguard in case an unexpected exception causes a lock to be abandoned. Restarting the service will clear all locks.

Default: 1800000

lock.wait.retry.ms

While waiting for a lock to be released, how often should we check the lock status in the sym_lock table in the database.

Default: 10000

log.conflict.resolution

Whether logging is enabled for conflict resolution

Default: false

monitor.events.capture.enabled

Enable capturing of monitor events and syncing to other nodes.

Default: true

mssql.allow.only.row.level.locks.on.runtime.tables

Automatically update data, data_event and outgoing_batch tables to allow only row level locking.

Default: true

mssql.bulk.load.field.terminator

Specify the field terminator used by the SQL Server bulk loader. Pick something that does not exist in the data in your database.

Default: ||

mssql.bulk.load.fire.triggers

Whether or not triggers should be allowed to fire when bulk loading data.

Default: false

mssql.bulk.load.max.rows.before.flush

Maximum number of rows to write to file before running with "BULK INSERT" to SQL-Server

Default: 100000

mssql.bulk.load.row.terminator

Specify the line terminator used by the SQL Server bulk loader. Pick something that does not exist in the data in your database.

Default: \r\n

mssql.bulk.load.unc.path

Specify a UNC network path to the tmp\bulkloaddir directory for SQL Server to access bulk load files. Use this property with bulk loader when SymmetricDS is on a separate server from SQL Server.

Default:

mssql.include.catalog.in.triggers

Includes the catalog/database name within generated triggers (catalog.schema.table). May need to be turned off to support backup processes such as creating a bacpac file.

Default: true

mssql.lock.escalation.disabled

Disables lock escalation and turns off page level locking. May need to be turned off to support backup processes such as creating a bacpac file.

Default: true

mssql.trigger.execute.as

Specify the user the SymmetricDS triggers should execute as. Possible values are { CALLER | SELF | OWNER | 'user_name' }

Default: caller

mssql.trigger.order.first

Set the order of triggers to 'First' using sp_settriggerorder after creating triggers. This is needed when the user has existing custom triggers that modify data. The SymmetricDS triggers need to fire first and capture the first change so that order of changes is preserved. If the user has a trigger set as 'First', it will be changed to 'None'.

Default: false

mssql.use.ntypes.for.sync

Use ntext for the data capture columns and cast to nvarchar(max) in the trigger text so that nvarchar, ntext, and nchar double-byte data isn’t lost when the database collation for char types isn’t compatible with national character (nchar) types.

Default: false

mysql.bulk.load.replace

Whether or not to replace rows that already exist, based on primary key or unique key. If set to false, duplicates will be skipped.

Default: true

node.copy.mode.enabled

If the copy mode is enabled and the node starts up with an identity that does not match the configured external id, then the node will register with a special parameter that indicates the registration server should copy outgoing batch to the new node id.

Default: false

node.id.creator.script

This is a BeanShell script that will be used to generate the node id for a registering node. This script is run on the registration server, not the registering node. The following variables are available for use: node, hostname, remoteHost, remoteAddress, and log. You can get the node group id by calling node.getNodeGroupId()

Default:
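
As a minimal illustration (a hypothetical sketch that uses only the variables listed above), the node id could be derived from the node group and host name:

node.id.creator.script=node.getNodeGroupId() + "-" + hostname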

node.offline

Set the node to offline mode so that outgoing and incoming data are written to and read from a local directory instead of being sent over the network.

Default: false

node.offline.archive.dir

For a node operating in offline mode, specify the local directory where incoming data files should be moved to after successfully loading them. If this parameter is empty, files are removed after loading. The $(nodeGroupId) and $(nodeId) variables are useful when running multiple engines in the same server.

Default:

node.offline.error.dir

For a node operating in offline mode, specify the local directory where incoming data files should be moved to when they encounter an error during loading. To guarantee order of data loading, this parameter should be left empty so the file is not moved. The $(nodeGroupId) and $(nodeId) variables are useful when running multiple engines in the same server.

Default:

node.offline.incoming.accept.all

Accept batch data files from any node in the incoming directory, regardless of whether or not it is considered offline.

Default: true

node.offline.incoming.dir

For a node operating in offline mode, specify the local directory where data files should be read from. The $(nodeGroupId) and $(nodeId) variables are useful when running multiple engines in the same server.

Default: tmp/$(nodeGroupId)-$(nodeId)/offline/incoming

node.offline.outgoing.dir

For a node operating in offline mode, specify the local directory where data files should be written to. The $(nodeGroupId) and $(nodeId) variables are useful when running multiple engines in the same server.

Default: tmp/$(nodeGroupId)-$(nodeId)/offline/outgoing
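
As an illustration, a node could be switched to offline mode in its engine properties file with settings like the following sketch. The incoming and outgoing directories shown are the defaults listed above; the archive directory is a hypothetical path.

node.offline=true
node.offline.incoming.dir=tmp/$(nodeGroupId)-$(nodeId)/offline/incoming
node.offline.outgoing.dir=tmp/$(nodeGroupId)-$(nodeId)/offline/outgoing
node.offline.archive.dir=/sym/offline/archive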

num.of.ack.retries

This is the number of times we will attempt to send an ACK back to the remote node when pulling and loading data.

Default: 5

offline.node.detection.period.minutes

This is the number of minutes that a node has been offline before taking action. A value of -1 (or any negative value) disables the feature.

Default: -1

offline.pull.lock.timeout.ms

The amount of time a single offline pull worker node_communication lock will timeout after.

Default: 7200000

offline.pull.thread.per.server.count

The number of threads created that will be used to read incoming offline batch data files

Default: 1

offline.push.lock.timeout.ms

The amount of time a single offline push worker node_communication lock will timeout after.

Default: 7200000

offline.push.thread.per.server.count

The number of threads created that will be used to write outgoing offline batch data files

Default: 1

oracle.sequence.noorder

On Oracle RAC, an ordered sequence for sym_data must be coordinated across RAC nodes, which has wait overhead. By setting this to true, a no-order sequence is used instead, which performs better for high throughput. Because the sequence is no longer ordered, sym_data is queried using an order by of create_time and data_id. You will need to restart after changing this parameter to get DDL applied to the sequence and sym_data.

Default: false

oracle.sequence.noorder.nextvalue.db.urls

For Oracle RAC in no-order mode, this parameter provides two methods for managing data gaps across multiple nodes in the cluster. When left blank, routing will use gv$_sequences to manage gaps. Or, use this parameter to specify a comma-separated list of database URLs to connect to during the heartbeat, which ensures each RAC node has periodic activity within its gap to prevent the gap from expiring.

Default:

oracle.template.precision

This is the precision that is used in the number template for oracle triggers

Default: *,38

oracle.transaction.view.clock.sync.threshold.ms

Requires access to gv$transaction. This is the threshold by which the clock can be off in an Oracle RAC environment. It is only applicable when oracle.use.transaction.view is set to true.

Default: 60000

oracle.use.transaction.view

Requires access to gv$transaction

Default: false

outgoing.batches.copy.to.incoming.staging

When sending an outgoing batch, copy directly from the outgoing staging to the incoming staging when both nodes are on the same server. This also requires the staging to be enabled (stream.to.file.enabled=true). The HTTP transport is still used to send a batch "retry" instruction that causes the target node to read from staging.

Default: true

outgoing.batches.max.to.select

The maximum number of unprocessed outgoing batch rows for a node that will be read into memory for the next data extraction.

Default: 50000

outgoing.batches.peek.ahead.batch.commit.size

This is the number of data events that will be batched and committed together while building a batch. Note that this only kicks in if the prospective batch size is bigger than the configured max batch size.

Default: 10

outgoing.batches.update.status.data.count

Update the outgoing batch status to QY (querying) and SE (sending) when the data event count is larger than this threshold. This can reduce overhead on small batches by avoiding status updates.

Default: 1000

outgoing.batches.update.status.millis

Update the outgoing batch status to QY (querying) and SE (sending) only when the last update to the batch is in the past by at least the specified number of milliseconds. This can reduce overhead on small batches by avoiding status updates.

Default: 10000

parameter.reload.timeout.ms

The number of milliseconds parameters will be cached by the ParameterService before they are reread from the file system and database.

Default: 600000

pull.lock.timeout.ms

The amount of time a single pull worker node_communication lock will timeout after.

Default: 7200000

pull.period.minimum.ms

This is the minimum time that is allowed between pulls of a specific node.

Default: 0

pull.thread.per.server.count

The number of threads created that will be used to pull nodes concurrently on one server in the cluster.

Default: 10

purge.extract.request.retention.minutes

This is the retention time for how long an extract request will be retained.

Default: 7200

purge.log.summary.retention.minutes

This is the retention for how long log summary messages will be retained in memory.

Default: 60

purge.registration.request.retention.minutes

This is the retention time for how long a registration request will be retained

Default: 7200

purge.retention.minutes

This is the retention for how long synchronization data will be kept in the symmetric synchronization tables. Note that data will be purged only if the purge job is enabled.

Default: 1440

purge.stats.retention.minutes

This is the retention for how long statistic data will be kept in the symmetric stats tables. Note that data will be purged only if the statistics flush job is enabled.

Default: 1440

push.lock.timeout.ms

The amount of time a single push worker node_communication lock will timeout after.

Default: 7200000

push.period.minimum.ms

This is the minimum time that is allowed between pushes to a specific node.

Default: 0

push.thread.per.server.count

The number of threads created that will be used to push to nodes concurrently on one server in the cluster.

Default: 10

redshift.append.to.copy.command

The value of this property will be appended to the end of the copy command when the redshift data loader is enabled.

Default:

redshift.bulk.load.max.bytes.before.flush

Maximum number of bytes to write to file before copying to S3 and running with COPY statement

Default: 1000000000

redshift.bulk.load.max.rows.before.flush

Maximum number of rows to write to file before copying to S3 and running with COPY statement

Default: 100000

redshift.bulk.load.s3.access.key

The AWS access key ID (aws_access_key_id) to use as credentials for uploading to S3

Default:

redshift.bulk.load.s3.bucket

The S3 bucket where bulk load files will be uploaded to before bulk loading into Redshift

Default:

redshift.bulk.load.s3.endpoint

The endpoint for the s3 bucket. If not set it will use the default endpoint.

Default:

redshift.bulk.load.s3.secret.key

The AWS secret access key (aws_secret_access_key) to use as credentials for uploading to S3

Default:

registration.number.of.attempts

This is the number of times registration will be attempted before being aborted. The default value is -1 which means an endless number of attempts. This parameter is specific to the node that is trying to register, not the node that is providing registration.

Default: -1

registration.reinitialize.enable

Indicates whether SymmetricDS should be re-initialized immediately before registration. When a client is successful with its registration, it will un-install its database objects (sym triggers, tables, and procedures), then re-install its database objects and load the configuration received from registration.

Default: false

registration.reopen.use.same.password

Indicates whether the same password should be used when registration is reopened. If set to false, a new password will be generated.

Default: true

rest.api.enable

Enables the REST API

Default: false

rest.api.heartbeat.on.pull

Enables the REST API to update the heartbeat when pulling data

Default: false

routing.collect.stats.unrouted

Enable to collect unrouted data statistics into the stat tables for graphs.

Default: false

routing.data.reader.order.by.gap.id.enabled

Use the order by clause to order sym_data when selecting data for routing. Most databases order the data naturally and might even have better performance when the order by clause is left off.

Default: true

routing.data.reader.threshold.gaps.to.use.greater.than.query

Select data to route from sym_data using a simple > start_gap_id query if the number of gaps in sym_data_gap are greater than the following number

Default: 100

routing.detect.invalid.gaps

Run checks for duplicate, invalid range, overlapping, and large gaps while processing each gap. This can be used to log information and catch problems with gap detection, but it incurs additional overhead.

Default: true

routing.flush.jdbc.batch.size

This is the maximum number of rows that will be flushed in a single JDBC batch when the routing process writes its changes to the database.

Default: 50000

routing.largest.gap.size

This is the maximum number of data that will be routed during one run. It should be a number that well exceeds the number of rows that will be in a transaction.

Default: 50000000

routing.log.stats.on.batch.error

Enable to collect routing statistics for each batch and log the statistics when a batch goes into error.

Default: false

routing.max.gap.changes

This is the maximum number of changes that can be applied to the data_gap table. If the gap detection exceeds this number of changes, it will record the minimal gaps to the table and keep the rest in memory. This setting only applies to non-clustered systems.

Default: 200

routing.max.gaps.to.qualify.in.sql

This is the number of gaps that will be included in the SQL that is used to select data from sym_data. If there are more gaps than this number, then the last gap in the SQL will use the end id of the last gap.

Default: 100

routing.peek.ahead.memory.threshold.percent

When reading data to route, if a lot of data has been committed in a single transaction, then the peek ahead queue size can cause out of memory errors. This setting instructs the routing reader to dispense with all non-active transactions if the peek ahead queue grows to be a certain percentage of the overall allocated heap size.

Default: 50

routing.peek.ahead.window.after.max.size

This is the maximum number of events that will be peeked at to look for additional transaction rows after the max batch size is reached. The more concurrency in your database and the longer transactions take, the bigger this value might need to be.

Default: 2000

routing.query.channels.first

Enable to query for which channels have data waiting, and then only route for those channels.

Default: true

routing.stale.dataid.gap.time.ms

This is the time that any gaps in data_ids will be considered stale and skipped.

Default: 1200000

routing.stale.gap.busy.expire.time.ms

For a busy system, how often to run checks on sym_data in order to expire gaps. Normally the routing reads all data and gap expiration can run without checking sym_data. But when the system is busy, then not all data is read, and gap expiration must query each gap from the sym_data table, which is expensive.

Default: 7200000

routing.wait.for.data.timeout.seconds

null

Default: 330

schema.version

This is a hook to give the user a mechanism to indicate the schema version that is being synchronized.

Default: ?

send.ack.keepalive.ms

After a push or pull HTTP connection has been idle for this many milliseconds, a small partial acknowledgement or partial batch is sent to keep the connection alive.

Default: 60000

smtp.allow.untrusted.cert

Whether or not to accept an untrusted certificate for SSL/TLS when connecting to the mail server.

Default: false

smtp.auth

Whether or not to authenticate with the mail server.

Default: false

smtp.from

The email address to use in the "from" header when sending email.

Default: symmetricds@localhost

smtp.host

The hostname of the mail server

Default: localhost

smtp.password

When authenticating with the mail server, the password to use.

Default:

smtp.port

The port number of the mail server

Default: 25

smtp.starttls

Whether or not to use TLS after connecting to the mail server.

Default: false

smtp.transport

The transport type to use when connecting to the mail server, either smtp or smtps.

Default: smtp

smtp.user

When authenticating with the mail server, the username to use.

Default:

start.offline.pull.job

Whether the offline pull job is enabled for this node.

Default: false

start.offline.push.job

Whether the offline push job is enabled for this node.

Default: false

start.sync.config.job

Whether the sync config job is enabled for this node. This job checks that the configuration version matches the software version, otherwise it will pull the latest configuration from the registration server.

Default: true

stream.to.file.enabled

Save data to the file system before transporting it to the client or loading it to the database if the number of bytes is past a certain threshold. This allows for better compression and better use of database and network resources. Statistics in the batch tables will be more accurate if this is set to true because each timed operation is independent of the others.

Default: true

stream.to.file.min.ttl.ms

If stream.to.file.enabled is false and staging is purged based on the database, then this is the minimum amount of time a staging file will be retained after it is purged from the database

Default: 1800000

stream.to.file.purge.on.ttl.enabled

When this is set to false, then batches in the staging area will only be purged after they have been purged from the database. If this is set to true, then batches will be purged based on the stream.to.file.ttl.ms setting.

Default: false

stream.to.file.threshold.bytes

If stream.to.file.enabled is true, then the threshold number of bytes at which a file will be written is controlled by this property. Note that for a synchronization the entire payload of the synchronization will be buffered in memory up to this number (at which point it will be written and continue to stream to disk.)

Default: 0

stream.to.file.ttl.ms

If stream.to.file.enabled is true, then this is how long a file will be retained in the staging directory after it has been marked as done.

Default: 3600000

sync.triggers.thread.count.per.server

Number of threads to use for creating triggers and removing old ones.

Default: 3

time.between.ack.retries.ms

This is the amount of time to wait between trying to send an ACK back to the remote node when pulling and loading data.

Default: 5000

transport.max.bytes.to.sync

This is the maximum number of bytes to synchronize in one connection.

Default: 1048576

transport.max.error.millis

Network errors will be logged at INFO level since they are retried. After network errors have continued in succession for this many milliseconds, the logging switches to WARN level.

Default: 300000

trigger.capture.ddl.changes

Feature to install a DDL trigger to capture any schema changes, including tables, views, triggers, functions, and stored procedures, which are synced to all nodes on configured group links. Supported on MS SQL-Server only.

Default: false

trigger.create.before.initial.load.enabled

Disable this property to prevent table triggers from being created before initial load has completed.

Default: true

trigger.update.capture.changed.data.only.enabled

Enable this property to force a compare of old and new data in triggers. If old=new, then don’t record the change in the data capture table. This is currently supported by the following dialects: mysql, oracle, db2, postgres, sql server

Default: false

web.compression.disabled

Disable compression from occurring on Servlet communication. This property only affects the outbound HTTP traffic streamed by the PullServlet and PushServlet.

Default: false

B.3. Server Configuration

Server configuration is read from conf/symmetric-server.conf for settings needed by the server before the parameter system has been initialized.

host.bind.name

Specify the hostname/IP address to bind to. (Default 0.0.0.0 will bind to all interfaces.)

Default: 0.0.0.0

http.enable

Enable synchronization over HTTP.

Default: true

http.port

Port number for synchronization over HTTP.

Default: 31415

https.allow.self.signed.certs

Use a trust manager that allows self-signed server SSL certificates.

Default: true

https.enable

Enable synchronization over HTTPS (HTTP over SSL).

Default: false

https.port

Port number for synchronization over HTTPS (HTTP over SSL).

Default: 31417

https.verified.server.names

List host names that are allowed for server SSL certificates.

Default: all

jmx.http.enable

Enable Java Management Extensions (JMX) web console.

Default: true

jmx.http.port

Port number for Java Management Extensions (JMX) web console.

Default: 31416
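
For example, a conf/symmetric-server.conf that disables plain HTTP and enables HTTPS on the default port might look like the following sketch (values shown are illustrative):

http.enable=false
https.enable=true
https.port=31417
host.bind.name=0.0.0.0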

Appendix C: Database Notes

This section describes specific settings and notes for using each supported database platform.

C.1. Compatibility

Each database management system has its own characteristics, which results in different levels of feature coverage in SymmetricDS. The following table shows which features are available by database.

Table 65. Support by Database
Database | Versions | Transaction Identifier | Data Capture | Conditional Sync | Update Loop Prevention | BLOB Sync | CLOB Sync
DB2 | 9.5 | N | Y | Y | Y | Y | Y
DB2 | 10,11 | Y | Y | Y | Y | Y | Y
DB2 for IBM i | 6 | N | Y | Y | N | Y | Y
DB2 for IBM z/OS | 10 | N | Y | Y | N | Y | Y
Derby | 10.3.2.1 | Y | Y | Y | Y | Y | Y
Firebird | 2.0 | Y | Y | Y | Y | Y | Y
Greenplum | 8.2.15 and above | N | N | N | Y | N | N
H2 | 1.x | Y | Y | Y | Y | Y | Y
HSQLDB | 1.8 | Y | Y | Y | Y | Y | Y
HSQLDB | 2.0 | N | Y | Y | Y | Y | Y
Informix | 11 | N | Y | Y | Y | N | N
Interbase | 9.0 | N | Y | Y | Y | Y | Y
MySQL | 5.0.2 and above | Y | Y | Y | Y | Y | Y
MariaDB | 5.1 and above | Y | Y | Y | Y | Y | Y
Oracle | 10g and above | Y | Y | Y | Y | Y | Y
PostgreSQL | 8.2.5 and above | Y (8.3 and above only) | Y | Y | Y | Y | Y
Redshift | 1.0 | N | N | N | Y | N | N
SQL Anywhere | 9 | Y | Y | Y | Y | Y | Y
SQL Server | 2005 and above | Y | Y | Y | Y | Y | Y
SQL Server Azure | Tested on 11.00.2065 | Y | Y | Y | Y | Y | N
SQLite | 3.x | N | Y | Y | Y | Y | Y
Sybase ASE | 12.5 | Y | Y | Y | Y | Y | Y
Tibero | 6 and above | Y | Y | Y | Y | Y | Y

Transaction Identifier

A transaction identifier is recorded in the SYM_DATA table along with changes, which allows changes in the same transaction to be grouped together for commit within a batch.

Data Capture

Changes to tables can be captured using database triggers.

Conditional Sync

Conditions can be specified on SYM_TRIGGER, which are compiled into the trigger to decide if a change should be captured.

Update Loop Prevention

The remote node is recorded on data that is captured, so the system can prevent the changes from being sent back to the same node.

BLOB Sync

Binary large object data can be captured or streamed from the database.

CLOB Sync

Character large object data can be captured or streamed from the database.

C.2. Catalog and Schema

A relational database may be divided into catalogs that contain sub-databases called schemas, which contain tables. Each database management system can implement the concepts of catalog and schema differently or not at all. When locating a table, SymmetricDS uses the default catalog and schema unless the user specifies one.

Table 66. Catalog and Schema Support by Database
Database | Version | Catalog Support | Catalog Default | Schema Support | Schema Default
DB2 | | N | | Y | values current schema
Derby | | N | | Y | values current schema
Firebird | | N | | N |
Greenplum | | N | | Y | select current_schema()
H2 | | Y | select database() | Y | select schema()
HSQLDB | 1.0 | N | | N |
HSQLDB | 2.0 | Y | select value from information_schema.system_sessioninfo where key = 'CURRENT SCHEMA' | Y | select value from information_schema.system_sessioninfo where key = 'CURRENT SCHEMA'
Informix | | N | | Y | select trim(user) from sysmaster:sysdual
Interbase | | N | | N |
MySQL | | Y | select database() | N |
MariaDB | | Y | select database() | N |
Oracle | | N | | Y | select sys_context('USERENV', 'CURRENT_SCHEMA') from dual
PostgreSQL | | N | | Y | select current_schema()
SQL Anywhere | | Y | select db_name() | Y | select user_name()
SQL Server | 2000 | Y | select db_name() | Y | select 'dbo'
SQL Server | 2005+ | Y | select db_name() | Y | select schema_name()
SQL Server | | Y | select db_name() | Y | select schema_name()
SQL Server Azure | | Y | select db_name() | Y | select schema_name()
SQLite | | N | | N |
Sybase ASE | | Y | select db_name() | Y | select user_name()
Redshift | | N | | Y | select current_schema()
Tibero | | N | | Y | select sys_context('USERENV', 'CURRENT_SCHEMA') from dual

C.3. Apache Ignite

Since SymmetricDS is trigger based and there are no triggers in Apache Ignite, data can only be loaded to an Apache Ignite instance. The runtime SymmetricDS tables will also need to be installed in a full relational database to support integration with Apache Ignite.

The following steps explain how to configure a SymmetricDS instance using Apache Ignite as a destination node:

  • Configure and start an Apache Ignite cluster.

  • Copy the Apache Ignite JDBC driver (ignite-core-VERSION.jar) to the "lib" directory of the SymmetricDS installation.

  • Start SymmetricDS and configure a master node with the desired source database.

  • Configure the desired node groups, group links, and routers.

  • Create a target node and database that will contain the SymmetricDS runtime tables for the Apache Ignite instance.

The simplest solution to support Ignite is to add a new node (see Add Node) that is connected to an H2 database to store all the SYM_* runtime tables.
  • Configure the channels that will send data to the target node and the corresponding reload channel. Set the Data Loader Type to "jdbc" on both of these channels.

  • Stop your SymmetricDS instance and edit the .properties file for the target node in the engines directory of the SymmetricDS installation.

  • Set the following properties in the engine file:

jdbc.db.url=jdbc:ignite:thin://localhost
jdbc.db.driver=org.apache.ignite.IgniteJdbcThinDriver
jdbc.db.user=
jdbc.db.password=
jdbc.create.table.not.null.columns.supported=false
  • Update the jdbc url, username, and password to the desired Apache Ignite instance.

  • Restart SymmetricDS.

  • Create Table Triggers and Table Routers for the desired source tables to sync.

Keep in mind that SymmetricDS currently only supports syncing to the "PUBLIC" schema of an Apache Ignite instance.
  • (Optional) Perform an initial load from the source to the target node and/or send the table definitions to the Apache Ignite instance.

C.4. DB2

The IBM DB2 Dialect uses global variables to enable and disable node and trigger synchronization. These variables are created automatically during the first startup. The DB2 JDBC driver should be placed in the "lib" folder.

Currently, the DB2 Dialect for SymmetricDS does not provide support for transactional synchronization. Large objects (LOB) are supported, but are limited to 16,336 bytes in size. The current features in the DB2 Dialect have been tested using DB2 9.5 on Linux and Windows operating systems.

There is currently a bug with the retrieval of auto increment columns with the DB2 9.5 JDBC drivers that causes some of the SymmetricDS configuration tables to be rebuilt when auto.config.database=true. The DB2 9.7 JDBC drivers seem to have fixed the issue. They may be used with the 9.5 database.

A system temporary tablespace with too small of a page size may cause the following trigger build errors:

SQL1424N Too many references to transition variables and transition table
columns or the row length for these references is too long. Reason
code="2". LINE NUMBER=1. SQLSTATE=54040

Simply create a system temporary tablespace that has a bigger page size. A page size of 8k will probably suffice.

CREATE BUFFERPOOL tmp_bp PAGESIZE 8k;

CREATE SYSTEM TEMPORARY TABLESPACE tmp_tbsp
     PAGESIZE 8K
     MANAGED BY SYSTEM
     USING ('/home/db2inst1/tmp_tbsp')
         BUFFERPOOL tmp_bp
Table 67. Supported Data Types
Data Type | Supported?
Char, VarChar, Long VarChar | Yes
Graphic, VarGraphic, Long VarGraphic | Yes
SmallInt, Integer, BigInt | Yes
Double | Yes
Decimal | Yes
Date, Time, TimeStamp | Yes
Blob, Clob, DBClob | Yes
DecFloat | No
Binary, VarBinary | No

By default DB2 will not capture the transaction id associated with the captured data. This can be turned on with the following parameter.

db2.capture.transaction.id=true

C.5. DB2 for IBM i

The DB2 for IBM i dialect can connect to a database on IBM iSeries (AS/400) machines. It was tested with the jt400 JDBC driver, which is already included in the SymmetricDS download. Here is an example JDBC URL:

jdbc:as400://hostname/myschema

The "libraries" parameter may be used in some cases to resolve unqualified object names:

jdbc:as400://hostname/;libraries=myschema
The tables created by SymmetricDS must have journaling enabled for commitment control.

C.5.1. Auto Journaling

The SymmetricDS library will be automatically journaled if it is created using the CREATE SCHEMA or CREATE COLLECTION SQL commands.

Otherwise, journaling can be enabled for new tables automatically by creating a default journal named QSQJRN in the library. The following steps add automatic journaling to the "sym" library (change it to your library) using the OS/400 command line:

  • Create the journal receiver object:

CRTJRNRCV JRNRCV(sym/symjrnrcv)
  • Create the journal object:

CRTJRN JRN(sym/QSQJRN) JRNRCV(sym/symjrnrcv)

C.5.2. Manual Journaling

Using automatic journaling for the SymmetricDS library is the preferred method, but journaling can also be enabled for each table manually. After starting SymmetricDS for the first time, it will connect to the database and create the required tables. Then it will log an error message that journaling needs to be enabled for its tables. The following steps add journaling to the "sym" library (change it to your library) using the OS/400 command line:

  • Create a journal receiver object:

CRTJRNRCV JRNRCV(sym/symjrnrcv)
  • Create a journal object:

CRTJRN JRN(sym/symjrn) JRNRCV(sym/symjrnrcv)
  • Start journaling:

STRJRNPF FILE(sym/SYM_C00001) JRN(sym/symjrn)

This step needs to be repeated for each physical file (table) created by SymmetricDS. A single command can be run for all tables at once, like this:

CALL QCMD
<hit F11 for more lines>

STRJRNPF FILE(sym/SYM_C00001 sym/SYM_C00002 sym/SYM_C00003 sym/SYM_C00004 sym/SYM_C00005 sym/SYM_C00006 sym/SYM_D00001 sym/SYM_D00002 sym/SYM_DATA sym/SYM_E00001 sym/SYM_E00002 sym/SYM_F00001 sym/SYM_F00002 sym/SYM_F00003 sym/SYM_F00004 sym/SYM_G00001 sym/SYM_G00002 sym/SYM_I00005 sym/SYM_I00008 sym/SYM_L00001 sym/SYM_LOCK sym/SYM_M00001 sym/SYM_M00002 sym/SYM_N00001 sym/SYM_N00002 sym/SYM_N00003 sym/SYM_N00004 sym/SYM_N00005 sym/SYM_N00006 sym/SYM_N00007 sym/SYM_N00008 sym/SYM_N00009 sym/SYM_N00010 sym/SYM_N00011 sym/SYM_N00012 sym/SYM_NODE sym/SYM_O00001 sym/SYM_P00001 sym/SYM_R00001 sym/SYM_R00002 sym/SYM_ROUTER sym/SYM_S00001 sym/SYM_T00001 sym/SYM_T00002 sym/SYM_T00003 sym/SYM_T00004 sym/SYM_T00005 sym/SYM_T00006 sym/SYM_T00007) JRN(sym/symjrn)
Table 68. Supported Data Types
Data Type | Supported?
Char, VarChar, Long VarChar | Yes
Graphic, VarGraphic, Long VarGraphic | Yes
SmallInt, Integer, BigInt | Yes
Double | Yes
Decimal | Yes
Date, Time, TimeStamp | Yes
Blob, Clob, DBClob | Yes
DecFloat | No
Binary, VarBinary | No

C.6. Derby

The Apache Derby database can be run as an embedded database that is accessed by an application or a standalone server that can be accessed from the network. This dialect implementation creates database triggers that make method calls into Java classes. This means that the supporting JAR files need to be in the classpath when running Derby as a standalone database, which includes symmetric-ds.jar and commons-lang.jar.

C.7. Firebird

The Firebird Dialect may require the installation of a User Defined Function (UDF) library in order to provide functionality needed by the database triggers. SymmetricDS includes the required UDF library, called SYM_UDF, in both source form (as a C program) and as pre-compiled libraries for both Windows and Linux. For Firebird 2.0 and earlier, the UDF is needed for capturing character and BLOB types, so the dialect will not allow startup if the UDF is missing. For Firebird 2.1 and later, the UDF is only needed for capturing BLOB types, so installation may not be necessary and the dialect does not check for it.

The SYM_UDF library is copied into the UDF folder within the Firebird installation directory. For Linux users:

cp databases/firebird/sym_udf.so /opt/firebird/UDF

For Windows users:

copy databases\firebird\sym_udf.dll C:\Program Files\Firebird\Firebird_2_0\UDF

The following limitations currently exist for this dialect:

  1. The outgoing batch does not honor the channel size, and all outstanding data events are included in a batch.

  2. Syncing of Binary Large Object (BLOB) is limited to 16KB per column.

  3. Syncing of character data is limited to 32KB per column. The overall row size of a resultset cannot exceed 64KB. For change capture, the row_data and old_data are limited to 10KB and the pk_data is limited to 500 bytes for performance reasons. If you get the error of "arithmetic exception, numeric overflow, or string truncation" during extraction of a batch, set the contains_big_lob to true for the channel (see the example below).
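
For example, assuming a hypothetical channel id, the flag could be set directly on the channel table:

update sym_channel set contains_big_lob = 1 where channel_id = 'sale_transaction';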

Firebird 3 is supported, however legacy authentication must be enabled in order to connect. Please refer to the Firebird 3.0 documentation for instructions on enabling this feature.

Table 69. Supported Data Types
Data Type | Supported?
SmallInt | Yes
Integer | Yes
BigInt | Yes
Char | Yes
VarChar | Yes
Float | Yes
Decimal | Yes
Numeric | Yes
Double Precision | Yes
Date | Yes
Time | Yes
TimeStamp | Yes
Blob | No

C.8. Greenplum

Greenplum is a data warehouse based on PostgreSQL. It is supported as a target platform in SymmetricDS.

SymmetricDS has bulk loading capability available for Greenplum. SymmetricDS specifies data loader types on a channel by channel basis. To utilize Greenplum Bulk loading versus straight JDBC insert, specify the Postgres Bulk Loader ("postgres_bulk") in the data_loader_type column of sym_channel.
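
For example, assuming a hypothetical channel id, the bulk loader could be selected with an update to the channel table:

update sym_channel set data_loader_type = 'postgres_bulk' where channel_id = 'sale_transaction';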

C.9. H2

The H2 database allows only Java-based triggers. Therefore the H2 dialect requires that the SymmetricDS jar file be in the database’s classpath.

Table 70. Supported Data Types
Data Type | Supported?
Int, TinyInt, SmallInt, BigInt | Yes
Boolean | Yes
Decimal | Yes
Double, Real | Yes
Time, Date, Timestamp | Yes
Binary, Blob | Yes

C.10. HSQLDB

HSQLDB was implemented with the intention that the database be run embedded in the same JVM process as SymmetricDS. Instead of dynamically generating static SQL-based triggers like the other databases, HSQLDB triggers are Java classes that re-use existing SymmetricDS services to read the configuration and insert data events accordingly.

The transaction identifier support is based on SQL events that happen in a 'window' of time. The trigger(s) track when the last trigger fired. If a trigger fired within X milliseconds of the previous firing, then the current event gets the same transaction identifier as the last. If the time window has passed, then a new transaction identifier is generated.

C.11. Informix

The Informix Dialect was tested against Informix Dynamic Server 11.50, but older versions may also work. You need to download the Informix JDBC Driver (from the IBM Download Site) and put the ifxjdbc.jar and ifxlang.jar files in the SymmetricDS lib folder.

SymmetricDS requires the db space to have at least a 4k page size.

Make sure your database has logging enabled, which enables transaction support. Enable logging when creating the database, like this:

CREATE DATABASE MYDB WITH LOG;

Or enable logging on an existing database, like this:

ondblog mydb unbuf log
ontape -s -L 0

The following features are not yet implemented:

  1. Syncing of Binary and Character Large Objects (LOB) is disabled.

  2. There is no transaction ID recorded on data captured, so it is possible for data to be committed within different transactions on the target database. If transaction synchronization is required, either specify a custom transaction ID or configure the synchronization so data is always sent in a single batch. A custom transaction ID can be specified with the tx_id_expression on TRIGGER. The batch size is controlled with the max_batch_size on CHANNEL. The pull and push jobs have runtime properties to control their interval.

C.12. Interbase

The Interbase Dialect requires the installation of a User Defined Function (UDF) library in order to provide functionality needed by the database triggers. SymmetricDS includes the required UDF library, called SYM_UDF, in both source form (as a C program) and as pre-compiled libraries for both Windows and Linux. The SYM_UDF library is copied into the UDF folder within the Interbase installation directory.

For Linux users:

cp databases/interbase/sym_udf.so /opt/interbase/UDF

For Windows users:

copy databases\interbase\sym_udf.dll C:\CodeGear\InterBase\UDF

The Interbase dialect currently has the following limitations:

  1. Data capture is limited to 4 KB per row, including large objects (LOB).

  2. There is no transaction ID recorded on data captured. Either specify a tx_id_expression on the TRIGGER table, or set a max_batch_size on the CHANNEL table that will accommodate your transactional data.

C.13. MariaDB

See MySQL notes. You can use either the MySQL or MariaDB driver for this dialect.

Table 71. Supported Data Types
Data Type | Supported?
TinyInt, SmallInt, MediumInt, Int, BigInt | Yes
Decimal, Numeric | Yes
Float, Double | Yes
Bit | Yes
Char, Varchar | Yes
Binary, VarBinary | Yes
TinyBlob | No
Blob, MediumBlob, Longblob | Yes
TinyText, Text, MediumText, LongText | Yes
Enum | No
Set | No
Date, Time, DateTime, TimeStamp, Year | Yes
Point, LineString, Polygon, MultiPoint, MultiLinestring, MultiPolygon, GeometryCollection, Geometry | No

C.14. MySQL

MySQL supports several storage engines for different table types. However, SymmetricDS requires a storage engine that handles transactions. The recommended storage engine is InnoDB, which is included by default in MySQL 5.0 distributions. Either select the InnoDB engine during installation or modify your server configuration. To make InnoDB the default storage engine, modify your MySQL server configuration file (my.ini on Windows, my.cnf on Unix):

default-storage_engine = innodb

Alternatively, you can convert tables to the InnoDB storage engine with the following command:

alter table t engine = innodb;

On MySQL 5.0, the SymmetricDS user needs the SUPER privilege in order to create triggers.

grant super on *.* to symmetric;

On MySQL 5.1, the SymmetricDS user needs the TRIGGER and CREATE ROUTINE privileges in order to create triggers and functions.

grant trigger on *.* to symmetric;
grant create routine on *.* to symmetric;

MySQL allows '0000-00-00 00:00:00' to be entered as a value for datetime and timestamp columns. JDBC cannot deal with a date value with a year of 0. To work around this, SymmetricDS can be configured to treat date and time columns as varchar columns for data capture and data load. To enable this feature, set the db.treat.date.time.as.varchar.enabled property to true.
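
For example, in the engine properties file:

db.treat.date.time.as.varchar.enabled=true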

If you are using UTF-8 encoding in the database, you might consider using the characterEncoding parameter in the JDBC URL.

jdbc:mysql://hostname/databasename?tinyInt1isBit=false&characterEncoding=utf8
Table 72. Supported Data Types
Data Type | Supported?
TinyInt, SmallInt, Int, MediumInt, BigInt | Yes
Decimal, Numeric | Yes
Float, Double | Yes
Bit | Yes
Date, DateTime, TimeStamp, Time, Year | Yes
Char, Varchar | Yes
Binary, VarBinary | Yes
TinyBlob, Blob, MediumBlob, LongBlob | Yes
TinyText, Text, MediumText, LongText | Yes
Enum | No
Set | No
Geometry, Point, LineString, Polygon, GeometryCollection, MultiPoint, MultiLinestring, MultiPolygon | No

C.15. MongoDB

Since SymmetricDS is trigger based and there are no triggers in MongoDB, data can only be synchronized to a MongoDB instance. The runtime SymmetricDS tables will also need to be installed in a full relational database to support integration with MongoDB.

The simplest solution to support MongoDB is to add a new node (see Add Node) that is connected to an H2 database to store all the SYM_* runtime tables.

The MongoDB data loader maps relational database rows to MongoDB documents in collections. To use the preconfigured MongoDB data loader, you set the data_loader_type to MongoDB on a channel.

Tables that should be synchronized to MongoDB should be configured to use this channel.
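
For example, assuming a hypothetical channel id and assuming the loader type value is the lowercase name "mongodb", the channel could be configured like this:

update sym_channel set data_loader_type = 'mongodb' where channel_id = 'mongodb_channel';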

In order to point it to a MongoDB instance, set the following properties in the engine properties file.

mongodb.username=xxxx
mongodb.password=xxxx
mongodb.host=xxxx
mongodb.port=xxxx
mongodb.default.databasename=default

By default, the catalog or schema passed by SymmetricDS will be used for the MongoDB database name. The table passed by SymmetricDS will be used as the MongoDB collection name. If the catalog or schema are not set, the default database name property is used as the database name.

The _id of the MongoDB document will be the primary key of the database record. If the table has a composite primary key, then the _id will be an embedded document that has name value pairs of the composite key. The body of the document will be name value pairs of the table column name and table row value.

SymmetricDS uses the MongoDB Java Driver to upsert documents.

SymmetricDS transforms can be used to transform the data. If a complex mapping is required that is not supported by transforms, then the IDBObjectMapper can be implemented and a new MongoDataLoaderFactory can be wired up as an extension point.

C.16. Oracle

This section describes Oracle specific SymmetricDS details.

C.16.1. Database Permissions

The SymmetricDS database user generally needs privileges for connecting and creating tables (including indexes), triggers, sequences, and procedures (including packages and functions). The following is an example of the needed grant statements:

GRANT CONNECT TO SYMMETRIC;
GRANT RESOURCE TO SYMMETRIC;
GRANT CREATE ANY TRIGGER TO SYMMETRIC;
GRANT EXECUTE ON UTL_RAW TO SYMMETRIC;

C.16.2. Known Limitations

  • The LONG data type is not supported. LONG columns cannot be accessed from triggers

  • The global precision of numeric columns is controlled by the oracle.template.precision parameter. It defaults to a precision of 30,10

  • With the default settings a database row cannot exceed 4k. If the error 'ORA-01489: result of string concatenation is too long' occurs then set use_capture_lobs to 1 in the TRIGGER table and contains_big_lob to 1 on the assigned CHANNEL. Triggers will need to be synchronized. By enabling use_capture_lobs, the concatenated varchar string is cast to a clob which allows a length of more than 4k. By enabling contains_big_lob, the extraction of sym_data is cast to a clob which prevents truncation at 4k. There is overhead for both of these settings

  • When multiple triggers are defined on the same table, then the order in which the triggers occur appears to be arbitrary

C.16.3. Bulk Loader

SymmetricDS has bulk loading capability available for Oracle. SymmetricDS specifies data loader types on a channel by channel basis. To utilize Oracle Bulk loading versus straight JDBC insert, specify the Oracle Bulk Loader ("oracle_bulk") in the data_loader_type column of sym_channel.

The bulk loader only supports simple data types. The bulk loader does not support tables that contain lobs.

C.16.4. Optional - Partitioning

Partitioning the DATA table by channel can help insert, routing and extraction performance on concurrent, high throughput systems. TRIGGERs should be organized to put data that is expected to be inserted concurrently on separate CHANNELs. The following is an example of partitioning. Note that both the table and the index should be partitioned. The default value allows for more channels to be added without having to modify the partitions.

CREATE TABLE SYM_DATA
(
    data_id INTEGER NOT NULL ,
    table_name VARCHAR2(50) NOT NULL,
    event_type CHAR(1) NOT NULL,
    row_data CLOB,
    pk_data CLOB,
    old_data CLOB,
    trigger_hist_id INTEGER NOT NULL,
    channel_id VARCHAR2(20),
    transaction_id VARCHAR2(1000),
    source_node_id VARCHAR2(50),
    external_data VARCHAR2(50),
    create_time TIMESTAMP
) PARTITION BY LIST (channel_id) (
PARTITION P_CONFIG VALUES ('config'),
PARTITION P_CHANNEL_ONE VALUES ('channel_one'),
PARTITION P_CHANNEL_TWO VALUES ('channel_two'),
...
PARTITION P_CHANNEL_N VALUES ('channel_n'),
PARTITION P_DEFAULT VALUES (DEFAULT));
CREATE UNIQUE INDEX IDX_D_CHANNEL_ID ON SYM_DATA (DATA_ID, CHANNEL_ID)  LOCAL
(
 PARTITION I_CONFIG,
 PARTITION I_CHANNEL_ONE,
 PARTITION I_CHANNEL_TWO,
 ...
 PARTITION I_CHANNEL_N,
 PARTITION I_DEFAULT
);

C.16.5. Supported Data Types

Table 73. Supported Data Types
Data Type | Supported?
Char | Yes
NChar | Yes
VarChar2 | Yes
NVarChar2 | Yes
Long | No
Number | Yes
Binary_Float | Yes
Binary_Double | Yes
Date | Yes
Timestamp | Yes
Timestamp With Time Zone | Yes
Timestamp With Local Time Zone | Yes
Interval Year to Month | Yes
Interval Day to Second | Yes
Raw | Yes
Long Raw | No
RowID | Yes
URowID | No
Clob | Yes
NClob | Yes
Blob | Yes
BFile | No

C.17. PostgreSQL

SymmetricDS has bulk loading capability available for Postgres. SymmetricDS specifies data loader types on a channel by channel basis. To utilize Postgres Bulk loading versus straight JDBC insert, specify the Postgres Bulk Loader ("postgres_bulk") in the data_loader_type column of sym_channel.

Starting with PostgreSQL 8.3, SymmetricDS supports the transaction identifier. Binary Large Object (BLOB) replication is supported for both byte array (BYTEA) and object ID (OID) data types.

In order to function properly, SymmetricDS needs to use session variables. Before PostgreSQL 9.2, session variables are enabled using a custom variable class. Add the following line to the postgresql.conf file of PostgreSQL server on versions before 9.2:

custom_variable_classes = 'symmetric'

This setting is required on versions before 9.2, and SymmetricDS will log an error and exit if it cannot set session variables. PostgreSQL versions 9.2 or later do not require this setting.

Before database triggers can be created by SymmetricDS in PostgreSQL, the plpgsql language handler must be installed on the database. If plpgsql is not already installed, the following statements can be run by the administrator on the database:

CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler AS
    '$libdir/plpgsql' LANGUAGE C;

CREATE FUNCTION plpgsql_validator(oid) RETURNS void AS
    '$libdir/plpgsql' LANGUAGE C;

CREATE TRUSTED PROCEDURAL LANGUAGE plpgsql
    HANDLER plpgsql_call_handler
    VALIDATOR plpgsql_validator;

If you want SymmetricDS to install into a schema other than public you can alter the database user to set the default schema.

alter user {user name} set search_path to {schema name};

When multiple users are involved, the SymmetricDS user will need the following privileges:

GRANT USAGE ON SCHEMA {schema name} TO {user name};
GRANT CREATE ON SCHEMA {schema name} TO {user name};
Table 74. Supported Data Types
Data Type | Supported?
SmallInt, Integer, BigInt | Yes
Decimal, Numeric | Yes
Real, Double Precision | Yes
Serial, BigSerial | Yes
Char, Varchar, Text | Yes
Money | No
Timestamp, Date, Time, Interval | Yes
Enum | No
Point, Lseg, Box, Path, Polygon, Circle | Yes

C.18. SQL Anywhere

SQL Anywhere and Sybase Adaptive Server Anywhere (ASA) were tested using the jConnect JDBC driver. The jConnect JDBC driver should be placed in the "lib" folder.

C.19. SQL Server

Microsoft SQL Server was tested using the jTDS JDBC driver.

SQL Server allows the update of primary key fields via the SQL update statement. If your application allows updating of the primary key field(s) for a table, and you want those updates synchronized, you will need to set the "Handle Key Updates" field on the trigger record for that specific table. The default for Handle Key Updates is false.

SymmetricDS expects a row count to be returned for data manipulation statements, which is the default setting for most servers. However, if the NOCOUNT option is ON for SQL-Server, SymmetricDS will not behave correctly. The NOCOUNT setting can be checked with "select case when (512 & @@OPTIONS) = 512 then 'on' else 'off' end". If you’re unable to change NOCOUNT for the server, the "db.init.sql" parameter can be set to "SET NOCOUNT OFF" in the engine properties file.
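
For example, in the engine properties file:

db.init.sql=SET NOCOUNT OFF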

Connections are pooled and expected to be in the database context like a new connection, so avoid using the "USE database" Transact-SQL statement in extension code.

If SQL Server is configured with a default collation that does NOT support Unicode, we have experienced bad performance for update and delete statements when a table has character-based primary keys. This is because statements are prepared for a Unicode type and, as a result, the indexes are not used. You can turn this functionality off in jTDS by appending the following to your db.url: ;sendStringParametersAsUnicode=false
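
For example, a db.url using the jTDS driver with this property appended might look like the following sketch (host name, port, and database name are placeholders):

db.url=jdbc:jtds:sqlserver://myhost:1433/mydb;sendStringParametersAsUnicode=false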

C.19.1. SQL Server Permissions

SymmetricDS can be configured to work with a variety of SQL Server configurations. It is possible to install SymmetricDS in the same database and schema with the same user account your application runs. However, you can also set it up to run in its own database with a designated user. Whichever configuration you choose, below are the permissions required for SymmetricDS to run.

Scope | Symmetric User | Application User
Symmetric Database | CREATE TABLE, CREATE FUNCTION, REFERENCES | INSERT, EXECUTE
Symmetric Schema | ALTER, SELECT, INSERT, UPDATE, DELETE |
Application Database | SELECT, ALTER |

Example 25. Example Script for a designated SymmetricDS database and user account

Replace the following variables with your desired values.

SYM_USER

The SymmetricDS user

SYM_DATABASE

The database the SymmetricDS runtime tables will be installed in

APP_DATABASE

The application database where sync data resides

APP_USER

The application database user account that the application uses when making changes to the data.

-- SymmetricDS User

CREATE LOGIN SYM_USER
WITH PASSWORD = 'SYM_USER';
GO
use SYM_DATABASE;
GO
CREATE USER SYM_USER FOR LOGIN SYM_USER;
GO
GRANT CREATE TABLE ON DATABASE::SYM_DATABASE to SYM_USER;
GRANT CREATE FUNCTION ON DATABASE::SYM_DATABASE to SYM_USER;
GRANT REFERENCES ON DATABASE::SYM_DATABASE to SYM_USER;
GRANT ALTER, SELECT, INSERT, DELETE, UPDATE ON SCHEMA::dbo TO SYM_USER;
GO

use APP_DATABASE;
CREATE USER SYM_USER FOR LOGIN SYM_USER
GRANT SELECT, ALTER ON DATABASE::APP_DATABASE to SYM_USER;

-- Application User

CREATE LOGIN APP_USER
WITH PASSWORD = 'APP_USER';
GO
use APP_DATABASE;
GO
CREATE USER APP_USER FOR LOGIN APP_USER
GO
GRANT SELECT, INSERT, DELETE, UPDATE ON SCHEMA::dbo TO APP_USER;
GO
use SYM_DATABASE;
CREATE USER APP_USER FOR LOGIN APP_USER
GRANT INSERT, EXECUTE ON DATABASE::SYM_DATABASE to APP_USER;
Table 75. Supported Data Types
Data Type | Supported?
BigInt, Int, SmallInt, TinyInt | Yes
Decimal, Numeric | Yes
Bit | Yes
Money, SmallMoney | Yes
Float, Real | Yes
Date, DateTime, Datetime2, SmallDatetime, Time | Yes
Datetimeoffset | No
Char, Varchar, Text, Nchar, Nvarchar, Ntext | Yes
Binary, Varbinary | Yes
Image | Yes
Spatial Data Types | No

C.20. SQLite

For SQLite, the implementation of sync-on-incoming batch and the population of a source node id in the sym_data rows relies on the use of a context table (by default, called sym_context) to hold a boolean and node id in place of the more common methods of using temp tables (which are inaccessible from triggers) or functions (which are not available). The context table assumes there’s a single thread updating the database at any one time. If that is not the case in the future, the current implementation of sync on incoming batch will be unreliable.

Nodes using SQLite should have the jobs.synchronized.enable parameter set to true. This parameter causes the jobs and push/pull threads to all run in a synchronized fashion, which is needed in the case of SQLite.
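A minimal sketch of the relevant engine properties for a SQLite node, using the settings described in this section:

jobs.synchronized.enable=true
sync.triggers.thread.count.per.server=1
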

The SQLite dialect has the following limitations:

  • There is no transaction ID recorded on data captured. Either specify a tx_id_expression on the TRIGGER table, or set a max_batch_size on the CHANNEL table that will accommodate your transactional data.

  • Due to the single threaded access to SQLite, the following parameter should be set to true: jobs.synchronized.enable. The sync.triggers.thread.count.per.server parameter should be set to 1.

Table 76. Supported Data Types
Data Type | Supported?
Text | Yes
Numeric | Yes
Integer | Yes
Real | Yes
Blob | Yes

C.21. Sybase ASE

Sybase Adaptive Server Enterprise (ASE) was tested using the jConnect 7 JDBC driver. The jConnect 7 JDBC driver should be placed in the "lib" folder.

driver class : com.sybase.jdbc4.jdbc.SybDriver

SymmetricDS requires the "select into" database option be turned on for Sybase ASE. Run the following command with the sa account on the master database while replacing the database value with your database name.

sp_dboption 'YOUR_DATABASE', 'select into', true

SymmetricDS requires that the metadata information be installed on each database that will be used in replication. Sybase provides these metadata stored procedures in a script that is packaged with the installation.

Without the metadata stored procedures installed, the following error will be produced by SymmetricDS:

Error while reading the database metadata: JZ0SJ: Metadata accessor information was not found on this database. Please install the required tables as mentioned in the jConnect documentation.
Set the classpath to contain the jconnect jar and the classes directory
export CLASSPATH=/opt/sybase/jConnect-7_0/classes/jconn4.jar:/opt/sybase/jConnect-7_0/classes
Install appropriate script from the jconnect driver folder in your Sybase installation under the /sp directory
java  IsqlApp -U sa -P {SA_PASSWORD} -S jdbc:sybase:Tds:{HOSTNAME}:{PORT}/{DATABASE} -I sql_server{SYBASE VERSION}.sql -c go

Columns of type DATETIME are accurate to 1/300th of a second, which means that the last digit of the milliseconds portion will end with 0, 3, or 6. An incoming DATETIME synced from another database will also have its milliseconds rounded to one of these digits (0 and 1 become 0; 2, 3, and 4 become 3; 5, 6, 7, and 8 become 6; 9 becomes 10). If DATETIME is used as the primary key or as one of the columns to detect a conflict, then conflict resolution could fail unless the milliseconds are rounded in the same fashion on the source system.

On ASE, each new trigger in a table for the same operation (insert, update, or delete) overwrites the previous one. No warning message displays before the overwrite occurs. When SymmetricDS is installed and configured to synchronize a table, it will install triggers that could overwrite already existing triggers on the database. New triggers created after SymmetricDS is installed will overwrite the SymmetricDS triggers. Custom trigger text can be added to the SymmetricDS triggers by modifying CUSTOM_ON_INSERT_TEXT, CUSTOM_ON_UPDATE_TEXT, and CUSTOM_ON_DELETE_TEXT on the TRIGGER table.

SymmetricDS expects a row count to be returned for data manipulation statements, which is the default setting for most servers. However, if the NOCOUNT option is ON or the "send doneinproc tokens" setting is 0 for Sybase, SymmetricDS will not behave correctly. The NOCOUNT setting can be checked with "select case when (512 & @@OPTIONS) = 512 then 'on' else 'off' end". The DONEINPROC setting can be checked with "sp_configure 'send doneinproc tokens'". The commands "sp_configure 'send doneinproc tokens', 1" and "SET NOCOUNT OFF" will enable row counts. If you’re unable to change NOCOUNT for the server, the "db.init.sql" parameter can be set to "SET NOCOUNT OFF" in the engine properties file.

Connections are pooled and expected to be in the database context like a new connection, so avoid using the "USE database" Transact-SQL statement in extension code.

C.22. Redshift

Redshift is a managed data warehouse in the cloud from Amazon. Version 1.0 of Redshift is based on PostgreSQL 8.0, with some features modified or removed. SymmetricDS supports Redshift as a target platform where data can be loaded, but it does not support data capture. However, the initial load and reload functions are implemented, so it is possible to query rows from Redshift tables and send them to another database.

While Redshift started with PostgreSQL 8.0, there are some important differences from PostgreSQL. Redshift does not support constraints, indexes, functions, triggers, or sequences. Primary keys, foreign keys, and unique indexes can be defined on tables, but they are informational metadata that are not enforced by the system. When using the default data loader with SymmetricDS, it will enforce primary keys, either defined in the database or with the sync keys features, by checking if a row exists before attempting an insert. However, the bulk loader does not perform this check. The data types supported are smallint, integer, bigint, decimal, real, double precision, boolean, char, varchar, date, and timestamp.

A data loader named "redshift_bulk" is a bulk loader that can be set for a channel to improve loading performance. Instead of sending individual SQL statements to the database, it creates a comma separated value (CSV) file, uploads the object to Amazon S3, and uses the COPY statement to load it. The COPY command appends the new data to any existing rows in the table. If the target table has any IDENTITY columns, the EXPLICIT_IDS option is enabled to override the auto-generated values and load the incoming values. The following parameters (see Appendix B) can be set for the bulk loader (an example properties sketch follows the list):

redshift.bulk.load.max.rows.before.flush

When the maximum number of rows is reached, the flat file is sent to S3 and loaded into the database. The default is 100,000 rows.

redshift.bulk.load.max.bytes.before.flush

When the maximum number of bytes is reached, the flat file is sent to S3 and loaded into the database. The default is 1,000,000,000 bytes.

redshift.bulk.load.s3.bucket

The S3 bucket name where files are uploaded. This bucket should be created from the AWS console ahead of time.

redshift.bulk.load.s3.access.key

The AWS access key ID to use as credentials for uploading to S3 and loading from S3.

redshift.bulk.load.s3.secret.key

The AWS secret key to use as credentials for uploading to S3 and loading from S3.

redshift.bulk.load.s3.endpoint

The AWS endpoint used for uploading to S3. This is optional. You might need to specify it if you get warnings about retrying during the S3 upload.
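
A minimal configuration sketch follows, assuming a channel named "pricing" and the default sym_channel configuration table (the bucket name, credentials, and channel ID are placeholders). In the engine properties file:

redshift.bulk.load.s3.bucket=my-sync-bucket
redshift.bulk.load.s3.access.key=AKIAEXAMPLEKEY
redshift.bulk.load.s3.secret.key=exampleSecretKey
redshift.bulk.load.max.rows.before.flush=100000

The bulk loader is then selected for the channel by setting its data loader type in the configuration database:

update sym_channel set data_loader_type = 'redshift_bulk' where channel_id = 'pricing';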

To clean and organize tables after bulk changes, it is recommended to run a "vacuum" against individual tables or the entire database so that consistent query performance is maintained. Deletes and updates mark rows for deletion, but the space is not automatically reclaimed. New rows are stored in a separate unsorted region, forcing queries to sort on demand. Consider running a "vacuum" periodically during a maintenance window when there is minimal query activity that will be affected. If large batches are continually loaded from SymmetricDS, the "vacuum" command can be run after committing a batch by using a load filter (see Section 3.9) for the "batch commit" event, like this:

// Run VACUUM on each table that was written in the committed batch
for (String tablename : context.getParsedTables().keySet()) {
    engine.getSqlTemplate().update("vacuum " + tablename, new Object[] { });
}

C.23. Tibero

This section describes Tibero-specific SymmetricDS details.

C.23.1. Database Permissions

The SymmetricDS database user generally needs privileges for connecting and creating tables (including indexes), triggers, sequences, and procedures (including packages and functions). The following is an example of the needed grant statements:

GRANT CONNECT TO SYMMETRIC;
GRANT RESOURCE TO SYMMETRIC;
GRANT CREATE ANY TRIGGER TO SYMMETRIC;
GRANT EXECUTE ON UTL_RAW TO SYMMETRIC;

Appendix D: Data Format

The SymmetricDS Data Format is used to stream data from one node to another. The data format reader and writer are pluggable with an initial implementation using a format based on Comma Separated Values (CSV). Each line in the stream is a record with fields separated by commas. String fields are surrounded with double quotes. Double quotes and backslashes used in a string field are escaped with a backslash. Binary values are represented as a string with hex values in "\0xab" format. The absence of any value in the field indicates a null value. Extra spacing is ignored and lines starting with a hash are ignored.
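
For example, a hypothetical insert line (with the surrounding table, keys, and columns directives omitted) illustrates the escaping rules: the second field contains an escaped double quote, the third field is a binary value in hex form, and the empty fourth field represents a null:

insert, 5, "He said \"hello\"", "\0x0a0d", , 100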

The first field of each line gives the directive for the line. The following directives are used:

nodeid, {node_id}

Identifies which node the data is coming from. Occurs once in the CSV file.

binary, {BASE64|NONE|HEX}

Identifies the type of decoding the loader needs to use to decode binary data in the payload. This varies depending on which database is the source of the data.

channel, {channel_id}

Identifies which channel a batch belongs to. The SymmetricDS data loader expects the channel to be specified before the batch.

batch, {batch_id}

Uniquely identifies a batch. Used to track whether a batch has been loaded before. A batch of -9999 is considered a virtual batch and will be loaded, but will not be recorded in incoming_batch.

schema, {schema name}

The name of the schema that is being targeted.

catalog, {catalog name}

The name of the catalog that is being targeted.

table, {table name}

The name of the table that is being targeted.

keys, {column name…​}

Lists the column names that are used as the primary key for the table. Only needs to occur after the first occurrence of the table in the stream; subsequent occurrences of the same table can omit it.

columns, {column name…​}

Lists all the column names (including key columns) of the table. Only needs to occur after the first occurrence of the table in the stream; subsequent occurrences of the same table can omit it.

insert, {column value…​}

Insert into the table with the values that correspond with the columns.

update, {new column value…​},{old key value…​}

Update the table using the old key values to set the new column values.

old, {old column value…​}

Represents all the old values of the data. This data can be used for conflict resolution.

delete, {old key value…​}

Delete from the table using the old key values.

sql, {sql statement}

Optional notation that instructs the data loader to run the accompanying SQL statement.

bsh, {bsh script}

Optional notation that instructs the data loader to run the accompanying BeanShell snippet.

create, {xml}

Optional notation that instructs the data loader to run the accompanying DdlUtils XML table definition in order to create a database table.

commit, {batch_id}

An indicator that the batch has been transmitted and the data can be committed to the database.

Example 26. Data Format Stream
nodeid, 1001
channel, pricing
binary, BASE64
batch, 100
schema,
catalog,
table, item_selling_price
keys, price_id
columns, price_id, price, cost
insert, 55, 0.65, 0.55
schema,
catalog,
table, item
keys, item_id
columns, item_id, price_id, name
insert, 110000055, 55, "Soft Drink"
delete, 110000001
schema,
catalog,
table, item_selling_price
update, 55, 0.75, 0.65, 55
commit, 100

Appendix E: Version Numbering

The software is released with a version number based on the Apache Portable Runtime Project version guidelines. In summary, the version is denoted as three integers in the format of MAJOR.MINOR.PATCH. Major versions are incompatible at the API level, and they can include any kind of change. Minor versions are compatible with older versions at the API and binary level, and they can introduce new functions or remove old ones. Patch versions are perfectly compatible, and they are released to fix defects.

Appendix F: Upgrading

The upgrade process can either be a full upgrade of a new installation that copies in existing settings or an incremental upgrade of an existing installation that copies in new library files. A full upgrade is the cleanest method that ensures all new files are updated, while an incremental upgrade changes a minimal number of files. An incremental upgrade works best for patch releases and most minor releases. When SymmetricDS is started after a major or minor software update, it will alter its database tables with any changes needed for that release.

F.1. Full Upgrade

For major releases and clean upgrades, copy old settings into a new installation using the following steps:

  1. Stop the old SymmetricDS.

  2. Backup the old SymmetricDS folder by renaming it.

  3. Unzip the SymmetricDS distribution.

  4. Copy old files from "engines" folder. (These files contain database connection information and engine settings.)

  5. Copy old files from "conf" folder. (These files contain settings for ports, wrapper, and logging.) Check to see if any new changes need to be merged.

  6. Copy old files from "security" folder. (These files contain keys for encryption.)

  7. Restart SymmetricDS.

F.2. Incremental Upgrade

For patch and minor releases, copy and replace the library folders of the installation using the following steps:

  1. Stop the old SymmetricDS.

  2. Backup the old "lib" and "web/WEB-INF/lib" folders by renaming them.

  3. Unzip the SymmetricDS distribution to a temporary folder.

  4. Copy the new "lib" and "web/WEB-INF/lib" folders into the old installation.

  5. Restart SymmetricDS.

Most patch releases only change the JAR files named with "symmetric" in the "web/WEB-INF/lib" folder.