Components
Versatility extends to the individual product components. The Apply Engine, for example, can often be configured and tested on one platform and then implemented on another operating system with little or no modification. Apply and Replicator Engines can be migrated from a development environment through integration testing and on to production without modification, using your existing configuration management tools.
One area where complexity cannot be completely mitigated is the configuration of Change Data Capture. Each source database requires specific configuration steps to initially enable capture and to facilitate ongoing capture. The Capture Agents, however, use a common set of configuration tools and nearly identical configuration parameters. Once the initial configuration has been performed, those responsible for it will find the process easy to repeat for subsequent implementations.
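As an illustration of that commonality, the sketch below models a capture agent configuration as a single structure reused across source types. The class and field names are invented for this example and are not the product's actual configuration keys:

```python
from dataclasses import dataclass

@dataclass
class CaptureAgentConfig:
    """Common shape of a capture agent configuration, regardless of source type.

    Hypothetical field names, for illustration only.
    """
    source_type: str        # e.g. "Db2", "Oracle", "VSAM"
    source_name: str        # the datastore being captured
    publisher_port: int     # where the paired Publisher listens
    transient_storage: str  # location of the transient storage pool

# The same structure is reused across source databases; only the
# source-specific enablement (done in the database itself) differs.
db2_capture = CaptureAgentConfig("Db2", "PAYROLL", 2626, "/var/cdc/db2_pool")
oracle_capture = CaptureAgentConfig("Oracle", "HR", 2626, "/var/cdc/ora_pool")
```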
Each product component is developed on a common code base and functions seamlessly with the others across environments, providing the consistency and reliability expected of an enterprise integration tool. At the core of the product are three components:
Controller Daemons - Authenticate all connection requests and manage secure communication between Capture/Publisher Agents, Engines (often running on other platforms), and Utilities.
Data Capture Agents - The data capture layer consists of near-real-time and asynchronous capture agents designed to be high performance and best of breed for their respective datastore types. Each Capture Agent is tightly coupled to a Publisher, which supplies the second and third layers of the framework: fully configurable Transient Storage and forward Transport of the captured data (see the first sketch after this list). For more details, see Change Data Capture and the source-specific Capture references.
Engines - The multi-function Apply Engine uses a SQL-like scripting language to perform all data filtering, transformation, and augmentation required to apply the changed data to a target datastore. The Replicator Engine streamlines change data replication for selected sources and provides a streaming utility function to Kafka and HDFS, including automatic generation of the target schema and, optionally, schema evolution driven by changes to the source schema (a minimal sketch of the additive case also follows this list).
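The layering described under Data Capture Agents can be pictured roughly in Python. Everything here, the class names, the queue-backed store, and the socket forwarding, is an assumption made for illustration, not the product's implementation:

```python
import queue
import socket

class TransientStore:
    """Layer 2: buffers captured changes until the transport drains them."""
    def __init__(self, max_records: int = 10_000):
        self._q: queue.Queue = queue.Queue(maxsize=max_records)

    def put(self, record: bytes) -> None:
        self._q.put(record)

    def get(self) -> bytes:
        return self._q.get()

class Publisher:
    """Layers 2 and 3: holds captured data and forwards it to an Engine."""
    def __init__(self, store: TransientStore, engine_host: str, engine_port: int):
        self.store = store
        self.engine_addr = (engine_host, engine_port)

    def forward_one(self) -> None:
        # Drain one record from transient storage and ship it over TCP/IP.
        record = self.store.get()
        with socket.create_connection(self.engine_addr) as conn:
            conn.sendall(record)

# Layer 1: a capture agent pushes each change it reads from the source log.
store = TransientStore()
store.put(b'{"op": "INSERT", "table": "ORDERS"}')
```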
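Likewise, the Replicator Engine's optional schema evolution can be sketched in a minimal, purely additive form. The function, schemas, and column names below are hypothetical; a real replicator must also handle types, renames, and drops:

```python
def evolve_target_schema(target_schema: dict, source_schema: dict) -> dict:
    """Add any columns that appeared in the source but not the target."""
    evolved = dict(target_schema)
    for column, col_type in source_schema.items():
        if column not in evolved:
            evolved[column] = col_type   # new source column -> add to target
    return evolved

source = {"order_id": "INTEGER", "amount": "DECIMAL", "channel": "VARCHAR"}
target = {"order_id": "INTEGER", "amount": "DECIMAL"}
print(evolve_target_schema(target, source))
# {'order_id': 'INTEGER', 'amount': 'DECIMAL', 'channel': 'VARCHAR'}
```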
These components, together with highly efficient transient storage and a TCP/IP based transport that uses NaCl libraries for authentication and either TLS or NaCl encryption, provide a single high performance, enterprise-class solution.
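For a feel of the NaCl security model referred to above, the following sketch uses the PyNaCl binding (pip install pynacl) to show public-key authenticated encryption between two endpoints. It is a conceptual illustration of the primitive, not the product's transport code:

```python
from nacl.public import PrivateKey, Box

# Each endpoint (e.g. a Publisher and an Engine) holds its own key pair.
publisher_key = PrivateKey.generate()
engine_key = PrivateKey.generate()

# A Box binds one side's private key to the peer's public key, so a
# successful decrypt also authenticates the sender.
publisher_box = Box(publisher_key, engine_key.public_key)
engine_box = Box(engine_key, publisher_key.public_key)

ciphertext = publisher_box.encrypt(b'{"op": "UPDATE", "table": "ORDERS"}')
assert engine_box.decrypt(ciphertext) == b'{"op": "UPDATE", "table": "ORDERS"}'
```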
Finally, a collection of Utilities supplements the core product, providing configuration, management, and control of the Capture/Publisher Agents and Engines.