
DB2 to Hadoop JSON


Replicate DB2 changed data (CDC) for the IVP_HR database EMPLOYEE and DEPARTMENT tables into JSON files on HDFS. All changes to the source tables are replicated; no filtering is supported. The TOPIC values used here are arbitrary but were selected based on the source table names, in this example the EMPLOYEE and DEPARTMENT tables of the DB2 "IVP_HR" database. Operation of the Producer is optimized by using one worker thread. A short sketch showing how the resulting JSON files might be read back for verification follows the example.

Example

----------------------------------------------------------------------
-- Name: DB2TOKAF: Z/OS DB2 To HDFS JSON on Linux
-- Client/Project: client/project
----------------------------------------------------------------------
--       Change Log:
----------------------------------------------------------------------
-- 2019-07-01 INITIAL RELEASE using Kafka Replicator Engine
--
----------------------------------------------------------------------
--       Replicate Source/Target
----------------------------------------------------------------------
REPLICATE
   DB2 cdc://<src_host_name>:<src_sqdaemon_port>/<publisher_name>/<replicator_engine>
   TO
   JSON 'hdfs://10.0.0.14:9050/output/<prefix>_*_<suffix>.json'
   WITH 1 WORKERS
;
----------------------------------------------------------------------
--       Processing Option References
----------------------------------------------------------------------
OPTIONS
   AVRO COMPATIBLE NAMES
   STRIP TRAILING SPACES
   ROTATE SIZE 100M
   ROTATE DELAY 30
;
----------------------------------------------------------------------
--       Source References
----------------------------------------------------------------------
MAPPINGS
   SOURCE 'IVP_HR.EMPLOYEE'
          TOPIC 'EMPLOYEE'
          ALIAS 'EMP'
  ,SOURCE 'IVP_HR.DEPARTMENT'
          TOPIC 'DEPARTMENT'
          ALIAS 'DEPT'
;
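
Once the engine is running, rotated JSON output files accumulate under the hdfs:// target directory named in the REPLICATE statement. The following is a minimal verification sketch, not part of the SQData product: it assumes a Spark installation with access to the same HDFS, reads whatever rotated files match the configured '<prefix>_*_<suffix>.json' naming pattern, and lets Spark infer the record schema. The application name and the file glob are hypothetical placeholders to adjust for your environment.

# Minimal verification sketch (assumes Spark with access to the target HDFS).
# The path mirrors the hdfs:// target used in the REPLICATE statement above;
# adjust the glob to match your actual <prefix>/<suffix> file-name settings.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("verify-ivp-hr-json")   # hypothetical application name
    .getOrCreate()
)

# Read every rotated JSON file written so far under the target directory.
changes = spark.read.json("hdfs://10.0.0.14:9050/output/*.json")

# Inspect the inferred record schema and a sample of replicated changes.
changes.printSchema()
changes.show(10, truncate=False)

spark.stop()

Because the OPTIONS block rotates the output (ROTATE SIZE 100M, ROTATE DELAY 30), new files continue to appear as changes are captured, so the read can simply be re-run as replication proceeds.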