Hi Team,
We use GoldenGate to replicate data from an Oracle database to Kafka, with a one-topic-per-table topology, and I'm looking for a mechanism to replicate the data consistently across all of our sinks.

Debezium provides transaction markers that make it possible to detect the number of impacted records per table for each XID. These markers can be used to buffer related events in Spark, Flink, or Kafka Streams; once all records belonging to a transaction have arrived, they can be applied to each sink as a single atomic transaction, keeping the sinks consistent. Here is the Debezium documentation https://debezium.io/documentation/reference/stable/connectors/oracle.html#oracle-transaction-metadata and below are sample JSON-formatted transaction markers, followed by a rough sketch of the buffering approach.

Is there a mechanism to achieve this using the GoldenGate for Big Data Kafka or Kafka Connect handlers?
{
  "status": "BEGIN",
  "id": "5.6.641",
  "ts_ms": 1486500577125,
  "event_count": null,
  "data_collections": null
}
{
  "status": "END",
  "id": "5.6.641",
  "ts_ms": 1486500577691,
  "event_count": 2,
  "data_collections": [
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER",
      "event_count": 1
    },
    {
      "data_collection": "ORCLPDB1.DEBEZIUM.ORDER",
      "event_count": 1
    }
  ]
}
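
For context, here is a minimal sketch in plain Java of the buffering idea described above: hold back the per-table change events for each XID until the END marker's event_count is satisfied, then apply them together. It assumes Debezium-style markers on a transaction-metadata topic, a plain KafkaConsumer plus Jackson for JSON parsing, and made-up names (topics "txn.metadata", "orcl.customer", "orcl.order"; an "xid" field on the data events). It is not GoldenGate-specific and only illustrates the buffering step.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.*;

public class TxnBufferSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Buffered change events per transaction id (XID).
    private static final Map<String, List<JsonNode>> buffers = new HashMap<>();
    // Expected event count per XID, learned from the END marker.
    private static final Map<String, Integer> expected = new HashMap<>();

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("group.id", "txn-buffer-demo");            // assumption: demo group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // One transaction-metadata topic plus the per-table data topics (names are assumptions).
            consumer.subscribe(Arrays.asList("txn.metadata", "orcl.customer", "orcl.order"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> rec : records) {
                    JsonNode json = MAPPER.readTree(rec.value());
                    if ("txn.metadata".equals(rec.topic())) {
                        handleMarker(json);
                    } else {
                        // Assumption: each data event carries its transaction id under "xid".
                        String xid = json.path("xid").asText();
                        buffers.computeIfAbsent(xid, k -> new ArrayList<>()).add(json);
                        maybeFlush(xid);
                    }
                }
            }
        }
    }

    private static void handleMarker(JsonNode marker) {
        String xid = marker.path("id").asText();
        if ("END".equals(marker.path("status").asText())) {
            expected.put(xid, marker.path("event_count").asInt());
            maybeFlush(xid);
        }
    }

    private static void maybeFlush(String xid) {
        Integer want = expected.get(xid);
        List<JsonNode> have = buffers.get(xid);
        if (want != null && have != null && have.size() >= want) {
            // All rows for this XID have arrived; in a real pipeline they would be
            // applied to the sink as one atomic transaction (only printed here).
            System.out.printf("Committing transaction %s with %d events%n", xid, have.size());
            buffers.remove(xid);
            expected.remove(xid);
        }
    }
}

The same idea maps onto Flink or Kafka Streams state stores; the open question is whether the GoldenGate for Big Data handlers can emit equivalent BEGIN/END markers for such a buffer to key on.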
TIA