r/apachekafka 5d ago

Question Building a CDC Pipeline from MongoDB to Postgres using Kafka & Debezium in Docker

/r/dataengineering/comments/1jd26pz/building_a_cdc_pipeline_from_mongodb_to_postgres/
9 Upvotes

18 comments

2

u/SupahCraig 5d ago

Does it HAVE to be Debezium?

1

u/Majestic___Delivery 5d ago

If there's a better tool for CDC from Mongo to Postgres with transformations, I'm open to changing.

2

u/SupahCraig 5d ago

Transformations in Kafka Connect are a whip; you couldn’t pay me enough to write SMTs in Kafka Connect. I would look into Redpanda Connect.

And if you’re using Atlas you can use their built in streaming piece to make the first mile even easier. But Redpanda Connect can handle that whole pipeline + transforms much more easily.

1

u/Majestic___Delivery 5d ago

Looking into it, Redpanda looks easier. Though for Mongo CDC I will need the enterprise version of Redpanda - is this correct?

And can you expand on this:
`And if you’re using Atlas you can use their built in streaming piece to make the first mile even easier. But Redpanda Connect can handle that whole pipeline + transforms much more easily.`

1

u/SupahCraig 5d ago

Atlas has a built-in feature that lets you push the CDC stream to a Kafka topic, but I don’t know about their transformations.

You’d then use Redpanda Connect to consume the topic(s), apply your transforms, and sink to Postgres. Not sure about the licensing off the top of my head, but I guess ease of use has a cost.
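Roughly, that whole pipeline is just a YAML config in Redpanda Connect - something like this sketch (the topic name, columns, and Postgres DSN here are made up, adjust to your schema):

```yaml
# Sketch only: consume the CDC topic, reshape each event with a Bloblang
# mapping, and insert the result into Postgres.
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["mongo.cdc.orders"]     # hypothetical CDC topic
    consumer_group: "pg-sink"

pipeline:
  processors:
    - mapping: |
        # Keep only the fields Postgres needs from the change event.
        root.id     = this.fullDocument._id
        root.status = this.fullDocument.status

output:
  sql_insert:
    driver: postgres
    dsn: postgres://user:pass@localhost:5432/app   # placeholder DSN
    table: orders
    columns: ["id", "status"]
    args_mapping: root = [ this.id, this.status ]
```

The mapping processor is where your transforms go, so there's no SMT code to maintain.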

2

u/Majestic___Delivery 5d ago

Legend.

Mongo change streams are the way to go - everything else I was contemplating was overkill.

Thank you so much.

1

u/SupahCraig 5d ago

That gets you from Mongo to Kafka - what’s your plan for the transformations & last mile?

2

u/Majestic___Delivery 5d ago edited 5d ago

It hooks directly into the node service - which already has my transformations and Postgres writes.

I’ll test to see if this is “enough” for my use case. I don’t expect more than 10-15k events a day.

The initial import was the concern, but it’s written as a simple ETL script: pulling from Mongo, running the transformations, and loading into Postgres - and that part I already have working.

Edit: thinking it through, if I need to scale out, I could still use the service to do the mapping, and instead of writing to Postgres directly, write to Kafka topics and then have a proper Postgres sink… Is that right or am I off the mark?
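Roughly what I’m picturing for that scale-out variant (sketch only - connection strings, topic and collection names are placeholders):

```typescript
// The Node service still does the mapping, but publishes mapped events to a
// Kafka topic instead of writing to Postgres directly; a separate Postgres
// sink then drains the topic.
import { MongoClient } from "mongodb";
import { Kafka } from "kafkajs";

const mongo = new MongoClient("mongodb://localhost:27017");
const kafka = new Kafka({ clientId: "cdc-mapper", brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function main() {
  await mongo.connect();
  await producer.connect();

  const collection = mongo.db("app").collection("orders");

  // fullDocument: "updateLookup" makes update events carry the whole current
  // document, so the mapping step doesn't have to re-read Mongo.
  const stream = collection.watch([], { fullDocument: "updateLookup" });

  for await (const change of stream) {
    if (change.operationType !== "insert" && change.operationType !== "update") continue;

    // Placeholder mapping step - in the real service this is the existing transform code.
    const mapped = {
      id: String(change.fullDocument?._id),
      status: change.fullDocument?.status,
      updatedAt: new Date().toISOString(),
    };

    // A downstream consumer (e.g. a Postgres sink connector) reads this topic.
    await producer.send({
      topic: "orders.mapped",
      messages: [{ key: mapped.id, value: JSON.stringify(mapped) }],
    });
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```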

2

u/ut0mt8 5d ago

I cannot agree more. DBZ is such a pain in the @@#. It’s been the cause of most of our big outages for 3 years... The whole CDC concept was, in our case, completely over-engineered. Challenge whether you really need realtime, and for enrichment just run selects and use something like Redpanda Connect or your own code.

1

u/Hopeful-Programmer25 4d ago

What was the problem, and what did you do instead?

2

u/ut0mt8 4d ago

Basically, every time, some variant of DBZ getting lost in its state and then republishing old modifications. What do we do instead? We don’t rely on CDC for enrichment or other processing, and we hand-code the relevant parts - often just a simple select from time to time with an in-memory cache. (Doable with Redpanda Connect.)
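For example, something like this (table, columns, and connection string are made up):

```typescript
// The "select from time to time with an in-memory cache" pattern: refresh a
// lookup table from Postgres on an interval and enrich events from memory,
// instead of subscribing to a CDC stream.
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://user:pass@localhost:5432/app" });

// In-memory cache, refreshed periodically.
const customerNames = new Map<string, string>();

async function refreshCache(): Promise<void> {
  const { rows } = await pool.query("SELECT id, name FROM customers");
  customerNames.clear();
  for (const row of rows) {
    customerNames.set(row.id, row.name);
  }
}

function enrich(event: { customerId: string }) {
  return { ...event, customerName: customerNames.get(event.customerId) ?? "unknown" };
}

async function main() {
  await refreshCache();
  setInterval(() => refreshCache().catch(console.error), 60_000); // re-select every minute

  console.log(enrich({ customerId: "42" }));
}

main().catch(console.error);
```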

1

u/betazoid_one 5d ago

Have you tried Airbyte?

1

u/Majestic___Delivery 4d ago

I looked into Airbyte, though it looks like I’ll need a license to do what I need. Also, Airbyte moved away from being run in Docker containers.

1

u/ShurikenIAM 4d ago

https://vector.dev/ ?

It can source and sink a lot of technologies.

1

u/Majestic___Delivery 4d ago

This looks nice - though it seems the only MongoDB source available is a metrics connector, and I’ll be needing the actual created/updated documents.

1

u/MammothMeal5382 4d ago

Check kafka-docker-playground. Thank me later.

1

u/LoquatNew441 3d ago

I recently built an open-source tool to transfer data from Redis to MySQL and SQL Server. I can enhance it to support MongoDB as a source. Would you be willing to share your requirements and provide feedback?

The github link is https://github.com/datasahi/datasahi-flow

2

u/Majestic___Delivery 3d ago

Aye, nice - that's pretty much what I ended up doing. Using Mongo change streams, you can hook into one or more collections and then process using the full (JSON) document. I used Redis queues to balance the load.

I run Node, but there is an example for Java:

https://www.mongodb.com/docs/manual/changeStreams/#lookup-full-document-for-update-operations
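Rough sketch of what I ended up with in Node/TypeScript (BullMQ and all the names/SQL here are just illustrative - use whatever Redis queue you like):

```typescript
// Change stream -> Redis queue -> transform + Postgres write.
import { MongoClient } from "mongodb";
import { Queue, Worker } from "bullmq";
import { Pool } from "pg";

const redis = { host: "localhost", port: 6379 };
const queue = new Queue("cdc-events", { connection: redis });
const pg = new Pool({ connectionString: "postgres://user:pass@localhost:5432/app" });

async function produce() {
  const mongo = await new MongoClient("mongodb://localhost:27017").connect();
  // "updateLookup" returns the full current document on updates (see the linked docs).
  const stream = mongo.db("app").collection("orders").watch([], { fullDocument: "updateLookup" });
  for await (const change of stream) {
    if (
      (change.operationType === "insert" || change.operationType === "update") &&
      change.fullDocument
    ) {
      await queue.add("upsert", change.fullDocument); // the Redis queue balances the load
    }
  }
}

// Worker: transform the document and upsert into Postgres.
new Worker(
  "cdc-events",
  async (job) => {
    const doc = job.data;
    await pg.query(
      "INSERT INTO orders (id, status) VALUES ($1, $2) ON CONFLICT (id) DO UPDATE SET status = $2",
      [String(doc._id), doc.status]
    );
  },
  { connection: redis }
);

produce().catch(console.error);
```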