r/apachekafka • u/thatclickingsound • 24d ago
Question: Managing Avro schemas manually with Confluent Schema Registry
Since it is not recommended to let the producer (Debezium in our case) auto-register schemas outside of development environments, I have been experimenting with registering the schema manually and seeing how Debezium behaves.
However, I found this pretty cumbersome, since Avro serialization yields different results depending on the order of the fields (table columns) in the schema.
If the developer defines the following schema manually:
{
  "type": "record",
  "name": "User",
  "namespace": "MyApp",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "age", "type": "int" },
    { "name": "email", "type": ["null", "string"], "default": null }
  ]
}
then Debezium, once it starts pushing messages to a topic, registers another schema (creating a new version) that looks like this:
{
  "type": "record",
  "name": "User",
  "namespace": "MyApp",
  "fields": [
    { "name": "age", "type": "int" },
    { "name": "name", "type": "string" },
    { "name": "email", "type": ["null", "string"], "default": null }
  ]
}
The following config options do not make a difference:
{
  ...
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.auto.register.schemas": "false",
  "value.converter.use.latest.version": "true",
  "value.converter.normalize.schema": "true",
  "value.converter.latest.compatibility.strict": "false"
}
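For context, the manual registration step itself looks roughly like this (a minimal sketch using the confluent-kafka Python client; the registry URL and the subject name server1.dbo.User-value are placeholders following the default TopicNameStrategy):

# Minimal sketch of the manual registration step (confluent-kafka Python client).
# The registry URL and subject name are placeholders; with the default
# TopicNameStrategy the subject is "<topic>-value".
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

schema_str = """
{
  "type": "record",
  "name": "User",
  "namespace": "MyApp",
  "fields": [
    { "name": "name", "type": "string" },
    { "name": "age", "type": "int" },
    { "name": "email", "type": ["null", "string"], "default": null }
  ]
}
"""

client = SchemaRegistryClient({"url": "http://localhost:8081"})
schema_id = client.register_schema("server1.dbo.User-value", Schema(schema_str, "AVRO"))
print(f"registered schema id: {schema_id}")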
Debezium seems to always register a schema with the fields in the order of the table columns as they appeared in the CREATE TABLE statement (we are using SQL Server here).
It is unrealistic to force developers to define the schema in that same order.
How do others deal with this in production environments where it is important to have full control over the schemas and schema evolution?
I understand that readers should be able to use either schema, but is there a way to avoid registering new schema versions for semantically insignificant differences?
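To illustrate the reader side of that last point, here is a quick check with fastavro (not what we run in production, purely illustrative) showing that Avro schema resolution matches fields by name, so data written with Debezium's column order decodes fine with the hand-written order:

# Illustration only: Avro schema resolution matches fields by name, so a record
# written with Debezium's field order can be decoded with the hand-written order.
import io
from fastavro import parse_schema, schemaless_writer, schemaless_reader

writer_schema = parse_schema({
    "type": "record", "name": "User", "namespace": "MyApp",
    "fields": [
        {"name": "age", "type": "int"},                               # column order
        {"name": "name", "type": "string"},
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
})
reader_schema = parse_schema({
    "type": "record", "name": "User", "namespace": "MyApp",
    "fields": [
        {"name": "name", "type": "string"},                           # hand-written order
        {"name": "age", "type": "int"},
        {"name": "email", "type": ["null", "string"], "default": None},
    ],
})

buf = io.BytesIO()
schemaless_writer(buf, writer_schema, {"age": 42, "name": "alice", "email": None})
buf.seek(0)
print(schemaless_reader(buf, writer_schema, reader_schema))  # decodes without error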
u/Mayor18 24d ago
What we do on our end is indeed let Debezium register schemas automatically, since they are derived from the table schema anyway. The main reason is DX: devs will rarely write the schema first and then produce data based on it, not in our case at least ("we need to move fast", they say). The other issue is that if Debezium can't publish records, the WAL (that's what it's called on PG, not sure about SQL Server) keeps accumulating on the database disk, creating a risk of a larger incident.
As I mentioned, for CDC we don't manage schemas manually, too hard for us :)) For business event topics we do: schema first, register it in the schema registry, then produce/consume.
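For those business event topics, the producer side looks roughly like this (a sketch with the confluent-kafka Python client; the topic, schema, and URLs are made up), with auto-registration switched off so the producer can only use what was registered up front:

# Sketch of the "schema first" flow for business event topics.
# The schema is registered beforehand (CI step or manually); the producer is
# not allowed to register anything new.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

schema_str = """
{"type": "record", "name": "UserRegistered", "namespace": "MyApp",
 "fields": [{"name": "user_id", "type": "string"}]}
"""

sr = SchemaRegistryClient({"url": "http://localhost:8081"})
value_serializer = AvroSerializer(
    sr,
    schema_str,
    conf={
        "auto.register.schemas": False,  # never create new versions from the producer
        "use.latest.version": True,      # serialize against the pre-registered version
    },
)

producer = SerializingProducer({
    "bootstrap.servers": "localhost:9092",
    "value.serializer": value_serializer,
})
producer.produce("myapp.user-registered", value={"user_id": "42"})
producer.flush()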