I have a Kafka cluster. At the moment, just for testing, there is only one topic, and a single consumer takes messages from that topic, processes them, and stores them in a database. But if something goes wrong while storing to the database and an exception is thrown, for example a PersistenceException, then the message flow is interrupted. How can I handle this?
How do we re-process data when something goes wrong?
Can we re-process messages in topics?
Did anyone face this scenario?
Answer
Messages are durable in Kafka for at least 7 days by default (the broker's log.retention.hours defaults to 168), so they are not lost when your consumer fails.
Within that period, Kafka consumer group offsets can be reset or rewound using the kafka-consumer-groups CLI, for example:
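The following rewinds a group to the earliest retained offset. The broker address, group, and topic names below are placeholders, the group must have no active consumers while the reset runs, and you can drop --execute to do a dry run first:

```
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group db-writer \
  --topic my-topic \
  --reset-offsets --to-earliest \
  --execute
```

Other targets such as --to-datetime or --shift-by let you rewind to a specific point instead of the very beginning.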
Otherwise, restart your consumer code: any message offsets that were not committed will be reprocessed, because the consumer resumes from its last committed offset.
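To make that work, commit offsets only after the database write succeeds. A minimal sketch using the plain Java client with auto-commit disabled (the broker address, group id, topic name, and the saveToDatabase method are placeholders standing in for your persistence code): a PersistenceException leaves the offsets uncommitted, so the same records are redelivered when the consumer restarts.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DbConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "db-writer");               // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Disable auto-commit so an offset is only committed after a successful DB write.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Placeholder for your JPA / JDBC write; may throw PersistenceException.
                    saveToDatabase(record.value());
                }
                // Only reached if every record in the batch was stored successfully.
                consumer.commitSync();
            }
        }
        // If saveToDatabase throws, the exception propagates and nothing is committed,
        // so the uncommitted records are redelivered the next time the consumer starts.
    }

    private static void saveToDatabase(String value) {
        // placeholder persistence logic
    }
}
```

Committing per batch keeps the example short; you could also call commitSync after each record if you want at most one message redelivered after a failure.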