I wanted to see if I could connect Spring Cloud Stream with Kafka running in Docker containers via docker-compose, but I'm stuck and haven't found a solution yet; please help me. I'm working from Spring Microservices in Action and haven't found any help so far. Docker-compose with Kafka and Zookeeper: Docker-compose with my Spring services: App.properties for my
Tag: apache-kafka
How to ignore a method call inside a method that is being tested?
I am trying to make a test pass with MockMvc, and it is failing with the following error message: Caused by: org.apache.kafka.common.config.ConfigException. We have Kafka as a dependency in our service layer, and it is being called inside the method we are testing. Is there a way to ignore that specific call during tests? In the example below, we want
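A minimal sketch of one way to ignore such a call during the MockMvc test, assuming the service publishes through a Spring KafkaTemplate; the controller path, endpoint, and class names below are made up for illustration:

```java
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class OrderControllerTest {

    @Autowired
    private MockMvc mockMvc;

    // Replaces the real KafkaTemplate bean with a Mockito mock, so the send()
    // call inside the service becomes a no-op and no broker config is needed.
    @MockBean
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    void createOrderDoesNotTouchKafka() throws Exception {
        mockMvc.perform(post("/orders").contentType("application/json").content("{}"))
               .andExpect(status().isOk());
    }
}
```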
Kafka producer does not throw an exception when the broker is down
Created a cluster with two brokers using the same ZooKeeper and am trying to produce messages to a topic whose details are below. With acks="all" (or -1) and min.insync.replicas="2", the producer is supposed to receive acknowledgement from the brokers (leader and replicas), but shutting one broker down manually while producing makes no difference to the Kafka producer
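For reference, a sketch that tends to surface such failures: min.insync.replicas is a topic/broker setting rather than a producer property, and the producer only reports the problem through the returned Future or the send callback once delivery.timeout.ms expires. Broker addresses, topic name, and timeout values below are placeholders:

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AckAwareProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Fail fast instead of retrying silently for up to the 2-minute default delivery timeout.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 10_000);
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5_000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the Future (or inspecting the callback's exception) is what
            // actually surfaces NotEnoughReplicasException / TimeoutException.
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"))
                    .get(15, TimeUnit.SECONDS);
        }
    }
}
```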
Serde for String[] in Kafka
I'm new to Kafka. I am using the store builder and want a String[] of two elements as the value associated with the key. I set up the store like this: When I call the method to load the data into the store, I receive this error: Should I set up the store like this instead: but I don't
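For reference, a sketch of one way to provide a value serde for a two-element String[], since Kafka ships no built-in serde for arrays; the delimiter choice is an assumption, and a JSON serde would be safer for arbitrary content:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

// Joins the elements with a delimiter on the way in and splits them on the way out.
// The NUL delimiter is an assumption; pick one that cannot appear in your data.
public final class StringArraySerde {

    private static final String DELIMITER = "\u0000";

    public static Serde<String[]> serde() {
        Serializer<String[]> serializer = (topic, data) ->
                data == null ? null : String.join(DELIMITER, data).getBytes(StandardCharsets.UTF_8);

        Deserializer<String[]> deserializer = (topic, bytes) ->
                bytes == null ? null : new String(bytes, StandardCharsets.UTF_8).split(DELIMITER, -1);

        return Serdes.serdeFrom(serializer, deserializer);
    }

    private StringArraySerde() { }
}
```

It could then be plugged into the store builder, e.g. Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("my-store"), Serdes.String(), StringArraySerde.serde()), where "my-store" is a placeholder name.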
Kafka consumer missing messages while consuming in a loop
I am running my consumer code in a loop due to memory constraints, committing my data and then loading it into tables. Following is the code that runs in the loop. But for some reason I seem to be missing a few messages. I believe this has something to do with consumer rebalancing/committing. How can I check if my consumer is ready
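For comparison, a sketch of a poll loop that only commits after the batch has been persisted, with auto-commit disabled and the batch size bounded; the broker address, group id, topic, and table-loading helper are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BatchLoadingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "table-loader");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit only after loading
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);       // bound memory per batch

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("source-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    loadIntoTable(record); // hypothetical: write the record to the database
                }
                // Committing after the batch is persisted means a crash replays the
                // batch (at-least-once) instead of silently skipping records.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }

    private static void loadIntoTable(ConsumerRecord<String, String> record) {
        // placeholder for the real table-loading logic
    }
}
```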
Kafka Consumer stuck at deserialization
I created a simple producer-consumer app that uses a custom Serializer and Deserializer. After adding a new method to the Message class that I produce, the consumer got stuck at deserialization. My producer is using the new class (with the new method) and the consumer is using the old class (without the method). I didn't add any new data
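One possible angle, assuming the custom serializer encodes the whole object structurally (as Java serialization does, where an added method can change the serialVersionUID): a deserializer that only reads field values is indifferent to added methods. The Jackson-based sketch below is an assumption, not the app's actual code, and Message is the class from the question:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

// Only field values travel on the wire, so adding a method (or even an unknown
// field) to Message does not break consumers still running the old class.
public class MessageDeserializer implements Deserializer<Message> {

    private final ObjectMapper mapper = new ObjectMapper()
            // tolerate producers that send fields this consumer's Message class lacks
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    @Override
    public Message deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        try {
            return mapper.readValue(data, Message.class);
        } catch (Exception e) {
            // Surface the error instead of letting the consumer loop spin silently.
            throw new SerializationException("Failed to deserialize Message from topic " + topic, e);
        }
    }
}
```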
2 consecutive stream-stream inner joins produce wrong results: what does KStream join between streams really do internally?
The problem setting: I have a stream of nodes and a stream of edges that represent consecutive updates of a graph, and I want to build patterns composed of nodes and edges using multiple joins in series. Let's suppose I want to match a pattern like (node1) -[edge1]-> (node2). My idea is to join the stream of nodes with the
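For orientation, a sketch of what a single KStream-KStream inner join does: it is windowed, and every pair of records with the same key whose timestamps fall inside the window produces one output record, which is why repeated updates multiply results in chained joins. Topic names, value types, and the key choice below are assumptions:

```java
import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

public class NodeEdgeJoinTopology {

    public static void build(StreamsBuilder builder) {
        KStream<String, String> nodes =
                builder.stream("nodes", Consumed.with(Serdes.String(), Serdes.String()));
        KStream<String, String> edges =
                builder.stream("edges", Consumed.with(Serdes.String(), Serdes.String()));

        // Inner KStream-KStream join: for every nodes record and every edges record
        // with the same key whose timestamps are within the window, one joined
        // record is emitted -- so re-sent updates multiply the output.
        KStream<String, String> nodeWithEdge = nodes.join(
                edges,
                (node, edge) -> node + " -[" + edge + "]->",
                JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        nodeWithEdge.to("node-edge-matches");
    }
}
```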
Is it possible to edit values after a Kafka consumer has been created? (from org.apache.kafka.clients.consumer)
I'd like to know whether it's possible to later edit the values I supplied while constructing a Kafka consumer, namely: is it possible to edit the values (specifically the groupId) of my consumer cons after it has been created? I would like to test for changes in the groupId. Ex: I have looked at the docs, but there is no answer there. I
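For reference, a sketch built on the fact that KafkaConsumer copies its configuration at construction time and exposes no setter for group.id, so changing the group means closing the old instance and creating a new one; the broker address and topic are placeholders:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RegroupableConsumer {

    // Builds a fresh consumer for the given group id; the old instance is closed
    // first, because its configuration cannot be changed after construction.
    public static KafkaConsumer<String, String> withGroupId(KafkaConsumer<String, String> old,
                                                            String newGroupId) {
        if (old != null) {
            old.close();
        }
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, newGroupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("test-topic"));
        return consumer;
    }
}
```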
Avro GenericRecord deserialization not working via Spring Kafka
I'm trying to simplify my consumer as much as possible. The problem is, when looking at the records coming into my Kafka listener (List<GenericRecord> incomingRecords), the values are just string values. I've tried setting the specific reader to both true and false. I've set the value deserializer as well. Am I missing something? This worked fine when I used a Java configuration
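For comparison, a sketch of a Java configuration that wires the Confluent KafkaAvroDeserializer for GenericRecord values via Spring Kafka; the broker address, group id, and schema registry URL are placeholders, and the key point is that the value deserializer and specific.avro.reader=false must actually reach the consumer factory:

```java
import java.util.HashMap;
import java.util.Map;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroDeserializerConfig;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class AvroConsumerConfig {

    @Bean
    public ConsumerFactory<String, GenericRecord> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // If this stays a StringDeserializer, the listener sees raw strings instead of records.
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
        // false => GenericRecord; true would require generated SpecificRecord classes.
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, false);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, GenericRecord> kafkaListenerContainerFactory(
            ConsumerFactory<String, GenericRecord> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }
}
```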
To close or not to close RocksDB Cache and WriteBufferManager in a Kafka Streams app
I am currently playing around with a custom RocksDB configuration in my Streams app by implementing the RocksDBConfigSetter interface. I see conflicting documentation around closing the Cache and WriteBufferManager instances. Right now, the javadoc and one of the documentation pages suggest that we need to close all instances that extend RocksObject (both Cache and WriteBufferManager instances extend this
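For reference, a sketch modelled on the Kafka Streams bounded-memory example, where the Cache and WriteBufferManager are shared static instances and therefore are not closed per store in close(); the sizes are placeholders:

```java
import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    private static final long TOTAL_OFF_HEAP_BYTES = 128 * 1024 * 1024L;
    private static final long TOTAL_MEMTABLE_BYTES = 64 * 1024 * 1024L;

    // Shared across all store instances, so they must NOT be closed per store.
    private static final Cache CACHE = new LRUCache(TOTAL_OFF_HEAP_BYTES);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_BYTES, CACHE);

    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(CACHE);
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(String storeName, Options options) {
        // Nothing to do for the shared static objects above; only RocksObjects
        // created per store inside setConfig() (e.g. a per-store Cache) belong here.
    }
}
```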