
How to target all nodes of an ActiveMQ Artemis cluster with Spring’s DefaultMessageListenerContainer

I’ve got an issue connecting to an ActiveMQ Artemis cluster (AMQ from Red Hat in fact) through Spring’s DefaultJmsListenerContainerFactory.

DefaultMessageListenerContainer makes use of only one connection, regardless of the number of consumers you specify through the concurrency parameter. The problem is that the cluster currently has 3 brokers configured (and, as a dev, I shouldn’t have to care about the topology of the cluster). Since there is only one connection, the consumers are all listening to a single broker.

To work around the issue I disabled the cache (i.e. setCacheLevel(CACHE_NONE) in the factory). It “solved” the problem, because now I can see the connections being distributed across all the nodes of the cluster, but it’s not a good solution: connections are perpetually dropped and recreated, which creates a lot of overhead on the broker side (it makes me think of a Christmas tree :D).

Can you tell me what the correct approach is to handle this? I tried using a JmsPoolConnectionFactory, but I haven’t gotten any good results so far: I still have only one connection.
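
For reference, by “using a JmsPoolConnectionFactory” I mean something along the lines of the sketch below (JmsPoolConnectionFactory comes from the pooled-jms library, org.messaginghub:pooled-jms; the sizing values here are just placeholders):

@Bean
ConnectionFactory pooledConnectionFactory() {
    // org.messaginghub.pooled.jms.JmsPoolConnectionFactory wrapping the Artemis factory;
    // the sizing below is a placeholder, not a recommendation
    JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
    pool.setConnectionFactory(new ActiveMQJMSConnectionFactory(
            config.getUrl(), config.getUser(), config.getPassword()));
    pool.setMaxConnections(3); // e.g. one connection per broker
    return pool;
}

If I understand the Spring docs correctly, with an external pool like this the container’s cache level should stay at CACHE_NONE and connection reuse should be left to the pool, but even then I only ever see one connection to the cluster.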

I’m using Spring Boot 2.7.4 with Artemis Starter. You can find below a code snippet of the actual config.

(Side note: I don’t use Spring autoconfig because I need to be able to switch between ActiveMQ Artemis and the old ActiveMQ “Classic” implementation.)

@Bean
DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setDestinationResolver(destinationResolver());
    factory.setSessionTransacted(true);
    factory.setConcurrency(config.getConcurrency());
    // Set this to allow load balancing of connections to all members of the cluster
    factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);

    final ExponentialBackOff backOff = new ExponentialBackOff(
            config.getRetry().getInitialInterval(), config.getRetry().getMultiplier());
    backOff.setMaxInterval(config.getRetry().getMaxDuration());

    factory.setBackOff(backOff);

    return factory;
}

ConnectionFactory connectionFactory() {
    return new ActiveMQJMSConnectionFactory(
            config.getUrl(), config.getUser(), config.getPassword());
}

DestinationResolver destinationResolver() {
    final ActiveMQQueue activeMQQueue = new ActiveMQQueue(config.getQueue());
    return (session, destinationName, pubSubDomain) -> activeMQQueue;
}


@JmsListener(destination = "${slp.amq.queue}")
public void processLog(String log) {
    final SecurityLog securityLog = SecurityLog.parse(log);

    fileWriter.write(securityLog);
    logsCountByApplicationId.increment(securityLog.getApplicationId());

    if (elasticClient != null) {
        elasticClient.write(securityLog);
    }
}

The connection URL is:

(tcp://broker1:port,tcp://broker2:port,tcp://broker3:port)?useTopologyForLoadBalancing=true


Answer

The cluster can be configured so that any consumer on any node can consume messages sent to any node. Therefore, you shouldn’t strictly need to “target all nodes” of the cluster with your consumer. Message redistribution and re-routing in the cluster should be transparent to your application. As you said, as a developer you shouldn’t care about the topology of the cluster.
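
For illustration, redistribution is something that gets enabled on the broker side via address-settings; a broker.xml fragment along these lines (the match pattern and delay value are just examples), assuming message-load-balancing is set to ON_DEMAND on the cluster-connection:

<address-settings>
   <address-setting match="#">
      <!-- 0 redistributes immediately when a queue has messages but no consumers;
           the default of -1 disables redistribution -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>

With that in place, messages that land on a node without consumers are moved to a node that has them, so a listener connected to a single broker still receives them.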

That said, the goal of clustering is to increase overall message throughput (i.e. performance) via horizontal scaling. Furthermore, every node in the cluster should ideally have sufficient producers and consumers so that messages aren’t being redistributed or re-routed between cluster nodes as that’s not optimal for performance. If you’re in a situation where you have just a few consumers connected to your cluster then it’s likely you don’t actually need a cluster in the first place. A single ActiveMQ Artemis broker can handle millions of messages per second in certain use-cases.
