
Tag: ignite

Ignite Communication SPI support for IPv6

I have read in multiple places that Ignite can have issues with IPv6, and we have been seeing similar communication problems in our Kubernetes setup, where the communication SPI seems to fail at random. The Ignite docs state that “Ignite tries to support IPv4 and IPv6 but this can sometimes lead to issues where the cluster becomes detached.” While …
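A common mitigation, in line with the docs' suggestion to prefer a single stack, is to force IPv4 on every node with -Djava.net.preferIPv4Stack=true and to pin each node to its pod address. A minimal sketch, assuming a POD_IP environment variable injected through the Kubernetes Downward API (the variable name is an assumption):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class IgniteIpv4Start {
    public static void main(String[] args) {
        // Start the JVM with -Djava.net.preferIPv4Stack=true so the
        // discovery and communication SPIs bind IPv4 sockets only.
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Assumption: POD_IP is injected via the Kubernetes Downward API;
        // pinning localHost makes the node advertise its pod IPv4 address.
        cfg.setLocalHost(System.getenv("POD_IP"));

        Ignite ignite = Ignition.start(cfg);
    }
}
```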

Apache Ignite 2.11.0 :: Missing ignite-spring-tx-ext in Maven repository

We are trying to upgrade from 2.9.1 to 2.11.0. We are already using org.apache.ignite.transactions.spring.SpringTransactionManager, which is no longer available in the core library. The docs at https://ignite.apache.org/docs/latest/extensions-and-integrations/spring/spring-tx#maven-configuration suggest using ignite-spring-tx-ext, but when we add that to our pom.xml, the artifact cannot be found in the Maven repository. Can someone please help us solve this? Answer: Starting from …
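For context on that truncated answer: starting with Ignite 2.11 the Spring integrations moved into separately released extension modules, which follow their own version line rather than Ignite's 2.11.0. A hedged pom.xml sketch (1.0.0 is an assumption; verify the current extension version on Maven Central):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring-tx-ext</artifactId>
    <!-- Extensions are versioned independently of Ignite itself;
         1.0.0 is an assumption, check Maven Central for the latest. -->
    <version>1.0.0</version>
</dependency>
```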

Adjust classpath / change Spring version in Azure Databricks

I’m trying to use the Apache Spark/Ignite integration in Azure Databricks. I install the org.apache.ignite:ignite-spark-2.4:2.9.0 Maven library using the Databricks UI, and I get an error while accessing my Ignite caches: the AbstractApplicationContext is compiled against a ReflectionUtils from a different Spring version. I can see that spring-core-4.3.26.RELEASE.jar is installed under /dbfs/FileStore/jars/maven/org/springframework during the org.apache.ignite:ignite-spark-2.4:2.9.0 installation, and there are no other Spring …
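When two Spring versions collide like this, a quick way to confirm it is to ask the JVM which jar each class was actually loaded from. A small diagnostic sketch using the two classes named in the error:

```java
import org.springframework.context.support.AbstractApplicationContext;
import org.springframework.util.ReflectionUtils;

public class SpringClasspathCheck {
    public static void main(String[] args) {
        // Prints the jar each class came from; different spring-core
        // versions in the two paths confirm the classpath conflict.
        System.out.println(AbstractApplicationContext.class
            .getProtectionDomain().getCodeSource().getLocation());
        System.out.println(ReflectionUtils.class
            .getProtectionDomain().getCodeSource().getLocation());
    }
}
```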

How do I make sure my Apache Ignite 2.x distributed cache puts are asynchronous

Below I have a distributed cache example using Apache Ignite. I want my cache put operation, cache.put(i, new X12File("x12file" + i, LocalDateTime.now().toString())), to be completely asynchronous. The put itself should return very quickly, and propagation to the rest of the cluster should happen in the background without inconveniencing the caller.
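Ignite 2.x offers two levers here: IgniteCache.putAsync() returns an IgniteFuture instead of blocking, and the cache's CacheWriteSynchronizationMode decides how long a write waits for backups. A minimal sketch (the X12File stand-in class mirrors the question and is otherwise hypothetical):

```java
import java.time.LocalDateTime;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteFuture;

public class AsyncPutExample {
    /** Minimal stand-in for the question's X12File value class. */
    static class X12File implements java.io.Serializable {
        final String name;
        final String ts;
        X12File(String name, String ts) { this.name = name; this.ts = ts; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, X12File> cacheCfg =
                new CacheConfiguration<>("x12");

            // PRIMARY_SYNC returns once the primary copy is written and
            // updates backups in the background; FULL_ASYNC would not
            // even wait for the primary.
            cacheCfg.setWriteSynchronizationMode(
                CacheWriteSynchronizationMode.PRIMARY_SYNC);

            IgniteCache<Integer, X12File> cache = ignite.getOrCreateCache(cacheCfg);

            int i = 1;
            // putAsync() hands back a future immediately instead of blocking.
            IgniteFuture<Void> fut = cache.putAsync(
                i, new X12File("x12file" + i, LocalDateTime.now().toString()));

            fut.listen(f -> System.out.println("put completed"));
        }
    }
}
```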

Apache Ignite: Caches unusable after reconnecting to Ignite servers

I am using Apache Ignite as a distributed cache and I am running into some fundamental robustness issues. If our Ignite servers reboot for any reason, all of our Ignite clients appear to break, even after the servers come back online. This is the error the clients see when interacting with caches after the servers reboot and …
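A thick client does attempt to reconnect on its own, but cache operations issued in the meantime throw a CacheException whose cause is IgniteClientDisconnectedException. The pattern described in the Ignite docs is to wait on the reconnect future and retry; a sketch:

```java
import javax.cache.CacheException;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;

public class ReconnectRetry {
    /** Retries one put across a client disconnect/reconnect cycle. */
    static void safePut(IgniteCache<Integer, String> cache, int key, String val) {
        while (true) {
            try {
                cache.put(key, val);
                return;
            } catch (CacheException e) {
                if (e.getCause() instanceof IgniteClientDisconnectedException) {
                    // Block until the client rejoins the cluster, then
                    // retry the operation on the same cache proxy.
                    ((IgniteClientDisconnectedException) e.getCause())
                        .reconnectFuture().get();
                } else {
                    throw e;
                }
            }
        }
    }
}
```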

How to fix Ignite performance issues?

We use Ignite 2.7.6 in both server and client mode: two servers and six clients. Initially, each application node with an embedded Ignite client had a 2G heap, and each Ignite server node had 24G off-heap and a 2G heap. With the last application update we introduced new functionality that requires about 2,000 caches of 20 entries each (user groups). A cache entry has a small size …
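With roughly 2,000 tiny caches, per-cache bookkeeping (partition maps, affinity, metrics) tends to dominate the actual data. Ignite's usual remedy is to place related caches into one cache group so they share partitions and internal structures; a sketch with hypothetical cache and group names:

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class UserGroupCaches {
    static CacheConfiguration<Integer, String> userGroupCache(String name) {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>(name);
        // All per-user-group caches join one cache group, sharing
        // partition and memory bookkeeping instead of each cache
        // paying the full per-cache overhead.
        cfg.setGroupName("userGroups"); // hypothetical group name
        return cfg;
    }
}
```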

Ignite heap not released: is this a memory leak?

I ran a 20-minute load test during which the free JVM heap dropped from 97% to 3%. Even after waiting 5 hours, the amount of free heap does not change. If I run the test again, the GC has too much work and causes long JVM pauses. I use Ignite 2.7; I do not …
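One common culprit when the heap never frees in Ignite 2.x is an on-heap cache enabled without an eviction policy, so entries pile up on the Java heap alongside the off-heap copies. A sketch of the configuration to check (the LRU cap is an illustrative assumption):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapBounds {
    static CacheConfiguration<Integer, byte[]> bounded(String name) {
        CacheConfiguration<Integer, byte[]> cfg = new CacheConfiguration<>(name);
        // With on-heap caching enabled, an eviction policy must bound
        // the on-heap copies, or they accumulate until the heap is full.
        cfg.setOnheapCacheEnabled(true);
        cfg.setEvictionPolicyFactory(
            new LruEvictionPolicyFactory<>(100_000)); // illustrative cap
        return cfg;
    }
}
```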
