
How to manage memory using Spring Boot Maven Plugin with Kubernetes

I’m using Spring Boot 2.5.6 and generating the Docker image with the Spring Boot Maven Plugin. I’m deploying the application on AWS EKS with nodes managed by Fargate.

The plugin configuration is the following:


The command I use to execute it is the following:

./mvnw spring-boot:build-image

When the application is deployed on AWS EKS, it prints the following data:

Setting Active Processor Count to 2
Calculated JVM Memory Configuration: 
    (Total Memory: 1G, Thread Count: 250, Loaded Class Count: 20215, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
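The numbers in this log are enough to roughly reproduce the -Xmx the buildpack’s memory calculator picks. A sketch of the arithmetic, assuming the Paketo calculator’s usual defaults (240M reserved code cache, 10M max direct memory, 1M stack per thread, and a metaspace estimate derived from the loaded class count) – treat the constants as assumptions, not exact values:

```python
MIB = 1024 * 1024

# Values taken from the log output above
total_memory = 1024 * MIB        # "Total Memory: 1G" (the container limit the calculator sees)
thread_count = 250               # "Thread Count: 250"
loaded_class_count = 20215       # "Loaded Class Count: 20215"
headroom = 0                     # "Headroom: 0%"

# Assumed calculator defaults (Paketo buildpack memory calculator)
reserved_code_cache = 240 * MIB  # -XX:ReservedCodeCacheSize
max_direct_memory = 10 * MIB     # -XX:MaxDirectMemorySize
stack_per_thread = 1 * MIB       # -Xss
# Rough metaspace estimate from the class count
metaspace = loaded_class_count * 5800 + 14_000_000

# Heap is whatever memory is left after all the non-heap regions
heap = (total_memory - headroom - reserved_code_cache - max_direct_memory
        - thread_count * stack_per_thread - metaspace)
print(f"-Xmx ≈ {heap / MIB:.0f}M")   # prints: -Xmx ≈ 399M
```

With a 1G container limit, the non-heap regions eat roughly 600M, leaving a heap of about 400M – which matches the question. Raising the container’s memory limit is what raises -Xmx.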

If I go inside the container and run the command "free -h", I get the following output:

total mem  : 7.7G
used mem   : 730M
free mem   : 4.6G
shared     : 820K
buff/cache : 2.4G

Why is -Xmx set to only 400MB? And why is the total memory only 1GB?



Posting this out of comments for better visibility.

An important thing to mention: when the free command is run inside a pod’s container, it shows the memory of the whole node where the pod is scheduled and running, not the container’s own limit.
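To see the limit the JVM actually sizes itself against, read the container’s cgroup memory limit instead of using free. A minimal sketch that handles both cgroup layouts (paths depend on how the node is configured):

```shell
# Inside the container: the cgroup limit is what the buildpack's
# memory calculator sees, not what `free` reports.
if [ -f /sys/fs/cgroup/memory.max ]; then
    # cgroup v2: "max" means unlimited, otherwise a byte count
    cat /sys/fs/cgroup/memory.max
elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    # cgroup v1: a very large number effectively means unlimited
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
else
    echo "no cgroup memory limit found"
fi
```

If the Kubernetes limit is 1Gi, this prints 1073741824 – the 1G the memory calculator reports in the startup log.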

This makes it very important to set memory requests and limits for Java applications, since otherwise the JVM can size its memory allocation incorrectly based on what it sees.

There are two main options for resource allocation (in this case, memory):

  • requests (spec.containers[].resources.requests.memory) – the Kubernetes scheduler has to find a node with at least the requested amount of memory available, never less than specified.

    It’s very important to set requests reasonably, since they are used for scheduling: if they are too high, the Kubernetes scheduler may not be able to find a node with enough free memory to schedule the pod – a good example of incorrect requests.

  • limits (spec.containers[].resources.limits.memory) – the kubelet ensures that the pod will not consume more than specified in limits, since containers in a pod are allowed to consume more than they requested.

    It’s also important to set limits for predictable resource consumption, since containers can exceed their requested memory and consume all of the node’s memory until the OOM killer gets involved – possible cases when limits are not set.
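Both settings go in the pod spec. A minimal sketch, with illustrative names and a 1Gi limit matching the question (the buildpack’s memory calculator will size the JVM from the limit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app              # illustrative name
spec:
  containers:
    - name: app
      image: demo/app:latest  # illustrative image
      resources:
        requests:
          memory: "1Gi"   # scheduler places the pod on a node with at least 1Gi free
        limits:
          memory: "1Gi"   # ceiling enforced by the kubelet; this is the "Total Memory" the JVM sees
```

Setting requests equal to limits for memory gives the pod the Guaranteed QoS class, which makes it less likely to be evicted under node memory pressure.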
