I’m using Spring Boot 2.5.6 and I’m generating the Docker image with the Spring Boot Maven plugin. I’m deploying the application to AWS EKS, with the nodes managed by Fargate.
The plugin configuration is the following
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <excludes>
            <exclude>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
            </exclude>
        </excludes>
    </configuration>
</plugin>
The command I use to execute it is the following
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-image-name
When the application is deployed on AWS EKS, it prints the following at startup
Setting Active Processor Count to 2
Adding $JAVA_OPTS to $JAVA_TOOL_OPTIONS
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx408405K -XX:MaxMetaspaceSize=128170K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 1G, Thread Count: 250, Loaded Class Count: 20215, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS: -Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties -XX:+ExitOnOutOfMemoryError -XX:ActiveProcessorCount=2 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/var/log/containers/heapDump.hprof" -XX:MaxDirectMemorySize=10M -Xmx408405K -XX:MaxMetaspaceSize=128170K -XX:ReservedCodeCacheSize=240M -Xss1M -XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics -Dorg.springframework.cloud.bindings.boot.enable=true
If I go inside the container and run “free -h”, I get the following output

total mem:    7.7G
used mem:     730M
free mem:     4.6G
shared:       820K
buff/cache:   2.4G
available:
Why is -Xmx set to only ~400MB? And why is the total memory reported as only 1GB?
Posting this out of comments for better visibility.
An important thing to mention: when the free command is run inside a pod’s container, it shows all the available memory of the node where the pod is scheduled and running, not the memory available to the container itself.
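To see the memory limit the container (and therefore the JVM memory calculator) actually observes, you can read the cgroup files instead of free. A minimal sketch; the exact path depends on whether the node uses cgroup v1 or v2:

```shell
# free reflects the node, not the pod. The cgroup limit is what
# container-aware tooling (including the JVM) actually reads.

# cgroup v1 nodes:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes 2>/dev/null

# cgroup v2 nodes ("max" means no limit is set):
cat /sys/fs/cgroup/memory.max 2>/dev/null
```

With a 1Gi memory limit you would see 1073741824 (v1/v2) rather than the node’s 7.7G.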
Because of this, it’s very important to set memory limits for Java applications: if the JVM sizes its memory regions based on what it detects by itself, the allocation can easily be set incorrectly.
There are two main options for resource allocation:

requests (in this particular case spec.containers.resources.requests.memory) – the kubernetes scheduler has to find a node that has at least the requested amount of memory available, not less than specified.

It’s very important to set requests reasonably, since they are used for scheduling: if they are set too high, there’s a chance the scheduler won’t be able to find a node with enough free memory to schedule the pod – a typical outcome of incorrect requests.

limits (spec.containers.resources.limits.memory) – the kubelet ensures that the pod will not consume more than specified in limits, since containers in a pod are allowed to consume more than they requested.

It’s also important to have limits set up for predictable resource consumption, since containers without limits can exceed the requested memory and consume all of the node’s memory until the OOM killer is involved – one of the possible failure modes when limits are not set.
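The requests and limits above would look like this in a deployment manifest. This is a sketch, not the asker’s actual manifest: the name my-app is a placeholder, and the 1Gi values are assumed to match the “Total Memory: 1G” in the startup log. That 1Gi limit also explains the heap size: the buildpack memory calculator subtracts ReservedCodeCacheSize (240M), MaxMetaspaceSize (~125M), thread stacks (250 × 1M) and MaxDirectMemorySize (10M) from 1G, leaving roughly 400M for the heap – which matches -Xmx408405K.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image-name      # image built by spring-boot:build-image
        resources:
          requests:
            memory: "1Gi"         # used by the scheduler to place the pod
          limits:
            memory: "1Gi"         # enforced by the kubelet via cgroups; this
                                  # is what the memory calculator sees as
                                  # "Total Memory"
```

Note that on Fargate the pod’s size (and billing) is also derived from these requests/limits, so omitting them affects both scheduling and cost.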