Please bear with me if this sounds silly; I am very new to Kubernetes and don't know much about writing its YAML files.
I have deployed a Spring Boot application as a pod to a Kubernetes cluster via its Helm chart. When I access any URL of the application from the pod's terminal using curl, I get a success response. But when I access it from my laptop or browser, I get a 503 Service Unavailable error. I have no idea what is happening. My senior says it's an ingress issue, but the helm install ran perfectly and the pod came up as well.
Below is my ingress.yaml file.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "my-service.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
  {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
  {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
  {{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  namespace: {{ .Release.Namespace | quote }}
  labels:
    {{- include "my-service.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
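For context, a values.yaml fragment that this template consumes might look like the following. The host, path, and secret name are hypothetical placeholders; the keys (`ingress.enabled`, `ingress.className`, `ingress.annotations`, `ingress.hosts`, `ingress.tls`, `service.port`) are the ones the template actually reads:

```yaml
# Hypothetical values consumed by the ingress template above
service:
  port: 8080

ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - host: my-server.corp.xyz.com
      paths:
        - /proxy-service-dev(/|$)(.*)
  tls:
    - hosts:
        - my-server.corp.xyz.com
      secretName: my-server-tls
```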
Below is the trace I get when I run curl from my laptop's cmd terminal:
*   Trying 10.210.228.31...
* TCP_NODELAY set
* Connected to my-server.corp.xyz.com port 443 (#0)
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 1/3)
* schannel: disabled server certificate revocation checks
* schannel: verifyhost setting prevents Schannel from comparing the supplied target name with the subject names in server certificates.
* schannel: sending initial handshake data: sending 186 bytes...
* schannel: sent initial handshake data: sent 186 bytes
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 2/3)
* schannel: failed to receive handshake, need more data
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 2/3)
* schannel: encrypted data got 4096
* schannel: encrypted data buffer: offset 4096 length 4096
* schannel: encrypted data length: 4022
* schannel: encrypted data buffer: offset 4022 length 4096
* schannel: received incomplete message, need more data
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 2/3)
* schannel: encrypted data got 957
* schannel: encrypted data buffer: offset 4979 length 5046
* schannel: sending next handshake data: sending 93 bytes...
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 2/3)
* schannel: encrypted data got 274
* schannel: encrypted data buffer: offset 274 length 5046
* schannel: SSL/TLS handshake complete
* schannel: SSL/TLS connection with my-server.corp.xyz.com port 443 (step 3/3)
* schannel: stored credential handle in session cache
> POST /api-htmplortal/authenticate HTTP/1.1
> Host: my-server.corp.xyz.com
> User-Agent: curl/7.55.1
> Accept: */*
> Content-type: application/json
> Content-Length: 73
>
* upload completely sent off: 73 out of 73 bytes
* schannel: client wants to read 102400 bytes
* schannel: encdata_buffer resized 103424
* schannel: encrypted data buffer: offset 0 length 103424
* schannel: encrypted data got 469
* schannel: encrypted data buffer: offset 469 length 103424
* schannel: decrypted data length: 440
* schannel: decrypted data added: 440
* schannel: decrypted data cached: offset 440 length 102400
* schannel: encrypted data buffer: offset 0 length 103424
* schannel: decrypted data buffer: offset 440 length 102400
* schannel: schannel_recv cleanup
* schannel: decrypted data returned 440
* schannel: decrypted data buffer: offset 0 length 102400
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: nginx/1.17.10
< Date: Fri, 18 Jun 2021 14:09:45 GMT
< Content-Type: text/html
< Content-Length: 198
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains
<
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.17.10</center>
</body>
</html>
* Connection #0 to host my-server.corp.xyz.com left intact
Answer
It was actually a configuration issue. I was running several different pods, each with its own Helm chart. To spin them all up, I had an umbrella chart (a "helm of helms") that included all of the individual charts as subcharts. The ingress in the question is the one from the parent/main chart.
To install the parent chart, I had a shell script that supplied environment-specific properties to the subcharts.
Now, I had two subcharts, old-service and new-service, and my requirement was to spin up only one of them based on a flag value. The new service was an upgraded version of the old one, so to avoid changing the UI endpoint used to access the services, I kept the ingress backend path for both subcharts the same. My thinking was that since only one pod would actually run (my shell script set replicaCount to 0 for the service that should stay down), the shared path would be harmless:
--set old-service.ingress.hosts[0].paths[0]="/proxy-service-${NAMESPACE}(/|$)(.*)" \
--set new-service.ingress.hosts[0].paths[0]="/proxy-service-${NAMESPACE}(/|$)(.*)"
But it turns out this cannot be done. The ingress controller routes based on the Ingress configuration alone, not on whether a backing pod is actually running, so both subcharts registered the same host and path and requests could end up at the Service with no ready endpoints, producing the 503.
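Concretely, the umbrella chart rendered something like the following two Ingress resources (names and values illustrative, not the exact rendered output). Both claim the same host and path, and ingress-nginx resolves that conflict from configuration alone, regardless of which Deployment has replicas:

```yaml
# Illustrative rendering of the two subcharts' conflicting ingresses
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: old-service
spec:
  rules:
    - host: my-server.corp.xyz.com
      http:
        paths:
          - path: /proxy-service-dev(/|$)(.*)
            backend:
              serviceName: old-service   # replicaCount 0 -> no endpoints -> 503
              servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: new-service
spec:
  rules:
    - host: my-server.corp.xyz.com      # same host...
      http:
        paths:
          - path: /proxy-service-dev(/|$)(.*)   # ...and same path: conflict
            backend:
              serviceName: new-service
              servicePort: 8080
```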
To resolve it, I made the backend paths for new-service and old-service different, and passed a dynamic endpoint property to the UI subchart pointing at whichever service the flag value was going to bring up:
--set ui.proxyUrl=https://${LOAD_BALANCER}/${proxyUrl}-${NAMESPACE}
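The flag handling in the wrapper script might be sketched like this. The variable names (`USE_NEW_SERVICE`, `proxyUrl`) and default values are assumptions for illustration, not taken from the original script:

```shell
#!/bin/sh
# Hypothetical sketch of the wrapper script's flag handling:
# pick which service's distinct ingress path the UI should target.
USE_NEW_SERVICE="${USE_NEW_SERVICE:-true}"
NAMESPACE="${NAMESPACE:-dev}"
LOAD_BALANCER="${LOAD_BALANCER:-my-server.corp.xyz.com}"

if [ "$USE_NEW_SERVICE" = "true" ]; then
  proxyUrl="proxy-new-service"
else
  proxyUrl="proxy-old-service"
fi

# This URL would then be passed via --set ui.proxyUrl=... to helm install.
echo "https://${LOAD_BALANCER}/${proxyUrl}-${NAMESPACE}"
```

With the defaults above, the script prints the URL for the new service; flipping `USE_NEW_SERVICE` to anything other than `true` selects the old service's path instead.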