The default health check: the Pod is restarted whenever its container process exits with a non-zero code.
Create a Pod that simulates failure by exiting with code 1;
the Pod is then restarted repeatedly (RESTARTS reaches 3 and keeps climbing).
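A minimal manifest for this experiment might look like the following sketch (the Pod name and busybox image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: healthcheck-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: healthcheck
    image: busybox
    # Sleep briefly, then exit with a non-zero code to simulate failure.
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 1
```

Because the container exits with code 1 and restartPolicy is OnFailure, kubelet keeps restarting it; `kubectl get pod healthcheck-demo` shows the RESTARTS counter growing.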
Liveness:
the process creates a healthy file and deletes it after 30 s.
The livenessProbe starts checking whether the file exists 10 s after the container starts (initialDelaySeconds), then every 5 s (periodSeconds);
the Pod is restarted once the probe has failed 3 consecutive times.
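A sketch of such a probe, assuming the file lives at /tmp/healthy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: liveness
    image: busybox
    # Create the healthy file, keep it for 30 s, then delete it and idle.
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command: [cat, /tmp/healthy]   # probe succeeds when exit code is 0
      initialDelaySeconds: 10          # first probe 10 s after start
      periodSeconds: 5                 # probe every 5 s
```

After the file is removed, `cat` starts failing; once it has failed 3 times in a row (the default failureThreshold), kubelet restarts the container.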
Readiness:
replace the livenessProbe with a readinessProbe; the manifest is otherwise identical.
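Under the same assumptions (healthy file at /tmp/healthy), only the probe key in the container spec changes:

```yaml
    # In the container spec, swap livenessProbe for readinessProbe:
    readinessProbe:
      exec:
        command: [cat, /tmp/healthy]   # assumed file path
      initialDelaySeconds: 10
      periodSeconds: 5
```

A failing readiness probe does not restart the container; it only marks the Pod as not ready.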
Pod readiness over time:
1. READY is false when the Pod is first created;
2. after initialDelaySeconds, once the probe succeeds, READY becomes true;
3. after the readiness probe fails 3 consecutive times, READY returns to false.
kubectl describe pod readiness:
the Events section shows the probe-failure log.
Comparison between liveness and readiness:
both kinds of probe judge success the same way, e.g. an exec probe succeeds when the command run in the started container returns 0.
They are independent of each other:
1. the liveness probe determines whether a container needs to be restarted, achieving self-healing;
2. the readiness probe determines whether a container is ready to serve requests.
We can use Readiness to support Scale Up:
a readinessProbe (such as an httpGet probe against ip:port/path) determines whether the Pod is ready; only ready Pods are added to the Service's backend pool to serve client requests.
For multi-replica applications, when a Scale Up operation is performed, the new replica is added as a backend to the Service's load balancer and processes client requests alongside the existing replicas. Since application startup usually requires a preparation phase, such as loading cached data or connecting to a database, it can take a while before the container is actually able to serve. We can use the readiness probe to determine whether the container is ready and avoid sending requests to a backend that is not.
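An httpGet readiness probe for a multi-replica Deployment might be sketched as follows; the Deployment name, image, port, and /healthy path are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-demo
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web
        image: myhttpserver:1.0     # assumed image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthy          # success = HTTP status 200-399
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
```

A Service only routes traffic to Pods whose readiness probe is passing, so a freshly scaled-up replica receives requests only once /healthy starts answering.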
We can use Readiness for a rolling upgrade:
if the Health Check is configured correctly, a new replica is added to the Service only after it passes the readiness probe; if the new replica never passes, the existing replicas are not all replaced and the service still runs normally.
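How many old replicas may be taken down while new ones are still unready can additionally be bounded with the Deployment's rollingUpdate strategy; a fragment sketch (the numbers are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra replica above the desired count
      maxUnavailable: 1  # at most 1 replica below the desired count
```

Together with the readiness probe, this guarantees a minimum number of serving replicas throughout the upgrade.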