Background
I have recently been working through the microservice log monitoring section of teacher Bobo's course; this post is simply a hands-on run-through of that material, with thanks to Bobo for sharing it. It assumes a minikube environment is already set up.
A picture is worth a thousand words
In production, the Kafka queue shown here acts as a buffer layer in front of the ES cluster.
Steps
namespace
The namespace configuration file, ns.yml:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
Create the namespace:
kubectl apply -f ns.yml
namespace/logging created
Check it:
kubectl get ns
NAME STATUS AGE
default Active 8d
kube-node-lease Active 8d
kube-public Active 8d
kube-system Active 8d
logging Active 2m36s
ES
The Elasticsearch configuration file, elastic.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    service: elasticsearch
spec:
  type: NodePort
  selector:
    component: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 31200
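With a hard 2Gi memory limit on the container, it is also worth capping the JVM heap so Elasticsearch is not OOM-killed. A hedged sketch of an extra entry for the elasticsearch container's env list above (the heap values are assumptions; the usual guidance is roughly half the container's memory limit):

```
# Hypothetical addition to the elasticsearch container's env list:
# cap the JVM heap at about half of the 2Gi limit (values are assumptions).
- name: ES_JAVA_OPTS
  value: "-Xms1g -Xmx1g"
```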
Create the ES Deployment and Service:
kubectl apply -f elastic.yml
deployment.apps/elasticsearch created
service/elasticsearch created
Check the ES resources:
kubectl get all -n logging
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-c99467b8d-h7p97 1/1 Running 0 63s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch NodePort 10.104.78.39 <none> 9200:31200/TCP 63s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch 1/1 1 1 63s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-c99467b8d 1 1 1 63s
Check the ES pod's logs directly:
kubectl logs pod/elasticsearch-c99467b8d-h7p97 -n logging
Access ES directly to verify it:
minikube service list
|-------------|---------------|--------------|----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|---------------|--------------|----------------------------|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| logging | elasticsearch | 9200 | http://192.168.64.36:31200 |
|-------------|---------------|--------------|----------------------------|
Visiting http://192.168.64.36:31200 returns:
{
  "name" : "elasticsearch-c99467b8d-h7p97",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "y19J1SdxQj6YwIBNple0iQ",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
ES is running normally.
kibana
The configuration file, kibana.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.6.2
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 31601
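Kibana can take a while to come up, so a readiness probe keeps the Service from routing to it too early. A hedged sketch of a probe for the kibana container above, using kibana's /api/status endpoint (the timing values are assumptions):

```
# Hypothetical readinessProbe for the kibana container above
# (delay and period values are assumptions, tune for your cluster).
readinessProbe:
  httpGet:
    path: /api/status
    port: 5601
  initialDelaySeconds: 30
  periodSeconds: 10
```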
Create the kibana Deployment and Service:
kubectl apply -f kibana.yml
deployment.apps/kibana created
service/kibana created
Check the kibana resources:
kubectl get all -n logging
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-c99467b8d-h7p97 1/1 Running 0 8m35s
pod/kibana-86cdf4b8fd-cv8jw 1/1 Running 0 2m43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch NodePort 10.104.78.39 <none> 9200:31200/TCP 8m35s
service/kibana NodePort 10.108.8.205 <none> 5601:31601/TCP 2m43s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch 1/1 1 1 8m35s
deployment.apps/kibana 1/1 1 1 2m43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-c99467b8d 1 1 1 8m35s
replicaset.apps/kibana-86cdf4b8fd 1 1 1 2m43s
Check the kibana pod's logs directly:
kubectl logs pod/kibana-86cdf4b8fd-cv8jw -n logging
Check the kibana service via minikube:
minikube service list
|-------------|---------------|--------------|----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|---------------|--------------|----------------------------|
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| logging | elasticsearch | 9200 | http://192.168.64.36:31200 |
| logging | kibana | 5601 | http://192.168.64.36:31601 |
|-------------|---------------|--------------|----------------------------|
Then open http://192.168.64.36:31601 in a browser; the UI looks like this:
fluentd-rbac
Here we set up role-based access control for fluentd with fluentd-rbac.yml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
Apply the fluentd RBAC:
kubectl apply -f fluentd-rbac.yml
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created
Verify the fluentd RBAC objects:
ServiceAccount:
kubectl get ServiceAccount -n kube-system | grep fluentd
fluentd 1 7m38s
ClusterRole:
kubectl get ClusterRole -n kube-system | grep fluentd
fluentd 2020-04-17T09:01:45Z
ClusterRoleBinding:
kubectl get ClusterRoleBinding | grep fluentd
fluentd ClusterRole/fluentd 10m
fluentd
The DaemonSet configuration file, fluentd-daemonset.yml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.logging"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_UID
          value: "0"
        - name: FLUENTD_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
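Note that FLUENT_ELASTICSEARCH_HOST is set to "elasticsearch.logging" so fluentd in kube-system can resolve the ES Service across namespaces. If you want indices named something other than the image's default "logstash" prefix, the fluent/fluentd-kubernetes-daemonset image exposes an environment variable for that; a hedged sketch of an extra env entry for the fluentd container above (the prefix value is an assumption):

```
# Hypothetical addition to the fluentd container's env list:
# override the default "logstash" index prefix
# (per the fluent/fluentd-kubernetes-daemonset image's env options).
- name: FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX
  value: "k8s-logs"
```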
Create the DaemonSet:
kubectl apply -f fluentd-daemonset.yml
daemonset.apps/fluentd created
Check that fluentd is running:
kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/coredns-66bff467f8-9nkm5 1/1 Running 4 8d
pod/coredns-66bff467f8-w482k 1/1 Running 4 8d
pod/etcd-minikube 1/1 Running 4 8d
pod/fluentd-t9fc8 1/1 Running 0 18s
pod/kube-apiserver-minikube 1/1 Running 4 8d
pod/kube-controller-manager-minikube 1/1 Running 4 8d
pod/kube-proxy-clf8q 1/1 Running 4 8d
pod/kube-scheduler-minikube 1/1 Running 4 8d
pod/storage-provisioner 1/1 Running 6 8d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 8d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/fluentd 1 1 1 1 1 <none> 18s
daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 8d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/coredns 2/2 2 2 8d
NAME DESIRED CURRENT READY AGE
replicaset.apps/coredns-66bff467f8 2 2 2 8d
The fluentd pod and DaemonSet are both visible here.
View fluentd's logs:
kubectl logs pod/fluentd-t9fc8 -n kube-system
At this point the EFK stack is basically ready. Next, we'll view the Spring Boot logs through kibana.
Spring Boot Hello World
# Create a project directory
mkdir springboot-k8s
cd springboot-k8s
curl https://start.spring.io/starter.tgz -d dependencies=webflux,actuator | tar -xzvf -
# Build the jar
./mvnw clean && ./mvnw package
# Check the jar
ls -l target/*.jar
-rw-r--r-- 1 zhangyalin staff 21329326 Apr 21 10:02 target/demo-0.0.1-SNAPSHOT.jar
# Run the Spring Boot app
java -jar target/demo-0.0.1-SNAPSHOT.jar
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.6.RELEASE)
2020-04-21 10:06:25.213 INFO 16007 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication v0.0.1-SNAPSHOT on zylMBP with PID 16007 (/Users/zhangyalin/Downloads/springboot-k8s/target/demo-0.0.1-SNAPSHOT.jar started by zhangyalin in /Users/zhangyalin/Downloads/springboot-k8s)
2020-04-21 10:06:25.217 INFO 16007 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to default profiles: default
2020-04-21 10:06:26.429 INFO 16007 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2020-04-21 10:06:26.995 INFO 16007 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
2020-04-21 10:06:27.003 INFO 16007 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 2.22 seconds (JVM running for 2.665)
Verify that the Spring Boot app responds:
That confirms the app works locally. Next, build a Docker image and push it to Docker Hub.
# Create the Dockerfile
vim Dockerfile
Its contents:
FROM openjdk:8-jdk-alpine AS builder
WORKDIR target/dependency
ARG APPJAR=target/*.jar
COPY ${APPJAR} app.jar
RUN jar -xf ./app.jar

FROM openjdk:8-jre-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY --from=builder ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=builder ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=builder ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.example.demo.DemoApplication"]
Build the image:
docker build -t fxtxz2/springbootdemo .
# Inspect the image
docker images | grep fxtxz2/springbootdemo
fxtxz2/springbootdemo latest 2a3b8c03d281 38 seconds ago 106MB
# Test the image
docker run -p 8080:8080 fxtxz2/springbootdemo
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.6.RELEASE)
2020-04-21 02:19:40.783 INFO 1 --- [ main] com.example.demo.DemoApplication : Starting DemoApplication on a39d915f31b0 with PID 1 (/app started by root in /)
2020-04-21 02:19:40.791 INFO 1 --- [ main] com.example.demo.DemoApplication : No active profile set, falling back to default profiles: default
2020-04-21 02:19:43.472 INFO 1 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2020-04-21 02:19:44.793 INFO 1 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080
2020-04-21 02:19:44.810 INFO 1 --- [ main] com.example.demo.DemoApplication : Started DemoApplication in 4.968 seconds (JVM running for 6.1)
Check that the image runs correctly in Docker:
# Hit the actuator health endpoint
curl localhost:8080/actuator/health
{"status":"UP"}%
# Push the image to Docker Hub
docker login
Authenticating with existing credentials...
Login Succeeded
docker push fxtxz2/springbootdemo
Configure k8s
# Generate the Deployment manifest
kubectl create deployment demo --image=fxtxz2/springbootdemo --dry-run=client -o=yaml > deployment.yaml
echo --- >> deployment.yaml
# Generate the Service manifest and append it
kubectl create service nodeport demo --tcp=8080:8080 --dry-run=client -o=yaml >> deployment.yaml
The resulting configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo
    spec:
      containers:
      - image: fxtxz2/springbootdemo
        name: springbootdemo
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: demo
  type: NodePort
status:
  loadBalancer: {}
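Since the app already exposes /actuator/health, the Deployment can reuse it for liveness and readiness checks. A hedged sketch of probes for the springbootdemo container above (the timing values are assumptions):

```
# Hypothetical probes for the springbootdemo container above,
# reusing the actuator health endpoint (delays are assumptions).
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 20
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 10
```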
Deploy the Spring Boot app to k8s:
kubectl apply -f deployment.yaml
deployment.apps/demo created
service/demo created
Check the rollout:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-7654c476f5-tfkgk 0/1 ContainerCreating 0 6s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.105.17.102 <none> 8080:31019/TCP 6s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 0/1 1 0 6s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-7654c476f5 1 1 0 6s
List the minikube services:
minikube service list
|-------------|---------------|----------------|----------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|---------------|----------------|----------------------------|
| default | demo | 8080-8080/8080 | http://192.168.64.36:31019 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
| logging | elasticsearch | 9200 | http://192.168.64.36:31200 |
| logging | kibana | 5601 | http://192.168.64.36:31601 |
|-------------|---------------|----------------|----------------------------|
Check the Spring Boot app running in the minikube cluster:
curl http://192.168.64.36:31019/actuator/health
{"status":"UP"}%
Viewing the logs in kibana
Open the kibana UI: http://192.168.64.36:31601.
First create an index pattern in kibana, then query the Spring Boot app's logs using the k8s selector and label mechanism.
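With the DaemonSet's default settings, fluentd writes daily indices with a "logstash" prefix, so an index pattern like logstash-* should match them, and fluentd's kubernetes metadata filter attaches each pod's labels to every record. Assuming those defaults, a KQL query in the Discover search bar such as the following narrows the view to the demo pods (the field name depends on the metadata filter's output):

```
kubernetes.labels.app : "demo"
```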
A Spring Boot log query in kibana looks like this:
Cleanup
kubectl delete svc --all
kubectl delete deploy --all
kubectl delete svc --all -n logging
kubectl delete deploy --all -n logging
kubectl delete ns logging
kubectl delete daemonset fluentd -n kube-system
minikube stop