Installing ELK (Elasticsearch, Logstash, Kibana) on Kubernetes 1.5

Installing this stack is fairly painful. There are plenty of articles online, but they are either impossibly complicated or so simplistic that you finish reading with no idea what to actually do. And after painstakingly following them, the result often doesn't work anyway.


As before, let me first list the images and versions used:

 

index.tenxcloud.com/docker_library/elasticsearch:2.3.0

index.tenxcloud.com/docker_library/kibana:latest

pires/logstash
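It helps to pre-pull these on every node so the pods don't stall on image downloads, for example:

docker pull index.tenxcloud.com/docker_library/elasticsearch:2.3.0
docker pull index.tenxcloud.com/docker_library/kibana:latest
docker pull pires/logstash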



##1: Install the Elasticsearch cluster

Contents of es-master.yaml:

[root@localhost github]# cat es-master.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es-master
  labels:
    component: elasticsearch
    role: master
spec:
  template:
    metadata:
      labels:
        component: elasticsearch
        role: master
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "sysctl",
            "image": "busybox",
            "imagePullPolicy": "IfNotPresent",
            "command": ["sysctl", "-w", "vm.max_map_count=262144"],
            "securityContext": {
              "privileged": true
            }
          }
        ]'
    spec:
      containers:
      - name: es-master
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
        image: index.tenxcloud.com/docker_library/elasticsearch:2.3.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_NAME
          value: "myesdb"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        emptyDir:
          medium: ""
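
The init-container annotation runs sysctl -w vm.max_map_count=262144 in privileged mode on the host before Elasticsearch starts, since Elasticsearch wants a high mmap count for its index files. Once the pod is running you can confirm the setting took effect, e.g. (the pod name will differ in your cluster):

kubectl exec es-master-1554717905-pg6tv -- cat /proc/sys/vm/max_map_count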

Contents of es-client.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es-client
  labels:
    component: elasticsearch
    role: client
spec:
  template:
    metadata:
      labels:
        component: elasticsearch
        role: client
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "sysctl",
            "image": "busybox",
            "imagePullPolicy": "IfNotPresent",
            "command": ["sysctl", "-w", "vm.max_map_count=262144"],
            "securityContext": {
              "privileged": true
            }
          }
        ]'
    spec:
      containers:
      - name: es-client
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
        image: index.tenxcloud.com/docker_library/elasticsearch:2.3.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_NAME
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        emptyDir:
          medium: ""


Contents of es-data.yaml:

[root@localhost github]# cat es-data.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "sysctl",
            "image": "busybox",
            "imagePullPolicy": "IfNotPresent",
            "command": ["sysctl", "-w", "vm.max_map_count=262144"],
            "securityContext": {
              "privileged": true
            }
          }
        ]'
    spec:
      containers:
      - name: es-data
        securityContext:
          privileged: true
          capabilities:
            add:
            - IPC_LOCK
        image: index.tenxcloud.com/docker_library/elasticsearch:2.3.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: CLUSTER_NAME
          value: "myesdb"
        - name: NODE_MASTER
          value: "false"
        - name: NODE_INGEST
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        emptyDir:
          medium: ""



Contents of es-discovery-svc.yaml:

[root@localhost github]# cat es-discovery-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
  labels:
    component: elasticsearch
    role: master
spec:
  selector:
    component: elasticsearch
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
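
This service gives the master a stable DNS name (elasticsearch-discovery) for the other nodes to use in transport-level discovery. A quick sanity check is to confirm that it selects the master pod:

kubectl get endpoints elasticsearch-discovery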


Contents of es-svc.yaml:

[root@localhost github]# cat es-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
    role: client
spec:
  type: LoadBalancer
  selector:
    component: elasticsearch
    role: client
  ports:
  - name: http
    port: 9200
    protocol: TCP
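
Since this service is type: LoadBalancer, you can check which cluster IP and external ports it was assigned:

kubectl get svc elasticsearch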


Contents of es-svc-1.yaml:

[root@localhost github]# cat es-svc-1.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
    kubernetes.io/name: "elasticsearch"
spec:
  type: ExternalName
  externalName: elasticsearch.default.svc.cluster.local
  ports:
    - port: 9200
      targetPort: 9200
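
This ExternalName service exists because the fluentd-elasticsearch DaemonSet installed in section 3 ships with a configuration that talks to a host named elasticsearch-logging (visible in its success log in section 4), while our client service lives in the default namespace. You can verify the alias from any pod, for example the busybox pod that shows up in the pod listing below (a sketch; busybox's nslookup resolves through kube-dns):

kubectl exec busybox -- nslookup elasticsearch-logging.kube-system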


Once these files are written, create everything with kubectl create -f:


kubectl create -f es-client.yaml -f es-data.yaml -f es-discovery-svc.yaml -f es-master.yaml -f es-svc-1.yaml -f es-svc.yaml


When that's done, check the pods:

kubectl get pods

[root@localhost github]# kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
busybox                      1/1       Running   2          2h
es-client-1983128803-k0g83   1/1       Running   0          2h
es-data-4107927351-f621b     1/1       Running   0          2h
es-master-1554717905-pg6tv   1/1       Running   0          2h
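
You can also exercise the HTTP endpoint exposed by the client node from inside the cluster, for example through the same busybox pod (a sketch; wget is built into busybox):

kubectl exec busybox -- wget -qO- "http://elasticsearch.default.svc.cluster.local:9200/_cluster/health?pretty"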

Then check the master's logs:

kubectl logs es-master-1554717905-pg6tv


[root@localhost github]# kubectl logs es-master-1554717905-pg6tv
[2016-12-28 04:27:01,727][INFO ][node                     ] [Vindicator] version[2.3.0], pid[1], build[8371be8/2016-03-29T07:54:48Z]
[2016-12-28 04:27:01,727][INFO ][node                     ] [Vindicator] initializing ...
[2016-12-28 04:27:02,154][INFO ][plugins                  ] [Vindicator] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-12-28 04:27:02,174][INFO ][env                      ] [Vindicator] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [92.2gb], net total_space [99.9gb], spins? [possibly], types [xfs]
[2016-12-28 04:27:02,174][INFO ][env                      ] [Vindicator] heap size [247.5mb], compressed ordinary object pointers [true]
[2016-12-28 04:27:03,441][INFO ][node                     ] [Vindicator] initialized
[2016-12-28 04:27:03,441][INFO ][node                     ] [Vindicator] starting ...
[2016-12-28 04:27:03,493][INFO ][transport                ] [Vindicator] publish_address {10.32.0.4:9300}, bound_addresses {[::]:9300}
[2016-12-28 04:27:03,497][INFO ][discovery                ] [Vindicator] elasticsearch/x5bP8CfuTNOjId9FnM0EWQ
[2016-12-28 04:27:06,538][INFO ][cluster.service          ] [Vindicator] new_master {Vindicator}{x5bP8CfuTNOjId9FnM0EWQ}{10.32.0.4}{10.32.0.4:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-12-28 04:27:06,566][INFO ][http                     ] [Vindicator] publish_address {10.32.0.4:9200}, bound_addresses {[::]:9200}
[2016-12-28 04:27:06,566][INFO ][node                     ] [Vindicator] started
[2016-12-28 04:27:06,591][INFO ][gateway                  ] [Vindicator] recovered [0] indices into cluster_state

And the data node's:

kubectl logs es-data-4107927351-f621b

[root@localhost github]# kubectl logs es-data-4107927351-f621b
[2016-12-28 04:26:23,791][INFO ][node                     ] [Cassandra Nova] version[2.3.0], pid[1], build[8371be8/2016-03-29T07:54:48Z]
[2016-12-28 04:26:23,791][INFO ][node                     ] [Cassandra Nova] initializing ...
[2016-12-28 04:26:24,219][INFO ][plugins                  ] [Cassandra Nova] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-12-28 04:26:24,240][INFO ][env                      ] [Cassandra Nova] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [42gb], net total_space [50gb], spins? [possibly], types [xfs]
[2016-12-28 04:26:24,240][INFO ][env                      ] [Cassandra Nova] heap size [247.5mb], compressed ordinary object pointers [true]
[2016-12-28 04:26:25,537][INFO ][node                     ] [Cassandra Nova] initialized
[2016-12-28 04:26:25,537][INFO ][node                     ] [Cassandra Nova] starting ...
[2016-12-28 04:26:25,585][INFO ][transport                ] [Cassandra Nova] publish_address {10.46.0.6:9300}, bound_addresses {[::]:9300}
[2016-12-28 04:26:25,589][INFO ][discovery                ] [Cassandra Nova] elasticsearch/hcnJzTAZRBK517ZsLt2LQw
[2016-12-28 04:26:28,626][INFO ][cluster.service          ] [Cassandra Nova] new_master {Cassandra Nova}{hcnJzTAZRBK517ZsLt2LQw}{10.46.0.6}{10.46.0.6:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-12-28 04:26:28,655][INFO ][http                     ] [Cassandra Nova] publish_address {10.46.0.6:9200}, bound_addresses {[::]:9200}
[2016-12-28 04:26:28,655][INFO ][node                     ] [Cassandra Nova] started
[2016-12-28 04:26:28,678][INFO ][gateway                  ] [Cassandra Nova] recovered [0] indices into cluster_state



##2: Install Kibana and Logstash


index.tenxcloud.com/docker_library/kibana:latest (the newest tag as of 2016-12-28; other versions may work too)

pires/logstash


Now for the YAML files.


Contents of kibana-controller.yaml:

[root@localhost elk]# cat kibana-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kibana
  namespace: default
  labels:
    component: elk
    role: kibana
spec:
  replicas: 1
  selector:
    component: elk
    role: kibana
  template:
    metadata:
      labels:
        component: elk
        role: kibana
    spec:
      serviceAccount: elk
      containers:
      - name: kibana
        image: index.tenxcloud.com/docker_library/kibana:latest
        env:
        - name: KIBANA_ES_URL
          value: "http://elasticsearch.default.svc.cluster.local:9200"
        - name: KUBERNETES_TRUST_CERT
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP


Contents of kibana-service.yaml:

[root@localhost elk]# cat kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    component: elk
    role: kibana
spec:
  selector:
    component: elk
    role: kibana
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30080
    targetPort: 5601
    protocol: TCP
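
Because the service is type: NodePort with nodePort: 30080, Kibana should be reachable on port 30080 of any node once its pod is running. For example (the address below is a placeholder for one of your own node IPs):

curl -I http://192.168.0.10:30080/

Or simply open http://<node-ip>:30080 in a browser.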


Contents of logstash-controller.yaml:

[root@localhost elk]# cat logstash-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: logstash
  namespace: default
  labels:
    component: elk
    role: logstash
spec:
  replicas: 1
  selector:
    component: elk
    role: logstash
  template:
    metadata:
      labels:
        component: elk
        role: logstash
    spec:
      serviceAccount: elk
      containers:
      - name: logstash
        image: pires/logstash
        env:
        - name: KUBERNETES_TRUST_CERT
          value: "true"
        ports:
        - containerPort: 5043
          name: lumberjack
          protocol: TCP
        volumeMounts:
        - mountPath: /certs
          name: certs
      volumes:
      - name: storage
        emptyDir:
          medium: ""
      - name: certs
        hostPath:
          path: "/tmp"
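
One thing to note in this manifest: the container mounts /certs from a hostPath of /tmp, and the port is named lumberjack, which suggests the image expects TLS certificates there for a logstash-forwarder style input (an assumption on my part; check the pires/logstash documentation for the exact file names). If so, a self-signed pair could be dropped into /tmp on the node, e.g.:

# Hypothetical file names; match them to whatever the image's logstash config expects
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /tmp/logstash-forwarder.key \
  -out /tmp/logstash-forwarder.crt \
  -subj "/CN=logstash"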


Contents of logstash-service.yaml:

[root@localhost elk]# cat logstash-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: default
  labels:
    component: elk
    role: logstash
spec:
  selector:
    component: elk
    role: logstash
  ports:
  - name: lumberjack
    port: 5043
    protocol: TCP


Contents of service-account.yaml:


[root@localhost elk]# cat service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elk


Next, install them:

kubectl create -f service-account.yaml -f logstash-service.yaml -f logstash-controller.yaml -f kibana-service.yaml -f kibana-controller.yaml



##3: Install fluentd-elasticsearch

gcr.io/google-containers/fluentd-elasticsearch:1.20

As before, pull the image first, ideally on every node, for example:
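docker pull gcr.io/google-containers/fluentd-elasticsearch:1.20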

Contents of fluentd.yaml:

[root@localhost elk]# cat fluentd.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers


kubectl create -f fluentd.yaml

Installation complete.
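
Because this is a DaemonSet, Kubernetes schedules one fluentd pod per node; you can confirm that the desired and current counts match with:

kubectl --namespace=kube-system get daemonset fluentd-elasticsearch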


##4: Verify

kubectl get pods --all-namespaces


[root@localhost elk]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       busybox                                 1/1       Running   3          3h
default       es-client-1983128803-k0g83              1/1       Running   0          2h
default       es-data-4107927351-f621b                1/1       Running   0          2h
default       es-master-1554717905-pg6tv              1/1       Running   0          2h
default       kibana-39wgw                            1/1       Running   0          2h
default       logstash-18k38                          1/1       Running   0          2h
kube-system   dummy-2088944543-xjj21                  1/1       Running   0          1d
kube-system   etcd-centos-master                      1/1       Running   0          1d
kube-system   fluentd-elasticsearch-1mln9             1/1       Running   0          2h
kube-system   fluentd-elasticsearch-qrl81             1/1       Running   0          2h
kube-system   fluentd-elasticsearch-wkw0n             1/1       Running   0          2h
kube-system   heapster-2193675300-j1jxn               1/1       Running   0          1d
kube-system   kube-apiserver-centos-master            1/1       Running   0          1d
kube-system   kube-controller-manager-centos-master   1/1       Running   0          1d
kube-system   kube-discovery-1769846148-c45gd         1/1       Running   0          1d
kube-system   kube-dns-2924299975-96xms               4/4       Running   0          1d
kube-system   kube-proxy-33lsn                        1/1       Running   0          1d
kube-system   kube-proxy-jnz6q                        1/1       Running   0          1d
kube-system   kube-proxy-vfql2                        1/1       Running   0          1d
kube-system   kube-scheduler-centos-master            1/1       Running   0          1d
kube-system   kubernetes-dashboard-3000605155-8mxgz   1/1       Running   0          1d
kube-system   monitoring-grafana-810108360-h92v7      1/1       Running   0          1d
kube-system   monitoring-influxdb-3065341217-q2445    1/1       Running   0          1d
kube-system   weave-net-k5tlz                         2/2       Running   0          1d
kube-system   weave-net-q3n89                         2/2       Running   0          1d
kube-system   weave-net-x57k7                         2/2       Running   0          1d


Then check the logs of one of the fluentd pods:

kubectl --namespace=kube-system logs fluentd-elasticsearch-1mln9


A successful connection looks like this:

2016-12-28 05:48:08 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"elasticsearch-logging", :port=>9200, :scheme=>"http"}
2016-12-28 05:48:08 +0000 [warn]: retry succeeded. plugin_id="object:3fa39c8a7e2c"
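
To confirm that logs are actually being indexed, list the indices through the client service; the fluentd-elasticsearch image writes daily logstash-YYYY.MM.DD indices (a sketch, again via the busybox pod):

kubectl exec busybox -- wget -qO- "http://elasticsearch.default.svc.cluster.local:9200/_cat/indices?v"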

Update (2017-01-12): after the stack had been running for a couple of weeks, fluentd on this cluster started emitting warnings like the following:

2017-01-12 07:04:48 +0000 [warn]: suppressed same stacktrace
2017-01-12 07:04:48 +0000 [warn]: record_reformer: Fluent::BufferQueueLimitError queue size exceeds limit /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:189:in `block in emit'
2017-01-12 07:04:48 +0000 [warn]: emit transaction failed: error_class=Fluent::BufferQueueLimitError error="queue size exceeds limit" tag="var.log.containers.tomcat-apr-dm-1449051961-shb7d_default_tomcat-apr-a251826108724b4265bd92b7fcf48177d82d6527c1185818129f62d2c24b7442.log"
2017-01-12 07:04:48 +0000 [warn]: suppressed same stacktrace
2017-01-12 07:04:48 +0000 [warn]: record_reformer: Fluent::BufferQueueLimitError queue size exceeds limit /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.12/lib/fluent/buffer.rb:189:in `block in emit'
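
These warnings mean fluentd is buffering records faster than it can flush them to Elasticsearch. A common mitigation (my addition, not part of the original walkthrough) is to enlarge the output buffer in the image's td-agent configuration, which requires overriding the bundled config, for instance with a rebuilt image or a mounted file. The relevant fluentd v0.12 buffer options look like this:

# Hypothetical excerpt of the elasticsearch <match> block in td-agent.conf;
# buffer_chunk_limit / buffer_queue_limit / flush_interval are standard
# fluentd v0.12 buffered-output options.
<match **>
  type elasticsearch
  host elasticsearch-logging
  port 9200
  logstash_format true
  buffer_chunk_limit 2M
  buffer_queue_limit 32
  flush_interval 5s
</match>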

