On both hosts:
Flush the firewall rules:
[root@localhost ~]# iptables -F
[root@localhost ~]# iptables-save
Start the docker service:
[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker images
On the first VM:
Create the /htdocs directory and write some data into it:
[root@localhost ~]# mkdir /htdocs
[root@localhost ~]# echo "zhangqian" > /htdocs/index.html
[root@localhost ~]# docker run -d -p 80 --name web1 --volume /htdocs:/usr/local/apache2/htdocs httpd
[root@localhost ~]# docker run -d -p 80 --name web2 --volume /htdocs:/usr/local/apache2/htdocs httpd
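Both web1 and web2 bind-mount the same host directory, so they serve identical content. A minimal pure-shell sketch of that shared directory (using a hypothetical /tmp path as a stand-in for /htdocs, no Docker required):

```shell
# Hypothetical demo directory standing in for /htdocs:
src=/tmp/htdocs-demo
mkdir -p "$src"
echo "zhangqian" > "$src/index.html"
# web1 and web2 would both bind-mount this one host path at
# /usr/local/apache2/htdocs, so a single edit here changes
# what both containers serve:
cat "$src/index.html"
```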
Check the running containers with docker ps:
[root@localhost ~]# docker ps
Access the host IP on the mapped ports:
[root@localhost ~]# curl 192.168.1.12:49153
zhangqian
[root@localhost ~]# curl 192.168.1.12:49154
zhangqian
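The host ports 49153/49154 above are not fixed: `-p 80` with no host part tells Docker to publish container port 80 on a random ephemeral host port. Assuming the web1/web2 containers are running, the actual mapping can be queried rather than guessed:

```shell
# Print the host port Docker assigned to container port 80:
docker port web1 80
docker port web2 80
```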
On the second host:
Run the bbox1 and bbox2 containers:
[root@localhost ~]# docker run -itd --name bbox1 busybox
[root@localhost ~]# docker run -itd --name bbox2 busybox
[root@localhost ~]# docker ps
Images to pull on both hosts:
[root@localhost ~]# docker pull prom/node-exporter
[root@localhost ~]# docker pull google/cadvisor
Images to pull on the first host only:
[root@localhost ~]# docker pull prom/prometheus
[root@localhost ~]# docker pull grafana/grafana
On both hosts:
[root@localhost ~]# docker run -d -p 9100:9100 --volume /proc/:/host/proc/ --volume /sys:/host/sys --volume /:/rootfs --network host prom/node-exporter --path.procfs /host/proc --path.sysfs /host/sys --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
[root@localhost ~]# docker ps
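The --collector.filesystem.ignored-mount-points value is an ordinary extended regular expression (note `($|/)` at the end; the `($$|/)` form is docker-compose syntax, where `$$` escapes `$`, and would match a literal PID in a plain shell). A quick grep check of what the pattern matches, no exporter needed:

```shell
# Mount points the node-exporter filesystem collector should skip:
re='^/(sys|proc|dev|host|etc)($|/)'
printf '%s\n' /sys /proc/self /home /etc/hosts /data | grep -E "$re"
# matches /sys, /proc/self and /etc/hosts; /home and /data are kept
```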
Browse to port 9100 on the first host:
Do the same on the second host:
On both VMs:
[root@localhost ~]# docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --name=cadvisor --privileged --network host google/cadvisor
[root@localhost ~]# docker ps
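Since cAdvisor runs with host networking, its endpoints should answer on the host itself. A quick smoke test (assumes the cadvisor container above is up on port 8080):

```shell
# cAdvisor serves Prometheus metrics on /metrics and a web UI on /containers/:
curl -s localhost:8080/metrics | head -n 5
```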
Find the example prometheus.yml in the Prometheus getting-started guide and copy it onto the VM:
https://prometheus.io/docs/prometheus/latest/getting_started/
On the first host:
[root@localhost ~]# vi prometheus.yml
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.
  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  external_labels:
    monitor: 'codelab-monitor'

rule_files:
  # - 'prometheus.rules.yml'

scrape_configs:
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090','localhost:9100','localhost:8080','192.168.1.137:9100','192.168.1.137:8080']
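Before starting the server it is worth validating the file; the prom/prometheus image ships promtool, so a sketch like this catches YAML indentation mistakes early (same bind-mount path as the run command that follows):

```shell
# Validate prometheus.yml with the promtool bundled in the prom/prometheus image:
docker run --rm --volume /root/prometheus.yml:/etc/prometheus/prometheus.yml \
  --entrypoint promtool prom/prometheus check config /etc/prometheus/prometheus.yml
```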
[root@localhost ~]# docker run -d -p 9090:9090 --volume /root/prometheus.yml:/etc/prometheus/prometheus.yml --name prometheus --network host prom/prometheus
[root@localhost ~]# docker ps
All targets here should show state UP:
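Target state can also be checked from the shell instead of the web UI; assuming Prometheus is listening on 9090, each scraped endpoint reports its health:

```shell
# List the health of every scrape target ("up" is what we want):
curl -s localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'
```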
On the first host, start Grafana:
[root@localhost ~]# docker run -d -i -p 3000:3000 -e 'GF_SERVER_ROOT_URL=http://grafana.server.name' -e 'GF_SECURITY_ADMIN_PASSWORD=redhat' --network host grafana/grafana
[root@localhost ~]# docker ps
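A quick check that Grafana came up before opening the browser (assumes the container above is listening on port 3000):

```shell
# Grafana's health endpoint returns JSON reporting "database": "ok" when ready:
curl -s localhost:3000/api/health
```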
Prometheus monitoring in Grafana:
Click Import.
Fetch a dashboard template from the grafana.com site: (Grafana: The open observability platform | Grafana Labs)
Then come back to the Prometheus monitoring page here: