Orchestrating Prometheus monitoring containers with docker-compose

I. Usage

1. Introducing and installing docker-compose

docker-compose is a tool for defining and running multi-container Docker applications. You configure your application's services in a YAML file, then create and start all of them with a single command. docker-compose works in every environment: production, staging, development, testing, and CI workflows.
Using docker-compose is basically a three-step process:

  1. Define your application's environment in a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your application in docker-compose.yml so they can run together in an isolated environment.
  3. Run docker-compose up and the docker-compose binary starts and runs your entire application.

Install the stable release:

curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

docker-compose --version
docker-compose version 1.29.2, build 5becea4c

docker-compose up -d   #create the containers and run them in the background
docker-compose logs -f   #follow the container logs
docker-compose down  #tear everything down
docker-compose ps   #list what is currently running

2. Using docker-compose

For more detail, see the official docker-compose documentation:
https://docs.docker.com/compose/compose-file/compose-file-v3/

The compose file is as follows:

version: "3.7"
services:
  grafana:
    #replace username/repo:tag with your name and image details
    image: grafana/grafana:v6.2.5 #the image name can be customized (used together with a Dockerfile build), or it can be the name of a pulled image
    #build can be given as a string containing the path to the build context
    build:
      context: ./grafana  #the Dockerfile in this directory is picked up automatically and built into the image
    depends_on:  #start order only: docker starts these containers first but does not check whether they came up successfully; for real dependencies use something like the official wait-for-it script - worth keeping in mind
      - cadvisor
      - alertmanager
      - prometheus
    shm_size: "20gb"  #size of the container's /dev/shm shared memory (not a disk quota)
    restart: on-failure    #restart the container only if it exits with a failure (non-zero) exit code
    ports:
      - "13000:3000"
#    network_mode: "host"
    volumes:
      - "/data/opt/grafana:/var/lib/grafana"  #mount grafana data into a host directory; this keeps the data persistent and makes container logs easier to find, though you then have to clean logs up periodically
    environment:  #environment variable to set the time zone; this works
      - TZ=Asia/Shanghai
  prometheus:
    image: centos/prometheus:v2.8.1
    build: 
      context: ./prometheus
    shm_size: "20gb"
    restart: on-failure
    #command:  #docker-compose can override the container start command, useful when you need custom startup to handle container dependencies
    #  - '-config.file=/app/opt/prometheus/prometheus.yml'
    #  - '-storage.local.path=/prometheus'
    #  - '-alertmanager.url=http://alertmanager:9093'
    ports:
      - "19090:9090"
#    network_mode: "host"
    volumes:
      - "/data/opt/prometheus/data:/data/opt/prometheus/data"
    environment:  #setting the time zone via environment variable (did not work very well here)
      - TZ=Asia/Shanghai
  alertmanager:
    image: prom/alertmanager:v1.0
    build: 
      context: ./alertmanager
    shm_size: "20gb"
    restart: on-failure   
    ports:
      - "19093:9093"
    networks:
      docker:
        ipv4_address: 192.168.2.251
    volumes:
      - "/root/docker/monitor/alertmanager:/etc/alertmanager"  #mount the config files so they are easy to inspect and edit
    environment:               
      - TZ=Asia/Shanghai
  cadvisor:
    image: google/cadvisor:latest
    build: 
      context: .
    shm_size: "20gb"
    restart: on-failure   
    ports:
      - "18080:8080"
    networks:
      docker:
        ipv4_address: 192.168.2.252
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"
    environment:               
      - TZ=Asia/Shanghai
networks:
  docker:
    driver: bridge  #bridge mode
    ipam:
      driver: default
      config:
      -
        subnet: 192.168.2.0/24
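As the depends_on comment above notes, compose only orders container startup; it does not wait for a service to actually be ready. The idea behind the official wait-for-it.sh script can be sketched in a few lines of Python (host and port here are hypothetical examples, not from the compose file):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # the port accepted a connection
        except OSError:
            time.sleep(0.5)  # not ready yet; retry
    return False

if __name__ == "__main__":
    # e.g. block until Prometheus answers before doing Grafana provisioning
    ready = wait_for_port("localhost", 19090, timeout=2.0)
    print("prometheus ready:", ready)
```

Running such a check in an entrypoint script (or using wait-for-it.sh itself) closes the gap that depends_on leaves open.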

3. Prometheus configuration

Pull the Prometheus Docker image:

docker pull prom/prometheus:latest

prometheus.yml settings. File-based service discovery is mainly for larger fleets of servers: it keeps prometheus.yml readable instead of cluttered.

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  external_labels:
      monitor: 'codelab-monitor'

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ["localhost:19093"] # the address and port Alertmanager listens on, i.e. the interface Prometheus uses to reach it

# rule files: loaded once at startup, then reloaded on the evaluation_interval period
rule_files:  #Alertmanager alerting rules
  - "host_warning.yml"
  - "container_warning.yml"

scrape_configs:
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['192.168.10.120:9090']

  - job_name: 'node_exproter'   #name of this file-based discovery job; you can define multiple discovery jobs to match your own services
    file_sd_configs:
      - files:
        - file_ds/node_exproter.json
        refresh_interval: 5m #how often the files are re-read; the default is 5m

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['192.168.10.120:18080','192.168.10.190:18080']
    
  - job_name: black_exporter  #black-box probing
    metrics_path: /probe
    params:
      module: [http_2xx]
    file_sd_configs:
      - files:
        - file_ds/http_status.json
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.10.120:18080  #the host and port where blackbox_exporter runs
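The relabel_configs above rewrite each probe target in three steps: copy __address__ into the __param_target URL parameter, copy that into the instance label, and finally point __address__ at the blackbox_exporter itself. A small Python sketch of the same logic (not Prometheus code, just an illustration):

```python
def relabel_blackbox(labels: dict, blackbox_addr: str) -> dict:
    """Mimic the three relabel_configs rules used for blackbox probing."""
    out = dict(labels)
    # 1. __address__ -> __param_target (becomes ?target=... on the /probe URL)
    out["__param_target"] = out["__address__"]
    # 2. __param_target -> instance (so alerts show the probed URL, not the exporter)
    out["instance"] = out["__param_target"]
    # 3. scrape the blackbox_exporter itself instead of the original target
    out["__address__"] = blackbox_addr
    return out

result = relabel_blackbox({"__address__": "https://example.com"}, "192.168.10.120:18080")
print(result)
```

The net effect: Prometheus scrapes the exporter's /probe endpoint with the original target as a parameter, while the instance label still identifies what was probed.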

4. file_ds automatic discovery

Format of the file_ds/node_exproter.json file:

  1. Each targets list holds a single IP address or service endpoint (a URL in the case of black_exporter black-box probing).
  2. labels can hold multiple entries, including Chinese text; you can see them in the Prometheus web UI at ip:9090. Label names and values are written into the Prometheus database, and the Alertmanager alerting rules can later match on them to notify different people by email.
[
   { "targets": [ "192.168.10.120:19100" ],"labels": { "group1": "node_expoter", "service": "mysql"}},
   { "targets": [ "192.168.10.190:19100" ],"labels": { "group1": "node_expoter"}}
]
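Since file_sd targets are plain JSON, they are easy to generate or sanity-check with a small script. A hedged sketch (addresses and label values are just the examples above; the helper function is hypothetical):

```python
import json

def make_target(address: str, **labels) -> dict:
    """One file_sd entry: a single-address target list plus optional labels."""
    return {"targets": [address], "labels": labels}

entries = [
    make_target("192.168.10.120:19100", group1="node_expoter", service="mysql"),
    make_target("192.168.10.190:19100", group1="node_expoter"),
]

# Basic validation: every entry needs a non-empty "targets" list of strings.
for e in entries:
    assert e["targets"] and all(isinstance(t, str) for t in e["targets"])

print(json.dumps(entries, indent=2))
```

Write the output to file_ds/node_exproter.json and Prometheus will pick up changes on its refresh_interval without a restart.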

5. Host alerting rules

host_warning.yml holds host alerting rules collected from around the web, offered here for reference:

groups:
  - name: host_monitoring
    rules:
    # Node memory nearly full (< 20% left)
    - alert: HostOutOfMemory
      expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 20
      for: 2m
      labels:
        team: Develop
        severity: warning
      annotations:
        explain: "Node memory nearly full (< 20% left)"
        summary: Host out of memory (instance {{ $labels.instance }})
        description: "Node memory is filling up (< 20% left)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # CPU load > 80%
    - alert: HostHighCpuLoad
      expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "CPU load > 80%"
        summary: Host high CPU load (instance {{ $labels.instance }})
        description: "CPU load is > 80%\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk probably reading too much data (> 50 MB/s)
    - alert: HostUnusualDiskReadRate
      expr: sum by (instance) (rate(node_disk_read_bytes_total[2m])) / 1024 / 1024 > 50
      for: 5m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk probably reading too much data (> 50 MB/s)"
        summary: Host unusual disk read rate (instance {{ $labels.instance }})
        description: "Disk is probably reading too much data (> 50 MB/s)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk probably writing too much data (> 50 MB/s)
    - alert: HostUnusualDiskWriteRate
      expr: sum by (instance) (rate(node_disk_written_bytes_total[2m])) / 1024 / 1024 > 50
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk probably writing too much data (> 50 MB/s)"
        summary: Host unusual disk write rate (instance {{ $labels.instance }})
        description: "Disk is probably writing too much data (> 50 MB/s)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk almost full (< 10% left)
    # Please add ignored mountpoints in node_exporter parameters like
    # "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
    # Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
    - alert: HostOutOfDiskSpace
      expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10 and ON (instance, device, mountpoint) node_filesystem_readonly == 0
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk almost full (< 10% left)"
        summary: Host out of disk space (instance {{ $labels.instance }})
        description: "Disk is almost full (< 10% left)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk almost out of available inodes (< 10% left)
    - alert: HostOutOfInodes
      expr: node_filesystem_files_free{mountpoint ="/rootfs"} / node_filesystem_files{mountpoint="/rootfs"} * 100 < 10 and ON (instance, device, mountpoint) node_filesystem_readonly{mountpoint="/rootfs"} == 0
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk almost out of available inodes (< 10% left)"
        summary: Host out of inodes (instance {{ $labels.instance }})
        description: "Disk is almost running out of available inodes (< 10% left)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk latency growing (read operations > 100 ms)
    - alert: HostUnusualDiskReadLatency
      expr: rate(node_disk_read_time_seconds_total[1m]) / rate(node_disk_reads_completed_total[1m]) > 0.1 and rate(node_disk_reads_completed_total[1m]) > 0
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk latency growing (read operations > 100 ms)"
        summary: Host unusual disk read latency (instance {{ $labels.instance }})
        description: "Disk latency is growing (read operations > 100ms)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Disk latency growing (write operations > 100 ms)
    - alert: HostUnusualDiskWriteLatency
      expr: rate(node_disk_write_time_seconds_total[1m]) / rate(node_disk_writes_completed_total[1m]) > 0.1 and rate(node_disk_writes_completed_total[1m]) > 0
      for: 2m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Disk latency growing (write operations > 100 ms)"
        summary: Host unusual disk write latency (instance {{ $labels.instance }})
        description: "Disk latency is growing (write operations > 100ms)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Host network interfaces probably receiving too much data (> 100 MB/s)
    - alert: HostUnusualNetworkThroughputIn
      expr: sum by (instance) (rate(node_network_receive_bytes_total[2m])) / 1024 / 1024 > 100
      for: 5m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Host network interfaces probably receiving too much data (> 100 MB/s)"
        summary: Host unusual network throughput in (instance {{ $labels.instance }})
        description: "Host network interfaces are probably receiving too much data (> 100 MB/s)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # Host network interfaces probably sending too much data (> 100 MB/s)
    - alert: HostUnusualNetworkThroughputOut
      expr: sum by (instance) (rate(node_network_transmit_bytes_total[2m])) / 1024 / 1024 > 100
      for: 5m
      labels:
        team: Operations
        severity: warning
      annotations:
        explain: "Host network interfaces probably sending too much data (> 100 MB/s)"
        summary: Host unusual network throughput out (instance {{ $labels.instance }})
        description: "Host network interfaces are probably sending too much data (> 100 MB/s)\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"

    # conntrack entries approaching the limit
    - alert: HostConntrackLimit
      expr: node_nf_conntrack_entries / node_nf_conntrack_entries_limit > 0.8
      for: 5m
      labels:
        team: Operations
        severity: critical
      annotations:
        explain: "conntrack entries approaching the limit"
        summary: Host conntrack limit (instance {{ $labels.instance }})
        description: "The number of conntrack is approaching limit\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
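Most of the expressions above follow the same pattern: turn a counter into a per-second rate, convert units, and compare against a threshold. A rough Python sketch of how rate() and the HostUnusualDiskWriteRate arithmetic work on two counter samples (the sample values are made up for illustration):

```python
def per_second_rate(v1: float, t1: float, v2: float, t2: float) -> float:
    """Roughly what PromQL rate() does: counter delta divided by time delta."""
    return (v2 - v1) / (t2 - t1)

# Two samples of node_disk_written_bytes_total, 120 s apart (fabricated values):
start = 1_000_000_000
end = start + 120 * 60 * 1024 * 1024  # 60 MiB written per second for 120 s
bytes_rate = per_second_rate(start, 0.0, end, 120.0)

mb_per_s = bytes_rate / 1024 / 1024  # same unit conversion as in the rule
firing = mb_per_s > 50               # HostUnusualDiskWriteRate threshold
print(mb_per_s, firing)  # 60.0 True
```

The `for: 2m` clause then requires the comparison to stay true for two minutes before the alert actually fires, which filters out short bursts.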

Summary

The installation itself needs no further elaboration here; guides are everywhere online. With docker-compose, keep the following points in mind:

  1. When building a custom image from a Dockerfile, you can name the image in docker-compose.yml. For middleware services, always pin a version tag: as versions iterate, config-file syntax may change and cause unnecessary trouble.
  2. In production, plan for log persistence; it also makes later troubleshooting much easier.
  3. On networking: host network mode is recommended for prometheus and grafana. With a private bridge IP, prometheus cannot reach the host machine's node_exporter metrics, and grafana reports gateway routing errors.