Deploying Prometheus and Grafana with Docker

This post walks through deploying a Prometheus and Grafana monitoring stack with Docker: the Grafana and Prometheus containers run on the master node, and node_exporter is deployed on several nodes to collect system metrics. It covers pulling the images, writing the YAML configuration, and starting the containers, ending with working monitoring of the Linux servers and a look at the monitoring status in the Grafana UI.


Environment

Hostname    IP                 Deployed components
master      192.168.143.140    Grafana container, Prometheus container, node_exporter
node1       192.168.143.141    node_exporter
node2       192.168.143.142    node_exporter

Prometheus deployment

// Download Prometheus and copy the configuration file to the host directory that will be mounted into the container

[root@master ~]# wget https://github.com/prometheus/prometheus/releases/download/v2.32.1/prometheus-2.32.1.linux-amd64.tar.gz

[root@master ~]# tar xf prometheus-2.32.1.linux-amd64.tar.gz 
[root@master ~]# cd prometheus-2.32.1.linux-amd64/
[root@master prometheus-2.32.1.linux-amd64]# ls
console_libraries  LICENSE  prometheus      promtool
consoles           NOTICE   prometheus.yml
[root@master prometheus-2.32.1.linux-amd64]# cp prometheus.yml /opt/
[root@master ~]# cat /opt/prometheus.yml 
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]
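
// Optional sanity check: the promtool binary bundled in the tarball can validate the config before it is mounted into the container (this assumes the extracted directory is still under /root)
[root@master ~]# prometheus-2.32.1.linux-amd64/promtool check config /opt/prometheus.yml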


// Pull the prometheus image and run the container with the port and config file mapped

[root@master ~]# docker pull prom/prometheus 
// --restart always: restart policy so the container comes back automatically, including after a reboot
[root@master ~]#  docker run -d --name prometheus --restart always -p 9090:9090 -v /opt/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
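
// Optional check that the container is up and answering; Prometheus exposes a /-/healthy endpoint on the mapped port
[root@master ~]# curl -s http://192.168.143.140:9090/-/healthy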

// Configure node_exporter on each of the three hosts that are to be monitored

# wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz

# tar xf node_exporter-1.3.1.linux-amd64.tar.gz -C /usr/local/

# cd /usr/local/
# mv node_exporter-1.3.1.linux-amd64/ node_exporter

# cat > /usr/lib/systemd/system/node_exporter.service << EOF
[Unit]
Description=The node_exporter Server
After=network.target

[Service]
ExecStart=/usr/local/node_exporter/node_exporter
Restart=on-failure
RestartSec=15s
SyslogIdentifier=node_exporter

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload && systemctl enable --now node_exporter
Created symlink from /etc/systemd/system/multi-user.target.wants/node_exporter.service to /usr/lib/systemd/system/node_exporter.service.
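
// Optional check on each host: confirm node_exporter is active and serving metrics on port 9100
# systemctl is-active node_exporter
# curl -s http://localhost:9100/metrics | head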

// Modify prometheus.yml, then restart Docker

[root@master ~]# cat /opt/prometheus.yml 
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.143.140:9100"]
  - job_name: "Linux Server"  
    static_configs:                    
      - targets: 
        - 192.168.143.141:9100
        - 192.168.143.142:9100


[root@master ~]# systemctl restart docker
[root@master ~]# docker ps | grep prometheus
d874318eed49   prom/prometheus                                     "/bin/prometheus --c…"   19 minutes ago   Up 14 seconds   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   prometheus
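
// Note: restarting the Docker daemon restarts every container. A lighter-weight alternative that also picks up the edited prometheus.yml is restarting just the Prometheus container, or sending it SIGHUP (Prometheus reloads its configuration on SIGHUP). Target health can then be checked over the HTTP API instead of the web UI.
[root@master ~]# docker restart prometheus                 # or:
[root@master ~]# docker kill -s HUP prometheus             # reload the config without a restart
[root@master ~]# curl -s 'http://192.168.143.140:9090/api/v1/targets' | grep -o '"health":"[^"]*"'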

Check the status in the web UI

(Screenshots: checking the status in the Prometheus web UI at http://192.168.143.140:9090.)

Grafana setup

// Pull the grafana image and run the container with its port mapped to the host

[root@master ~]# docker pull grafana/grafana
[root@master ~]# docker run -dit --name grafana -p 3000:3000 --restart always grafana/grafana
8d66349b75034ccc5ef941326d786ec9daf98b663d83711666ca6c9442d3cb98
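
// Optional check before opening the browser: Grafana answers on /api/health once it is up. If the container is ever recreated, adding a volume on /var/lib/grafana (Grafana's data directory), e.g. -v grafana-data:/var/lib/grafana, preserves dashboards and users.
[root@master ~]# curl -s http://192.168.143.140:3000/api/health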

Configure Grafana in the web UI

(Screenshot: Grafana login page at http://192.168.143.140:3000.)
// First login: change the password here; a password of at least 8 characters mixing digits, upper/lower-case letters, and special characters is recommended.
(Screenshots: the remaining Grafana UI steps for connecting to Prometheus and viewing the monitoring status of the Linux servers.)
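
// As an alternative to clicking through the UI, the Prometheus data source can be provisioned from a file. This is only a sketch of Grafana's datasource provisioning format, using the Prometheus address from this setup; the file name is illustrative. Mounted into the container at /etc/grafana/provisioning/datasources/ (for example with an extra -v flag when recreating the container), it makes the data source appear automatically.
[root@master ~]# cat > /opt/grafana-datasource.yml << EOF
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.143.140:9090
    isDefault: true
EOF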
