Per-component exporters + Prometheus + Grafana

1. Download Prometheus and the exporters

Official download page: https://prometheus.io/download/

es-exporter:https://github.com/prometheus-community/elasticsearch_exporter/releases

kafka-exporter:https://github.com/danielqsj/kafka_exporter/releases

mongodb-exporter:

--- for MongoDB 3.x: https://github.com/dcu/mongodb_exporter/releases/

--- for MongoDB 4.x: https://github.com/percona/mongodb_exporter/releases/download/v0.35.0/mongodb_exporter-0.35.0.linux-amd64.tar.gz

redis-exporter:https://github.com/oliver006/redis_exporter/releases/

2. Configure the Prometheus job, then start Prometheus and es-exporter

# After extracting, cd into the Prometheus directory and add a job for es-exporter to prometheus.yml
tar -xf prometheus-2.40.2.linux-amd64.tar.gz
cd prometheus-2.40.2.linux-amd64
cat prometheus.yml
scrape_configs:
  - job_name: "elasticsearch-166"
    static_configs:
      # Scrape the exporter's --web.listen-address port (39206), not the Elasticsearch port itself
      - targets: ["localhost:39206"]

nohup ./prometheus --config.file=prometheus.yml > ./prometheus.log 2>&1 &
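Once Prometheus is running, verify that it started and that the scrape target is healthy; this assumes the default listen port 9090.

# Returns HTTP 200 once Prometheus has finished starting up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9090/-/ready
# Each target reports "health":"up" after its first successful scrape
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'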
​
# Extract and start es-exporter
tar -xf elasticsearch_exporter-1.5.0.linux-amd64.tar.gz

nohup ./elasticsearch_exporter --es.all --es.indices --es.cluster_settings --es.indices_settings --es.shards --es.snapshots --es.timeout=10s --web.listen-address=":39206" --web.telemetry-path="/metrics" --es.uri http://172.26.2.73:39202 > ./es-73-exporter.log 2>&1 &
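To confirm the exporter is actually talking to Elasticsearch, scrape its metrics endpoint; :39206 matches the --web.listen-address above.

# elasticsearch_cluster_health_status reports the cluster color (green/yellow/red)
curl -s http://localhost:39206/metrics | grep elasticsearch_cluster_health_status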
​
## Parameter notes:
--es.uri                Default http://localhost:9200. Address (host and port) of the Elasticsearch node to connect to; this can be a local node (e.g. localhost:9200) or a remote Elasticsearch server.
--es.all                Default false. If true, query stats for all nodes in the cluster, not just the node we connect to.
--es.cluster_settings   Default false. If true, query cluster settings as part of the stats.
--es.indices            Default false. If true, query stats for all indices in the cluster.
--es.indices_settings   Default false. If true, query settings stats for all indices in the cluster.
--es.shards             Default false. If true, query stats for all indices in the cluster, including shard-level stats (implies es.indices=true).
--es.snapshots          Default false. If true, query stats for cluster snapshots.
​
​
# Start kafka_exporter (basic form)
nohup ./kafka_exporter --kafka.server=172.26.2.119:33092 > ./kafka-119-exporter.log 2>&1 &
# Or with an explicit listen address and a ZooKeeper server for consumer-group lag
nohup ./kafka_exporter --web.listen-address=":39302" --kafka.server=172.26.2.119:33092 --zookeeper.server=172.26.2.119:33181 > ./kafka-119-exporter.log 2>&1 &
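As with es-exporter, add a scrape job for kafka_exporter and sanity-check its endpoint. A minimal sketch: the job name below is illustrative, while :39302 matches the --web.listen-address used above.

# Add to scrape_configs in prometheus.yml
  - job_name: "kafka-119"
    static_configs:
      - targets: ["localhost:39302"]

# kafka_brokers should report the number of brokers in the cluster
curl -s http://localhost:39302/metrics | grep kafka_brokers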
​
# Full flag reference:
./kafka_exporter -h
usage: kafka_exporter [<flags>]
Flags:
  -h, --help                     Show context-sensitive help (also try --help-long and --help-man).
      --web.listen-address=":9308"  
                                 Address to listen on for web interface and telemetry.
      --web.telemetry-path="/metrics"  
                                 Path under which to expose metrics.
      --topic.filter=".*"        Regex that determines which topics to collect.
      --group.filter=".*"        Regex that determines which consumer groups to collect.
      --log.enable-sarama        Turn on Sarama logging, default is false.
      --kafka.server=kafka:9092 ...  
                                 Address (host:port) of Kafka server.
      --sasl.enabled             Connect using SASL/PLAIN, default is false.
      --sasl.handshake           Only set this to false if using a non-Kafka SASL proxy, default is true.
      --sasl.username=""         SASL user name.
      --sasl.password=""         SASL user password.
      --sasl.mechanism=""        The SASL SCRAM SHA algorithm sha256 or sha512 or gssapi as mechanism
      --sasl.service-name=""     Service name when using kerberos Auth
      --sasl.kerberos-config-path=""  
                                 Kerberos config path
      --sasl.realm=""            Kerberos realm
      --sasl.kerberos-auth-type=""  
                                 Kerberos auth type. Either 'keytabAuth' or 'userAuth'
      --sasl.keytab-path=""      Kerberos keytab file path
      --sasl.disable-PA-FX-FAST  Configure the Kerberos client to not use PA_FX_FAST, default is false.
      --tls.enabled              Connect to Kafka using TLS, default is false.
      --tls.server-name=""       Used to verify the hostname on the returned certificates unless tls.insecure-skip-tls-verify is given. The kafka server's name should be given.
      --tls.ca-file=""           The optional certificate authority file for Kafka TLS client authentication.
      --tls.cert-file=""         The optional certificate file for Kafka client authentication.
      --tls.key-file=""          The optional key file for Kafka client authentication.
      --server.tls.enabled       Enable TLS for web server, default is false.
      --server.tls.mutual-auth-enabled  
                                 Enable TLS client mutual authentication, default is false.
      --server.tls.ca-file=""    The certificate authority file for the web server.
      --server.tls.cert-file=""  The certificate file for the web server.
      --server.tls.key-file=""   The key file for the web server.
      --tls.insecure-skip-tls-verify  
                                 If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Default is false
      --kafka.version="2.0.0"    Kafka broker version
      --use.consumelag.zookeeper  
                                 if you need to use a group from zookeeper, default is false
      --zookeeper.server=localhost:2181 ...  
                                 Address (hosts) of zookeeper server.
      --kafka.labels=""          Kafka cluster name
      --refresh.metadata="30s"   Metadata refresh interval
      --offset.show-all          Whether show the offset/lag for all consumer group, otherwise, only show connected consumer groups, default is true
      --concurrent.enable        If true, all scrapes will trigger kafka operations otherwise, they will share results. WARN: This should be disabled on large clusters. Default is false
      --topic.workers=100        Number of topic workers
      --kafka.allow-auto-topic-creation  
                                 If true, the broker may auto-create topics that we requested which do not already exist, default is false.
      --verbosity=0              Verbosity log level
      --log.level=info           Only log messages with the given severity or above. One of: [debug, info, warn, error]
      --log.format=logfmt        Output format of log messages. One of: [logfmt, json]
      --version                  Show application version.
      
# Start mongodb_exporter
# For MongoDB 3.x (dcu/mongodb_exporter):
nohup ./mongodb_exporter -mongodb.uri="mongodb://172.26.2.62:27000,172.26.2.63:27000,172.26.2.64:27000" -web.listen-address=":27000" > ./mongodb-27000-exporter.log 2>&1 &

# For MongoDB 4.x (percona/mongodb_exporter):
./mongodb_exporter --mongodb.uri=mongodb://172.26.2.28:28000,172.26.2.93:28000,172.26.2.100:28000,172.26.2.42:28000,172.26.2.43:28000 --web.listen-address=:28000 --no-mongodb.direct-connect --collect-all
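To confirm the exporter can reach the cluster, scrape its endpoint; :28000 matches the --web.listen-address in the 4.x command above.

# mongodb_up is 1 when the exporter has a working connection to MongoDB
curl -s http://localhost:28000/metrics | grep mongodb_up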
​
​
# Start redis_exporter
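A minimal start command, assuming oliver006/redis_exporter with a local Redis on its default port; adjust --redis.addr to your environment (the exporter serves /metrics on :9121 by default).

nohup ./redis_exporter --redis.addr=redis://localhost:6379 --web.listen-address=":9121" > ./redis-exporter.log 2>&1 &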

3. Download the Elasticsearch dashboard for Grafana

https://grafana.com/grafana/dashboards/6483-elasticsearch/

In the Grafana web UI, import the downloaded JSON and the dashboard is ready.
