Prometheus + InfluxDB + Grafana + MySQL

Preface

This article describes how to install InfluxDB as persistent (remote) storage for Prometheus and MySQL as persistent storage for Grafana.

I. Install the Go environment

If you already have a Go environment you can build the remote_storage_adapter plugin yourself; the only purpose of installing Go is to obtain this plugin. If you do not have a Go environment, you can download the pre-built binary from the link I shared.

Link: https://pan.baidu.com/s/1DJpoYDOIfCeAFC6UGY22Xg  extraction code: uj42

1. Download

wget https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz

2. Install

tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz

Add the environment variables:
vim /etc/profile

export GOROOT=/usr/local/go
export GOBIN=$GOROOT/bin
export GOPKG=$GOROOT/pkg/tool/linux_amd64
export GOARCH=amd64
export GOOS=linux
export GOPATH=/go
export PATH=$PATH:$GOBIN:$GOPKG:$GOPATH/bin
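Reload the profile and verify the toolchain before moving on (a quick sanity check, assuming the paths above):

source /etc/profile
go version      # should report go1.8.3
go env GOPATH   # should print /go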

II. Install InfluxDB

1. Download and install

wget https://dl.influxdata.com/influxdb/releases/influxdb-1.5.2.x86_64.rpm
sudo yum localinstall influxdb-1.5.2.x86_64.rpm

2. Start InfluxDB

systemctl start influxdb
systemctl enable influxdb

To run it in the foreground instead of as a service:
influxd

If you need to point it at a specific configuration file, use the --config option; run influxd --help for details.

3. Review the installed files and configuration

After installation, the following binaries are in /usr/bin:

influxd          the InfluxDB server
influx           the InfluxDB command-line client
influx_inspect   inspection tool
influx_stress    stress-testing tool
influx_tsm       database conversion tool (converts databases from the b1 or bz1 format to tsm1)

The following directories are created under /var/lib/influxdb/:

data            the stored data itself; files end in .tsm
meta            database metadata
wal             write-ahead log files

Configuration file path: /etc/influxdb/influxdb.conf

4. Create a database for Prometheus via the HTTP API

To open the database CLI:

influx

To create a database named prometheus through the HTTP API:

curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE prometheus"
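To confirm the database was created, you can list databases through the same HTTP API (a quick check; the JSON response should contain "prometheus"):

curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"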

III. Install Prometheus

1. Download

https://prometheus.io/download/

2. Extract and install

tar xf prometheus-2.8.0.linux-amd64.tar.gz

mv prometheus-2.8.0.linux-amd64 /usr/local/prometheus

cd /usr/local/prometheus

./prometheus --version

IV. Prepare remote_storage_adapter

Obtain a remote_storage_adapter executable (it lives in the Prometheus repository on GitHub) and start it. Run ./remote_storage_adapter -h for help (changing the listening port, InfluxDB settings, and so on). Then start a remote_storage_adapter to bridge InfluxDB and Prometheus:
./remote_storage_adapter --influxdb-url=http://localhost:8086/ --influxdb.database=prometheus --influxdb.retention-policy=autogen
The adapter listens on port 9201 by default.

1. Build the plugin

/usr/local/go/bin/go get github.com/prometheus/documentation/examples/remote_storage/remote_storage_adapter/
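With the GOPATH set earlier, go get compiles the example and places the binary under $GOPATH/bin. A minimal sketch, assuming GOPATH=/go and that you want to run it from /root as in the service unit later on:

cp /go/bin/remote_storage_adapter /root/
/root/remote_storage_adapter -h   # list all supported flags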

2. Run the plugin

./remote_storage_adapter --influxdb-url=http://127.0.0.1:8086/ --influxdb.database="prometheus" --influxdb.retention-policy=autogen

3. Edit the Prometheus configuration

vim prometheus.yml
Add:

# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
  - url: "http://localhost:9201/write"

# Remote read configuration (for InfluxDB only at the moment).
remote_read:
  - url: "http://localhost:9201/read"
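Before restarting, it is worth validating the edited file with promtool, which ships in the same tarball (a quick check; it reports SUCCESS when the YAML is well formed):

./promtool check config prometheus.yml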

4. Start Prometheus

./prometheus

5. Check whether data is being reported

At this point only the Prometheus server itself is being monitored.
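You can confirm that scraping works through the Prometheus HTTP API (a minimal check; the up metric should be 1 for the default prometheus job):

curl 'http://localhost:9090/api/v1/query?query=up'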

6. Check the contents of the InfluxDB database

> show databases;
name: databases
name
----
_internal
mydb
prometheus
> use prometheus
Using database prometheus
> SHOW MEASUREMENTS
name: measurements
name
----
go_gc_duration_seconds
go_gc_duration_seconds_count
go_gc_duration_seconds_sum
go_goroutines
go_info
go_memstats_alloc_bytes
...
prometheus_tsdb_wal_truncations_total
prometheus_wal_watcher_current_segment
promhttp_metric_handler_requests_in_flight
promhttp_metric_handler_requests_total
scrape_duration_seconds
scrape_samples_post_metric_relabeling
scrape_samples_scraped
up

(The full list contains one measurement per metric scraped from the Prometheus server itself: go_*, net_conntrack_*, process_*, prometheus_*, promhttp_*, scrape_* and up.)
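A quick way to confirm that samples are really flowing into InfluxDB is to query one of these measurements from the same influx session (a minimal example, using the up measurement from the list above):

> SELECT * FROM "up" LIMIT 5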

7. Add a node (node_exporter)

Download:
wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz

Install the agent:
tar xf node_exporter-0.17.0.linux-amd64.tar.gz
cd node_exporter-0.17.0.linux-amd64
./node_exporter

Register the node with Prometheus:
vim prometheus.yml

Add under scrape_configs:
  - job_name: 'linux-node'
    static_configs:
      - targets: ['10.10.25.149:9100']
        labels:
          instance: node1

Restart Prometheus.
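Before restarting Prometheus, you can check that node_exporter is serving metrics on its default port 9100 (a quick check against the node used above):

curl -s http://10.10.25.149:9100/metrics | head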

8. Run Prometheus as a systemd service

cat > /lib/systemd/system/prometheus.service <<EOF
[Unit]
Description=Prometheus
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/usr/local/prometheus/
ExecStart=/usr/local/prometheus/prometheus --config.file=/usr/local/prometheus/prometheus.yml
[Install]
WantedBy=multi-user.target
EOF


chmod 644 /lib/systemd/system/prometheus.service
systemctl daemon-reload
systemctl enable prometheus
systemctl start prometheus
systemctl status prometheus
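If the service fails to start, its output goes to the journal; a quick way to follow it (standard systemd tooling, nothing specific to Prometheus):

journalctl -u prometheus -f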

9. Run the agent (node_exporter) as a systemd service

cat > /lib/systemd/system/node_exporter.service <<EOF
[Unit]
Description=node_exporter
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/root/node_exporter-0.17.0.linux-amd64
ExecStart=/root/node_exporter-0.17.0.linux-amd64/node_exporter
[Install]
WantedBy=multi-user.target
EOF


chmod 644 /lib/systemd/system/node_exporter.service
systemctl daemon-reload
systemctl enable node_exporter
systemctl start node_exporter
systemctl status node_exporter

10. Register remote_storage_adapter as a systemd service

cat > /lib/systemd/system/remote_storage_adapter.service <<EOF
[Unit]
Description=remote_storage_adapter
After=network.target influxdb.service
[Service]
Restart=on-failure
WorkingDirectory=/root/
ExecStart=/root/remote_storage_adapter --influxdb-url=http://127.0.0.1:8086/ --influxdb.database="prometheus" --influxdb.retention-policy=autogen
[Install]
WantedBy=multi-user.target
EOF


chmod 644 /lib/systemd/system/remote_storage_adapter.service
systemctl daemon-reload
systemctl enable remote_storage_adapter
systemctl start remote_storage_adapter
systemctl status remote_storage_adapter
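Once it is up, the adapter should be listening on its default port 9201, which you can confirm with ss (a quick check):

ss -lntp | grep 9201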

V. Install Grafana

1. Download

wget https://dl.grafana.com/oss/release/grafana-6.0.2-1.x86_64.rpm

2. Install

yum install grafana-6.0.2-1.x86_64.rpm
systemctl start grafana-server
systemctl enable grafana-server
grafana-server -v

grafana-server listens on port 3000.

3. Access grafana-server

http://ServerIP:3000
The default username and password are admin / admin.

VI. Install MySQL

1. Add the repository

rpm -Uvh http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm

yum repolist enabled | grep "mysql.*-community.*"

2. Install MySQL 5.6

yum -y install mysql-community-server

3. Start MySQL and run the basic security setup

systemctl enable mysqld
systemctl start mysqld
systemctl status mysqld

mysql_secure_installation

Set the root password and answer Y to the remaining prompts.


4. Create the grafana database

create database grafana;
create user grafana@'%' IDENTIFIED by 'grafana';
grant all on grafana.* to grafana@'%';
flush privileges;
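To make sure the new account works before pointing Grafana at it, log in as that user and list the databases (a quick check; the grafana database should appear):

mysql -ugrafana -pgrafana -h127.0.0.1 -e "show databases;"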

VII. Switch Grafana's default database and configure Grafana

1. Edit the configuration file to connect to MySQL

vim /etc/grafana/grafana.ini

[database]
type = mysql
host = 127.0.0.1:3306
name = grafana
user = grafana
password = grafana
url = mysql://grafana:grafana@localhost:3306/grafana

[session]
provider = mysql
provider_config = `grafana:grafana@tcp(127.0.0.1:3306)/grafana`

2. Restart Grafana

systemctl restart grafana-server
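If Grafana fails to come back up, the most likely cause is the database connection; the log (for an RPM install this is normally /var/log/grafana/grafana.log) shows whether the MySQL migrations ran:

tail -f /var/log/grafana/grafana.log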

3. Access Grafana

http://serverip:3000

4. Check the database

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| grafana            |
| mysql              |
| performance_schema |
+--------------------+

mysql> use grafana
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------------+
| Tables_in_grafana         |
+---------------------------+
| alert                     |
| alert_notification        |
| alert_notification_state  |
| annotation                |
| annotation_tag            |
| api_key                   |
| dashboard                 |
| dashboard_acl             |
| dashboard_provisioning    |
| dashboard_snapshot        |
| dashboard_tag             |
| dashboard_version         |
| data_source               |
| login_attempt             |
| migration_log             |
| org                       |
| org_user                  |
| playlist                  |
| playlist_item             |
| plugin_setting            |
| preferences               |
| quota                     |
| server_lock               |
| session                   |
| star                      |
| tag                       |
| team                      |
| team_member               |
| temp_user                 |
| test_data                 |
| user                      |
| user_auth                 |
| user_auth_token           |
+---------------------------+
33 rows in set (0.00 sec)

5. Configure Grafana and add a data source

Since InfluxDB is used as Prometheus's persistent storage, add an InfluxDB data source. InfluxDB was set up without a password, so the password field is left empty here.
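The values to fill in follow directly from the earlier steps (typical settings for this setup; adjust the URL if InfluxDB runs on another host):

Type:     InfluxDB
URL:      http://localhost:8086
Database: prometheus
User / Password: left blank (no authentication was configured on InfluxDB)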

Reposted from: https://www.cnblogs.com/cheyunhua/p/11376756.html
