【Linux-ELK】Installation and Deployment Guide (with Data Backup and Restore)

1. Introduction

See my《ELK》column for background.

2. Version Selection

  • Check the Elasticsearch official site:
    • 7.17.1, stable release 【chosen here】
    • 8.1.1, not yet considered stable

The stable release is the right choice.

3. Installation (Linux)

3.1. Download

Download the release packages from the official site; see the 【Appendix】 of this article.

3.2. Extract

tar -xzf elasticsearch-7.17.1-linux-x86_64.tar.gz
tar -xzf kibana-7.17.1-linux-x86_64.tar.gz
tar -xzf logstash-7.17.1-linux-x86_64.tar.gz

3.3. Create a User and Group

Elasticsearch and Kibana refuse to run as root, so create a dedicated non-root user first.

# as root
groupadd es
useradd -g es es
chown -R es:es /opt/elk/*
# set the es user's password (123456 throughout this guide)
passwd es

3.4. Set Environment Variables

vim /etc/profile

# point Elasticsearch and Logstash at their bundled JDKs
export ES_JAVA_HOME=/opt/elk/elasticsearch-7.17.1/jdk
export LS_JAVA_HOME=/opt/elk/logstash-7.17.1/jdk
export PATH=$ES_JAVA_HOME/bin:$LS_JAVA_HOME/bin:$PATH

source /etc/profile
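As a quick check, the bundled JDK should now resolve from the PATH (exact version output will vary by build):

java -version
echo $ES_JAVA_HOME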

3.5. Raise the Open-File and mmap Limits

vim /etc/security/limits.conf

* hard nofile 65536
* soft nofile 65536

vim /etc/sysctl.conf
# add the line vm.max_map_count=655360
# then reload the kernel parameters
sysctl -p
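To confirm the new limits took effect (log in again as the es user first, since limits.conf applies at login):

ulimit -n
sysctl vm.max_map_count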

3.6. Open Firewall Ports

# 9200 = Elasticsearch HTTP, 5601 = Kibana, 5044 = Beats input on Logstash
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=5601/tcp --permanent
firewall-cmd --zone=public --add-port=5044/tcp --permanent
firewall-cmd --reload
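The applied rules can be verified afterwards:

firewall-cmd --zone=public --list-ports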

3.7. Create Directories and Grant Ownership

mkdir -pv /opt/elk/elasticsearch-7.17.1/{data,logs}
mkdir -pv /opt/elk/kibana-7.17.1/{data,logs}
mkdir -pv /opt/elk/logstash-7.17.1/{data,logs}

chown -R es:es /opt/elk/*

3.8. Elasticsearch

1) Generate certificates

cd /opt/elk/elasticsearch-7.17.1/bin
sh elasticsearch-certutil ca
sh elasticsearch-certutil cert --ca elastic-stack-ca.p12
# enter the password when prompted (123456 in this guide)
# move the generated files into /opt/elk/elasticsearch-7.17.1/config:
#   elastic-certificates.p12
#   elastic-stack-ca.p12
sh elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
sh elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
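The stored entries can be listed to confirm both certificate passwords landed in the keystore:

sh elasticsearch-keystore list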
2) Check the es user entry

vim /etc/passwd
# expect a line like:
es:x:1001:1001::/home/es:/bin/bash
3) Configure

vim /opt/elk/elasticsearch-7.17.1/config/elasticsearch.yml

# cluster name
cluster.name: es1-application
# node name
node.name: node-1
# data directory
path.data: /opt/elk/elasticsearch-7.17.1/data
# log directory
path.logs: /opt/elk/elasticsearch-7.17.1/logs
# accept connections from any address
network.host: 0.0.0.0
# HTTP listen port
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
# X-Pack security
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

# https://blog.csdn.net/weixin_43820556/article/details/120165948
# workaround for "[ERROR] exception during geoip databases update":
# ingest.geoip.downloader.enabled: false
4) Start

# start in the background (run as the es user, not root)
sh /opt/elk/elasticsearch-7.17.1/bin/elasticsearch -d
# confirm it is running
ps -ef | grep elastic
# initialize the built-in users' passwords
sh /opt/elk/elasticsearch-7.17.1/bin/elasticsearch-setup-passwords interactive
# this guide sets every password to 123456
5) Restart Elasticsearch

ps -ef | grep elastic
# xxx is the PID in the second column; a plain kill (SIGTERM) lets Elasticsearch shut down cleanly
kill xxx
sh /opt/elk/elasticsearch-7.17.1/bin/elasticsearch -d
6) Check that Elasticsearch is up

curl http://IP:9200 -u elastic:123456
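Beyond the banner response, the cluster health endpoint gives a quick readiness signal; a minimal check with the elastic password set above:

curl -u elastic:123456 "http://IP:9200/_cluster/health?pretty"
# "status" should be green (or yellow on a single node that holds replica shards)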

3.9. Kibana

1) Configure

vim /opt/elk/kibana-7.17.1/config/kibana.yml

server.port: 5601
server.host: "192.xxx.xxx.134"
elasticsearch.hosts: ["http://192.xxx.xxx.134:9200"]
kibana.index: ".kibana"
kibana.defaultAppId: "home"
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
# create the run directory first: mkdir -p /opt/elk/kibana-7.17.1/run/kibana
pid.file: /opt/elk/kibana-7.17.1/run/kibana/kibana.pid
logging.dest: /opt/elk/kibana-7.17.1/logs/stdout.log
i18n.locale: "zh-CN"
server.publicBaseUrl: "http://192.xxx.xxx.134:5601"
2) Start

# run as the es user
nohup /opt/elk/kibana-7.17.1/bin/kibana &
ps -ef | grep kibana

3) Check the service

Open http://192.xxx.xxx.134:5601 in a browser and log in as elastic / 123456.
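Kibana also exposes a status API, handy when no browser is at hand (same credentials as above):

curl -u elastic:123456 http://192.xxx.xxx.134:5601/api/status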
4) Known warning

If Kibana logs "server.publicBaseUrl is missing and should be configured when running in a production environment. Some features may behave incorrectly. See the documentation.", setting server.publicBaseUrl as in the configuration above resolves it.

3.10. Logstash

1) Configure

vim /opt/elk/logstash-7.17.1/config/logstash.yml

node.name: logstash_node1
path.data: /opt/elk/logstash-7.17.1/data
pipeline.id: logstash_node1
pipeline.workers: 2
pipeline.batch.size: 125
pipeline.batch.delay: 50
pipeline.ordered: auto
# ------------ API Settings -------------
api.enabled: true
api.http.host: 192.xxx.xxx.135
api.http.port: 9600-9700
api.environment: "production"
# optional: TLS and basic auth for the API; enable only once a real keystore exists,
# or Logstash fails to start with the placeholder path
# api.ssl.enabled: true
# api.ssl.keystore.path: /path/to/keystore.jks
# api.ssl.keystore.password: "123456"
# api.auth.type: basic
# api.auth.basic.username: "logstash-user"
# api.auth.basic.password: "123456"

path.logs: /opt/elk/logstash-7.17.1/logs

# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: "123456"
#xpack.monitoring.elasticsearch.proxy: ["http://proxy:port"]
# the cluster above serves plain HTTP (TLS is transport-layer only), so use http://
xpack.monitoring.elasticsearch.hosts: ["http://192.xxx.xxx.134:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.monitoring.elasticsearch.cloud_id: monitoring_cluster_id:xxxxxxxxxx
#xpack.monitoring.elasticsearch.cloud_auth: logstash_system:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.monitoring.elasticsearch.api_key: "id:api_key"
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.proxy: ["http://proxy:port"]
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
# an alternative to hosts + username/password settings is to use cloud_id/cloud_auth
#xpack.management.elasticsearch.cloud_id: management_cluster_id:xxxxxxxxxx
#xpack.management.elasticsearch.cloud_auth: logstash_admin_user:password
# another authentication alternative is to use an Elasticsearch API key
#xpack.management.elasticsearch.api_key: "id:api_key"
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s

vim /opt/elk/logstash-7.17.1/config/logstash-sample.conf

# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://192.xxx.xxx.135:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
  }
}
2) Test

# smoke test: type a line and Logstash should echo it back as an event
./bin/logstash -e 'input { stdin {} } output { stdout {} }'
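The pipeline file can also be validated before the service is started; --config.test_and_exit parses the config and reports errors without running it:

/opt/elk/logstash-7.17.1/bin/logstash -f /opt/elk/logstash-7.17.1/config/logstash-sample.conf --config.test_and_exit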
3) Start the service

nohup /opt/elk/logstash-7.17.1/bin/logstash -f /opt/elk/logstash-7.17.1/config/logstash-sample.conf &

# or, keeping the output in a log file (-f is still required):
nohup /opt/elk/logstash-7.17.1/bin/logstash -f /opt/elk/logstash-7.17.1/config/logstash-sample.conf >> /opt/elk/logstash-7.17.1/logs/logstash.log &
4) Check that Logstash is running

ps -ef | grep logstash
# optional: symlink the launcher helper script into /bin so wrapper scripts can source it
ln -s /opt/elk/logstash-7.17.1/bin/logstash.lib.sh /bin/
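With the API settings above, the monitoring endpoint should answer on the first free port in the 9600-9700 range:

curl http://192.xxx.xxx.135:9600/?pretty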
B1. Logstash Consuming Messages from RabbitMQ

input {
  # consume from RabbitMQ
  rabbitmq {
    host => "RabbitMQ Server IP"
    port => 5672
    # user => "RabbitMQ user name"
    # password => "RabbitMQ password"
    # optional: bind to a specific exchange
    # exchange => "pro.community.exchange"
    # exchange type
    # exchange_type => "direct"
    # routing key
    # key => "route.logstash.es.create"
    # queue to consume from; pick any name, the producer must use the same one
    queue => "xxx_logstash_sender"
    # whether the queue survives broker restarts
    durable => true
    codec => json
    # add custom fields here if multiple sources need to be told apart in filter
  }
}

filter {
  # the mutate filter rewrites fields on the event
  mutate {
    # drop fields that are not needed downstream
    remove_field => ["tags", "type", "@version", "@timestamp"]
    # update => { "topicName" => "%{topicName}" }
  }
}

output {
  # Elasticsearch output; adjust to your environment
  elasticsearch {
    # action to perform on the index
    # action => "create" # delete / update
    hosts => "http://localhost:9200"
    # user => "elastic"
    # password => "your password"
    # index name taken from a field on the event
    index => "%{writeTargetEsIndex}"
    # document id taken from the event, so later updates/deletes can target it
    document_id => "%{index}"
    # optional
    # document_type => "%{documentType}"
  }

  stdout {}
}
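To exercise this pipeline end to end, a test message can be pushed onto the queue; a rough sketch, assuming the RabbitMQ management plugin (and its rabbitmqadmin CLI) is available, with the queue name and event fields mirroring the sample config above:

# publish one JSON event to the queue via the default exchange
rabbitmqadmin publish exchange=amq.default routing_key=xxx_logstash_sender payload='{"writeTargetEsIndex":"demo-index","index":"1","msg":"hello"}'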

3.11. Filebeat

1) Configure

vim /opt/elk/filebeat-7.17.1/filebeat.yml
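The original document does not show the file's contents; as a minimal sketch, the snippet below tails a log path and ships to the Logstash beats input opened in 3.10 (the paths and address are placeholder assumptions to adapt):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["192.xxx.xxx.135:5044"]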

2) Setup

ll /etc/sudoers
# /etc/sudoers is read-only by default; make it writable, edit, then restore the permission
chmod u+w /etc/sudoers
vim /etc/sudoers

# add:
admin    ALL=(ALL)       ALL
es    ALL=(ALL)       ALL

chmod u-w /etc/sudoers

sudo filebeat modules enable kibana
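Filebeat can verify its own configuration and its connection to the configured output before anything is shipped:

sudo filebeat test config
sudo filebeat test output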

4. Clients

4.1. Python

# install the 7.x client to match the 7.17.1 server
python -m pip install elasticsearch==7.17.1

# remove an 8.x client first if one is installed (the major versions conflict)
python -m pip uninstall elasticsearch

1) Test

# install a snowflake-ID library (handy for generating unique document ids)
python -m pip install pysnowflake
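The document does not include the test script itself, so here is a minimal sketch of such a test with the 7.x client: connect with basic auth, index one document, and read it back (the address, password, and index name are placeholder assumptions carried over from earlier sections):

from elasticsearch import Elasticsearch

# placeholders: adjust the address and password to your deployment
es = Elasticsearch(["http://192.xxx.xxx.134:9200"], http_auth=("elastic", "123456"))

# index a document (an id from pysnowflake could replace the literal "1")
es.index(index="demo-test", id="1", body={"msg": "hello elk"}, refresh=True)

# read it back
print(es.get(index="demo-test", id="1")["_source"])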

5. Data Backup and Restore

First, create a backup directory on the Linux server (this guide places it under the Logstash directory, but any path writable by the es user works):

mkdir -pv /opt/elk/logstash-7.17.1/snapshot
# the directory must be writable by the es user running Elasticsearch
chown -R es:es /opt/elk/logstash-7.17.1/snapshot

5.1. Kibana (Dev Tools)

  1. In elasticsearch.yml, add the backup path path.repo at the end of the file:

# a list for multiple paths; the brackets can be dropped for a single path
path.repo: ["/opt/elk/logstash-7.17.1/snapshot"]
  2. Restart Elasticsearch:

ps -ef | grep elastic
# xxx is the PID in the second column
kill xxx
sh /opt/elk/elasticsearch-7.17.1/bin/elasticsearch -d
  3. List snapshot repositories:

GET _snapshot/_all
  4. Create a snapshot repository:

PUT _snapshot/backup
{
  "type": "fs",
  "settings": {
    "location": "data_bak",
    "compress": true,
    "max_snapshot_bytes_per_sec": "50mb",
    "max_restore_bytes_per_sec": "50mb"
  }
}
# files land under the path.repo directory: /opt/elk/logstash-7.17.1/snapshot/data_bak
# compress: whether to compress metadata files; defaults to true
# max_snapshot_bytes_per_sec: per-node snapshot throttle; defaults to 40mb/s
# max_restore_bytes_per_sec: per-node restore throttle; defaults to 40mb/s
  5. Take a snapshot:

# back up selected indices; omit "indices" to back up everything
PUT _snapshot/backup/snapshot_20221027_002?wait_for_completion=true
{
  "indices": "log*,recore_base*,prod_resp*",
  "ignore_unavailable": true,
  "include_global_state": false
}
# indices: index patterns to include
# ignore_unavailable: skip missing indices instead of failing
# include_global_state: whether to include cluster-wide state (templates, etc.)
  6. Inspect the snapshot:

GET _snapshot/backup/snapshot_20221027_002
  7. Restore from the snapshot:

# wiping the data first also wipes the security indices if they are included,
# which means users and passwords must be re-initialized afterwards - use with care
# DELETE _all

# wait_for_completion=true is fine for small data sets; use false for large ones
POST _snapshot/backup/snapshot_20221027_002/_restore?wait_for_completion=false
{
  "indices": "log*,recore_base*,prod_resp*",
  "ignore_unavailable": true,
  "include_global_state": false
}

Note (migration): to restore onto another machine, copy the /opt/elk/logstash-7.17.1/snapshot/data_bak directory into /opt/elk/logstash-7.17.1/snapshot on the target server.

  8. Delete the snapshot:

DELETE _snapshot/backup/snapshot_20221027_002

5.2. Linux Command Line (curl)

Note: these requests go to Elasticsearch on port 9200 (not Kibana's 5601), and credentials are required because security is enabled.

  1. In elasticsearch.yml, add the backup path path.repo at the end of the file:

# a list for multiple paths; the brackets can be dropped for a single path
path.repo: ["/opt/elk/logstash-7.17.1/snapshot"]
  2. Restart Elasticsearch:

ps -ef | grep elastic
# xxx is the PID in the second column
kill xxx
sh /opt/elk/elasticsearch-7.17.1/bin/elasticsearch -d
  3. List snapshot repositories:

curl -u elastic:123456 -XGET http://192.xxx.xxx.134:9200/_snapshot/_all
  4. Create a snapshot repository:

curl -u elastic:123456 -H "Content-Type: application/json" -XPUT http://192.xxx.xxx.134:9200/_snapshot/backup -d '{"type": "fs","settings": {"location": "data_bak","compress": true,"max_snapshot_bytes_per_sec": "50mb","max_restore_bytes_per_sec": "50mb"}}'

# files land under the path.repo directory: /opt/elk/logstash-7.17.1/snapshot/data_bak
# location may also be an absolute path
# compress: whether to compress metadata files; defaults to true
# max_snapshot_bytes_per_sec: per-node snapshot throttle; defaults to 40mb/s
# max_restore_bytes_per_sec: per-node restore throttle; defaults to 40mb/s

A response like the following means the repository was created:

{
    "acknowledged": true
}
  5. Take a snapshot:

curl -u elastic:123456 -H "Content-Type: application/json" -XPUT http://192.xxx.xxx.134:9200/_snapshot/backup/snapshot_20221027_002?wait_for_completion=true -d '{"indices": "log*,recore_base*,mirror*,prod_resp*","ignore_unavailable": true,"include_global_state": false}'

# indices: index patterns to include
# ignore_unavailable: skip missing indices instead of failing
# include_global_state: whether to include cluster-wide state (templates, etc.)
  6. Inspect the snapshot:

curl -u elastic:123456 -XGET http://192.xxx.xxx.134:9200/_snapshot/backup/snapshot_20221027_002
  7. Restore from the snapshot:

# pack up the repository for safekeeping or transfer (-z so the archive really is gzipped)
tar -czvf snapshot_20221027_002.tar.gz /opt/elk/logstash-7.17.1/snapshot/data_bak
# wipe the existing data (see the caution in 5.1 about security indices)
curl -u elastic:123456 -XDELETE http://192.xxx.xxx.134:9200/_all
# restore
curl -u elastic:123456 -H "Content-Type: application/json" -XPOST http://192.xxx.xxx.134:9200/_snapshot/backup/snapshot_20221027_002/_restore?wait_for_completion=true -d '{"indices": "log*,recore_base*,mirror*,prod_resp*","ignore_unavailable": true,"include_global_state": false}'

Note (migration): to restore onto another machine, copy the /opt/elk/logstash-7.17.1/snapshot/data_bak directory into /opt/elk/logstash-7.17.1/snapshot on the target server.

  8. Delete the snapshot:

curl -u elastic:123456 -XDELETE http://192.xxx.xxx.134:9200/_snapshot/backup/snapshot_20221027_002
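Snapshots like these are easy to automate; a minimal cron sketch with a hypothetical schedule, reusing the repository, address, and credentials assumed above (% must be escaped in crontab entries):

# /etc/cron.d/es-snapshot: dated snapshot every night at 01:30, run as es
30 1 * * * es curl -s -u elastic:123456 -H "Content-Type: application/json" -XPUT "http://192.xxx.xxx.134:9200/_snapshot/backup/snapshot_$(date +\%Y\%m\%d)" -d '{"indices":"log*","ignore_unavailable":true,"include_global_state":false}'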

Appendix

A1. References

  • Elasticsearch past releases (official download archive): https://www.elastic.co/downloads/past-releases
  • Elasticsearch 7.17 reference: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/index.html