Geek Time 05 — week_days (ELK notes)

  • Understand the function of each ELK component and the Elasticsearch node role types
  • Understand the concepts of index, doc, shard, and replica
  • Plan ELK deployments for different environments
  • Deploy an Elasticsearch cluster from deb packages or binaries
  • Basic usage of the Elasticsearch API
  • Install the head plugin to manage data in ES
  • Install Logstash
  • Collect different types of system logs and write them to different ES indices
  • Install Kibana and browse the cluster's data

1、Functions of each ELK component and the Elasticsearch node role types

  • ELK components
    elasticsearch: stores and retrieves data. Documents are stored in JSON format and read and written through the API; port 9200 serves clients and port 9300 carries inter-node cluster traffic. Give the JVM about half of the host's memory, but no more than 30 GB; recommended hardware is 32 CPU cores, SSD disks, and a 10 GbE NIC.
    logstash: collects and processes logs, then sends them to elasticsearch.
    kibana: reads data from ES for visualization and data management.

Main Elasticsearch node types

  • data node: stores the cluster's data. node.roles: [ data ]
  • master node: creates and deletes indices, allocates shards, and adds or removes nodes. node.roles: [ master ]
  • client node / coordinating node: forwards read/write requests to the data nodes and cluster-management operations to the master node; serves as the cluster's access point, stores no data, and does not take part in master election. node.roles: [ ]
  • ingest node: pre-processing node that can transform documents before they are indexed; all nodes support ingest by default
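The role types above map directly onto the node.roles setting in elasticsearch.yml. A minimal sketch of the three dedicated-node configurations described above (one setting per node, not all in one file):

```yaml
# elasticsearch.yml on a dedicated master-eligible node
node.roles: [ master ]

# elasticsearch.yml on a dedicated data node
node.roles: [ data ]

# elasticsearch.yml on a coordinating-only node:
# an empty role list means no data, no master eligibility
node.roles: [ ]
```

A node with no node.roles line at all defaults to holding every role, which is what the three-node cluster in this post uses.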

2、Index, doc, shard, and replica concepts

Elasticsearch core concepts

  • node: a host that stores business data
  • cluster: multiple hosts forming a highly available elasticsearch cluster
  • document: "doc" for short; a unit of data stored in elasticsearch
  • index: a logical collection of documents of the same kind, queried, modified, and deleted as a single logical unit
  • shard: a slice of an index; an index is split into one or more shards, and together the shards hold all of the index's data; shards come in primary and replica form
  • replica: a complete copy of a primary shard kept on a different host
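Shard and replica counts are set per index. As a sketch (reusing the cluster address and the serveradmin account created later in this post; the index name is hypothetical), an index with three primary shards and one replica of each could be created like this:

```shell
# create an index with 3 primary shards, each with 1 replica (6 shards total)
curl -u serveradmin:server123 -X PUT "http://192.168.80.85:9200/demo_index?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'
```

Note that number_of_replicas can be changed at any time, but number_of_shards is fixed once the index is created.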

3、ELK deployment planning for different environments

Small-scale business environment
(deployment-planning diagrams omitted)

4、Deploying an Elasticsearch cluster from deb packages or binaries

IP              Host       Service(s)                Version
192.168.80.85   es-node1   elasticsearch-01, kibana  8.5.1
192.168.80.71   es-node2   elasticsearch-02          8.5.1
192.168.80.66   es-node3   elasticsearch-03          8.5.1

4.1、Environment preparation

1.1 Kernel parameters

vim /etc/sysctl.conf
# add the following line, then apply it with: sysctl -p
vm.max_map_count=262144

1.2 Hostname resolution

vim /etc/hosts
192.168.80.85 es-node1
192.168.80.71 es-node2
192.168.80.66 es-node3

1.3 Resource limits tuning

cat /etc/security/limits.conf
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000

1.4 Upload the packages and create the directories

mkdir -p /opt/elasticsearch
mkdir -p /opt/elasticsearch/esdata /opt/elasticsearch/eslog /opt/elasticsearch/apps
# directory layout
├── apps
├── docker-compose.yaml
├── esdata
└── eslog
# unpack elasticsearch-8.5.1 under apps/, then symlink it (run from /opt/elasticsearch/apps)
ln -sv /opt/elasticsearch/apps/elasticsearch-8.5.1 elasticsearch
groupadd -g 2888 elasticsearch && useradd -u 2888 -g 2888 -r -m -s /bin/bash elasticsearch
passwd elasticsearch
# Elasticsearch refuses to run as root, so the service user must own the tree
chown -R elasticsearch:elasticsearch /opt/elasticsearch

4.2、X-Pack certificate signing setup

vim instances.yml
instances:
  - name: "es-node1"
    ip:
      - "192.168.80.85"
  - name: "es-node2"
    ip:
      - "192.168.80.71"
  - name: "es-node3"
    ip:
      - "192.168.80.66"

2.1 Generate the CA and certificates

# generate the CA (private key and certificate); default name elastic-stack-ca.p12
bin/elasticsearch-certutil ca
# generate a certificate signed by the CA; default name elastic-certificates.p12
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# sign certificates for the cluster hosts listed in instances.yml
bin/elasticsearch-certutil cert --silent --in instances.yml --out certs.zip --pass Tenda@cscloud123 --ca elastic-stack-ca.p12
Enter password for CA (elastic-stack-ca.p12) :  # no password was set on the CA, so just press Enter
# distribute the certificates
mkdir config/certs  # create on all three servers
cp -rp es-node1/es-node1.p12 config/certs/
scp -r es-node2 root@192.168.80.71:/opt/elasticsearch/apps/elasticsearch
cp -rp es-node2/es-node2.p12 config/certs/  # run on es-node2
scp -r es-node3 root@192.168.80.66:/opt/elasticsearch/apps/elasticsearch
cp -rp es-node3/es-node3.p12 config/certs/  # run on es-node3
# generate the keystore file (stores the certificate passwords)
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password    # enter the certificate password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password  # enter the certificate password
# copy the keystore file to the other nodes
cd config
scp ...

2.2 Edit the configuration file

vim apps/elasticsearch/config/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: tenda-es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: es-node1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /opt/elasticsearch/esdata
#
# Path to log files:
#
path.logs: /opt/elasticsearch/eslog
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.80.85", "192.168.80.71", "192.168.80.66"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["192.168.80.85", "192.168.80.71", "192.168.80.66"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
action.destructive_requires_name: false

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: /opt/elasticsearch/apps/elasticsearch/config/certs/es-node1.p12
xpack.security.transport.ssl.truststore.path: /opt/elasticsearch/apps/elasticsearch/config/certs/es-node1.p12

The default JVM heap is 4 GB; for a test environment, lower it in /opt/elasticsearch/apps/elasticsearch/config/jvm.options:

-Xms2g
-Xmx2g

2.3 Create the elasticsearch.service unit file

vim /lib/systemd/system/elasticsearch.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
RuntimeDirectory=elasticsearch
Environment=ES_HOME=/opt/elasticsearch/apps/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/apps/elasticsearch/config
Environment=PID_DIR=/opt/elasticsearch/apps/elasticsearch
WorkingDirectory=/opt/elasticsearch/apps/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/opt/elasticsearch/apps/elasticsearch/bin/elasticsearch --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
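With the unit file in place, the usual systemd steps apply (standard systemctl usage; the original post does not show this step explicitly):

```shell
# pick up the new unit file, then start the service on each node
systemctl daemon-reload
systemctl enable --now elasticsearch
systemctl status elasticsearch
# once all three nodes are up, the HTTP port should answer
# (credentials are created in section 4.3):
# curl -u serveradmin:server123 http://192.168.80.85:9200
```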

4.3、User management

# set passwords for the built-in accounts interactively
bin/elasticsearch-setup-passwords interactive
# create an administrator account
bin/elasticsearch-users useradd serveradmin -p server123 -r superuser

5、Basic usage of the Elasticsearch API

# list the operations supported under _cat
curl -u serveradmin:server123 -X GET http://192.168.80.85:9200/_cat
# show the master node
curl -u serveradmin:server123 -X GET http://192.168.80.85:9200/_cat/master?v
# show the node status
curl -u serveradmin:server123 -X GET http://192.168.80.85:9200/_cat/nodes?v
# cluster health
curl -u serveradmin:server123 -X GET http://192.168.80.85:9200/_cat/health?v
# create the index test_index ("pretty" pretty-prints the JSON response)
curl -u serveradmin:server123 -X PUT http://192.168.80.85:9200/test_index?pretty
# inspect the index
curl -u serveradmin:server123 -X GET http://192.168.80.85:9200/test_index?pretty
# index a document
curl -u serveradmin:server123 -X POST "http://192.168.80.85:9200/test_index/_doc/1?pretty" -H 'Content-Type: application/json' -d'{"name": "Jack", "age": 19}'
# retrieve the document
curl -u serveradmin:server123 -X GET "http://192.168.80.85:9200/test_index/_doc/1?pretty"
# change the replica count (replicas can be adjusted at any time)
curl -u serveradmin:server123 -X PUT "http://192.168.80.85:9200/test_index/_settings" -H 'Content-Type: application/json' -d'{"number_of_replicas": 2}'
# view the index settings (GET, not PUT)
curl -u serveradmin:server123 -X GET "http://192.168.80.85:9200/test_index/_settings?pretty"
# reopen a closed index
curl -u serveradmin:server123 -X POST "http://192.168.80.85:9200/test_index/_open?pretty"
# delete the index
curl -u serveradmin:server123 -X DELETE http://192.168.80.85:9200/test_index?pretty

6、Install the head plugin to manage data in ES
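The original post shows this step only as a screenshot. One common way to run elasticsearch-head (an assumption, not confirmed by the source) is the mobz/elasticsearch-head Docker image, which serves a web UI on port 9100:

```shell
# run the head web UI, then browse to http://<docker-host>:9100
docker run -d --name es-head -p 9100:9100 mobz/elasticsearch-head:5
```

Because X-Pack security is enabled on this cluster, head needs the ES nodes to allow cross-origin requests (http.cors.enabled: true and http.cors.allow-origin: "*" in elasticsearch.yml), and credentials are typically passed in the URL, e.g. ?auth_user=serveradmin&auth_password=server123 — treat these details as environment-specific assumptions.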

7、Install Logstash

rpm -ivh logstash-8.5.1-x86_64.rpm
# adjust settings as needed in /etc/logstash/logstash.yml

8、Set up Kibana and ship system logs to ES through Logstash

vim /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => "/var/log/secure"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "1"
  }
}
output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["192.168.80.85:9200"]
      index => "systemlog-%{+YYYY.MM.dd}"
      user => "serveradmin"
      password => "server123"
    }
  }
}
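Before starting the service, the pipeline file can be checked for syntax errors; the binary path assumes the default rpm layout:

```shell
# parse the pipeline config and exit without starting the pipeline
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit
```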
systemctl start logstash

Query ES to verify that the data arrived.

Set up the Kibana server:
rpm -ivh kibana-8.5.1-x86_64.rpm
vim /etc/kibana/kibana.yml
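The kibana.yml edits appear only as screenshots in the original; a typical minimal set for this topology (an assumption — adjust hosts and credentials to your environment) would be:

```yaml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.80.85:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "<password set in section 4.3>"
i18n.locale: "zh-CN"
```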

systemctl start kibana
On the Kibana home page: Stack Management --> Data Views --> Create data view
Then browse the view in Discover.
