Deploying ELK with Docker

1. Environment setup

Host roles:
192.168.20.51  elasticsearch1 & kibana
192.168.20.52  elasticsearch2
192.168.20.53  elasticsearch3

Edit /etc/sysctl.conf and append the following line at the end:

sudo vim /etc/sysctl.conf
vm.max_map_count=655360

Apply the change and verify it took effect:

sudo sysctl -p
sudo sysctl -a | grep max_map_count

2. Configuring ELK

2.1 Configuration on 192.168.20.51

Edit the elasticsearch.yml file:

cluster.name: es-cluster
node.name: node1
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 192.168.20.51
http.port: 9200
discovery.seed_hosts: ["192.168.20.52","192.168.20.53"]
cluster.initial_master_nodes: ["node1","node2","node3"]
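
Note: node.master and node.data still work on 7.15 but have been deprecated since 7.9 in favor of a node.roles list. If you prefer the newer syntax, the two flags above are equivalent to this one line:

node.roles: [ master, data ]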

Edit the kibana.yml file:

server.host: "192.168.20.51"
elasticsearch.hosts: ["http://192.168.20.51:9200","http://192.168.20.52:9200","http://192.168.20.53:9200"]
i18n.locale: "zh-CN"

Edit docker-compose.yml and start the stack:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es01
    hostname: elastic
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/es_data:/usr/share/elasticsearch/data
      - /data/es_logs:/usr/share/elasticsearch/logs
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    # published ports are discarded under host networking, so no "ports" section is needed
    network_mode: "host"

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.2
    container_name: kibana
    hostname: kibana
    restart: always
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    network_mode: "host"
    depends_on:
      - es01

Note: the mapped directories must be created first, with ownership adjusted so the container user (uid 1000) can write to them:

mkdir -p /data/es_data /data/es_logs
chown -R 1000:1000 /data/es_data /data/es_logs

Start it:

docker-compose up -d
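
Before configuring the other nodes, you can watch the container come up; the full cluster check comes in section 2.4 once all three nodes are running:

docker logs -f es01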

2.2 Configuration on 192.168.20.52

Edit elasticsearch.yml:

cluster.name: es-cluster
node.name: node2
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.20.51","192.168.20.53"]
cluster.initial_master_nodes: ["node1","node2","node3"]

Edit docker-compose.yml:

version: '2.2'
services:
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es02
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/es_data:/usr/share/elasticsearch/data
      - /data/es_logs:/usr/share/elasticsearch/logs
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    network_mode: "host"

Note: the mapped directories must be created first, with ownership adjusted so the container user (uid 1000) can write to them:

mkdir -p /data/es_data /data/es_logs
chown -R 1000:1000 /data/es_data /data/es_logs

Start it:

docker-compose up -d

2.3 Configuration on 192.168.20.53

Edit elasticsearch.yml:

cluster.name: es-cluster
node.name: node3
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["192.168.20.51","192.168.20.52"]
cluster.initial_master_nodes: ["node1","node2","node3"]

Edit docker-compose.yml:

version: '2.2'
services:
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es03
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/es_data:/usr/share/elasticsearch/data
      - /data/es_logs:/usr/share/elasticsearch/logs
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    network_mode: "host"

Note: the mapped directories must be created first, with ownership adjusted so the container user (uid 1000) can write to them:

mkdir -p /data/es_data /data/es_logs
chown -R 1000:1000 /data/es_data /data/es_logs

Start it:

docker-compose up -d

2.4 Verification

Check in a browser:

192.168.20.51:9200
192.168.20.52:9200
192.168.20.53:9200
192.168.20.51:5601
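
You can also confirm from the shell that all three nodes have joined the cluster, using the standard cluster APIs:

curl 'http://192.168.20.51:9200/_cluster/health?pretty'
curl 'http://192.168.20.51:9200/_cat/nodes?v'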

3. Starting Filebeat

Edit filebeat.docker.yml:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  paths:
    - /log/syslog
  exclude_lines: ['sda']

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

setup.template.settings:
  index.number_of_shards: 3

processors:
- add_cloud_metadata: ~
- decode_json_fields:
    fields: ['message']
    target: ''
    overwrite_keys: true

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'

The Filebeat launch script, filebeat.sh:

#!/bin/bash

docker run -d \
  -v /var/log/:/log/ \
  -v /data/filebeat_registry:/usr/share/filebeat/data/registry/ \
  -h filebeat \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.15.2 filebeat -e --strict.perms=false \
  -E 'output.elasticsearch.hosts=["192.168.20.51:9200","192.168.20.52:9200","192.168.20.53:9200"]'

Make the script executable and run it:

chmod +x filebeat.sh
./filebeat.sh
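
To confirm that events are arriving, list the Filebeat indices (Filebeat 7.15 writes to filebeat-7.15.2-* by default):

curl 'http://192.168.20.51:9200/_cat/indices/filebeat-*?v'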

4. Setting up basic security

4.1 Enable Elasticsearch security features

Add the following to elasticsearch.yml on every es node to turn security on:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

Restart Elasticsearch.
On any one es node, set the passwords for the built-in accounts (the tool lives inside the container):

docker exec -it es01 bin/elasticsearch-setup-passwords interactive

I set every password to 123456 here; replace interactive with auto to generate random passwords instead.
To update Kibana's connection password, edit kibana.yml:

server.host: "192.168.20.51"
elasticsearch.hosts: ["http://192.168.20.51:9200","http://192.168.20.52:9200","http://192.168.20.53:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "123456"
i18n.locale: "zh-CN"

To make restarting Kibana easier, I split it out of docker-compose.
Write a kibana.sh launch script:

#!/bin/bash

kibana_env="-v $(pwd)/kibana.yml:/usr/share/kibana/config/kibana.yml"
docker run -d $kibana_env --network=host --name kibana -h kibana docker.elastic.co/kibana/kibana:7.15.2

Start Kibana:

chmod +x kibana.sh
./kibana.sh

Browser access now requires credentials; the elastic user has full (superuser) privileges.
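
You can verify authentication from the shell as well (123456 being the password chosen above):

curl -u elastic:123456 'http://192.168.20.51:9200/_cluster/health?pretty'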

4.2 Generate CA certificates

On any one es node, generate the CA certificate (run inside the container). You can optionally protect it with a password; I leave it empty and just press Enter:

docker exec -it es01 bin/elasticsearch-certutil ca

Generate a certificate and private key for the nodes in the cluster; leave the certificate password empty, pressing Enter at the prompts:

docker exec -it es01 bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

Copy the certificate to /data/certs on every es node:

mkdir /data/certs
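
A minimal distribution sketch, assuming the certificate was written to the Elasticsearch home directory inside the container (certutil's default output location) and that ssh access between the hosts is available:

# copy the cert out of the container, then push it to the other nodes
docker cp es01:/usr/share/elasticsearch/elastic-certificates.p12 /data/certs/
scp /data/certs/elastic-certificates.p12 192.168.20.52:/data/certs/
scp /data/certs/elastic-certificates.p12 192.168.20.53:/data/certs/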

The only change from the earlier compose file is one extra line mounting the certificate directory:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    container_name: es01
    hostname: elastic
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /data/certs:/usr/share/elasticsearch/config/certs
      - /data/es_data:/usr/share/elasticsearch/data
      - /data/es_logs:/usr/share/elasticsearch/logs
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    network_mode: "host"

Do the same on every es node.

4.3 Enable certificate authentication between es nodes

Add the following to elasticsearch.yml on every es node:

xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

Restart all the es nodes (the container is named es02/es03 on the other hosts):

docker restart es01

4.4 Encrypt HTTP communication to Elasticsearch

On any one es node, run (inside the container):

docker exec -it es01 bin/elasticsearch-certutil http
Generate a CSR? [y/N]n
Use an existing CA? [y/N]y
Password for elastic-stack-ca.p12:
For how long should your certificate be valid? [5y] 5y
Generate a certificate per node? [y/N]y
node #1 name: es-cluster
Is this correct [Y/n]y
When you are done, press <ENTER> once more to move on to the next step.
192.168.20.51
192.168.20.52
192.168.20.53
Is this correct [Y/n]y
Do you wish to change any of these options? [y/N]n
Generate additional certificates? [Y/n]n
What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]
  1. When asked whether to generate a CSR, enter n.
  2. When asked whether to use an existing CA, enter y.
  3. Enter the path to your CA: the absolute path of the elastic-stack-ca.p12 file you generated for the cluster.
  4. Enter your CA's password. Mine has none, so I just press Enter.
  5. Enter an expiration value for the certificate, in years, months, or days, e.g. 90D for 90 days.
  6. When asked whether to generate one certificate per node, enter y.

This produces elasticsearch-ssl-http.zip in the current directory. Unzip it and copy the certificates into the mounted certs directory:
cp elasticsearch/http.p12 config/certs/
cp kibana/elasticsearch-ca.pem config/certs/

Copy these two certificates to every es node.
Add the following to elasticsearch.yml on every es node:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12

If you set a password on the private key, add it to Elasticsearch's secure settings (run inside the container):

docker exec -it es01 bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password

Restart all the es nodes.
Add the following to kibana.yml:

# switch the connections to https
elasticsearch.hosts: ["https://192.168.20.51:9200","https://192.168.20.52:9200","https://192.168.20.53:9200"]

xpack.security.encryptionKey: "something_at_least_32_characters"
xpack.encryptedSavedObjects.encryptionKey: "encryptedSavedObjects12345678909876543210"
elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/config/certs/elasticsearch-ca.pem

Update kibana.sh, adding the certificate mount:

#!/bin/bash

kibana_env="-v $(pwd)/kibana.yml:/usr/share/kibana/config/kibana.yml -v /data/certs:/usr/share/kibana/config/certs"
docker run -d $kibana_env --network=host --name kibana -h kibana docker.elastic.co/kibana/kibana:7.15.2

Start Kibana:

./kibana.sh
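
A quick end-to-end check that HTTPS is working, using the CA file copied into /data/certs earlier:

curl --cacert /data/certs/elasticsearch-ca.pem -u elastic:123456 'https://192.168.20.51:9200/_cluster/health?pretty'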

5. Securing Filebeat

Update filebeat.docker.yml:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  paths:
    - /log/syslog
  # exclude_lines: ['sda']

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

setup.template.settings:
  index.number_of_shards: 3

processors:
- add_cloud_metadata: ~
- decode_json_fields:
    fields: ['message']
    target: ''
    overwrite_keys: true

output.elasticsearch:
  protocol: 'https'
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: 'elastic'
  password: '123456'
  ssl:
    certificate_authorities: ["certs/elasticsearch-ca.pem"]
    verification_mode: "certificate"

Update filebeat.sh:

#!/bin/bash

docker run -d \
  -v /var/log/:/log/ \
  -v /data/filebeat_registry:/usr/share/filebeat/data/registry/ \
  -v /data/certs:/usr/share/filebeat/certs/ \
  -h filebeat \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:7.15.2 filebeat -e --strict.perms=false \
  -E 'output.elasticsearch.hosts=["192.168.20.51:9200","192.168.20.52:9200","192.168.20.53:9200"]'

Run filebeat.sh, then open Kibana in the browser to view the logs.
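
As a final check from the shell, confirm that Filebeat is still indexing over HTTPS:

curl --cacert /data/certs/elasticsearch-ca.pem -u elastic:123456 'https://192.168.20.51:9200/_cat/indices/filebeat-*?v'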
