Installing Elasticsearch 7.8.0 + Kibana 7.8.0 + Filebeat 7.8.0 with Docker

1. Set system parameters (on every Elasticsearch host)

sudo vim /etc/sysctl.conf

vm.max_map_count=262144
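
After saving the file, the setting can be applied without a reboot and verified:

# reload kernel parameters from /etc/sysctl.conf
sudo sysctl -p

# confirm that vm.max_map_count is now 262144
sysctl vm.max_map_count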

2. Obtain elastic-certificates.p12

Pull the image: docker pull docker.elastic.co/elasticsearch/elasticsearch:7.8.0

Start a temporary container: sudo docker run -dit --name=es <image ID> /bin/bash

Enter the container as root: docker exec -it --user root <container ID> bash

Go into the bin directory.

Run: elasticsearch-certutil ca          (generates the CA file elastic-stack-ca.p12)

Run: elasticsearch-certutil cert --ca elastic-stack-ca.p12          (generates the certificate elastic-certificates.p12)

Exit the container: exit

Run on the host: docker cp es:/usr/share/elasticsearch/elastic-certificates.p12 .

Copy the extracted elastic-certificates.p12 to every server in the cluster.
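
For example, the file can be distributed with scp (the target directory below is a placeholder; use whichever directory will hold docker-compose.yml on each node):

scp elastic-certificates.p12 root@192.168.1.219:/path/to/es-compose/
scp elastic-certificates.p12 root@192.168.1.217:/path/to/es-compose/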

3. Configure Elasticsearch and Kibana

docker-compose.yml

version: '3'
services:
  elasticsearch:                   
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0    
    container_name: elasticsearch7.8.0   
    restart: always                 
    environment:
      - node.name=node-218                   
      - network.publish_host=192.168.1.218 
      - network.host=0.0.0.0                
      - discovery.seed_hosts=192.168.1.218,192.168.1.219,192.168.1.217         
      - cluster.initial_master_nodes=192.168.1.218,192.168.1.219,192.168.1.217  
      - cluster.name=es-cluster     
      - bootstrap.memory_lock=true  
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"    
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12
    ports:
      - 9200:9200    
      - 9300:9300   
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://192.168.1.218:9200
    volumes:
      - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
    hostname: kibana
    depends_on:
      - elasticsearch
    restart: always
    ports:
      - 5601:5601

elasticsearch.yml

network.host: 0.0.0.0
http.cors.enabled: true      # enable CORS, mainly so the elasticsearch-head plugin can access ES
http.cors.allow-origin: "*"  # allow requests from any origin
xpack.security.enabled: true
# enable authentication and transport-layer TLS between nodes
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.audit.enabled: true

kibana.yml

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
# credentials Kibana uses to connect to Elasticsearch; the password must match the one generated later for the kibana built-in user
elasticsearch.username: kibana
elasticsearch.password: ******
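
Before starting, the bind-mounted directories should exist on the host and be writable by the container's elasticsearch user (uid 1000), otherwise the node fails to start. A minimal preparation sketch, matching the volume paths above:

# elasticsearch.yml and kibana.yml shown above go into ./elasticsearch/config and ./kibana
mkdir -p elasticsearch/config elasticsearch/data elasticsearch/logs kibana
# the official image runs Elasticsearch as uid 1000
sudo chown -R 1000:0 elasticsearch/data elasticsearch/logs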

Run: docker-compose up -d    (this pulls the images if needed and starts the containers)
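
Note: if a password was entered when generating elastic-certificates.p12, it also has to be added to the Elasticsearch keystore on every node (skip this if the certificate was created with an empty password):

docker exec -it elasticsearch7.8.0 bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
docker exec -it elasticsearch7.8.0 bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
docker restart elasticsearch7.8.0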

On the other servers in the cluster, the configuration is as follows:

docker-compose.yml

version: '3'
services:
  elasticsearch:                    
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0      
    container_name: elasticsearch7.8.0  
    restart: always                 
    environment:
      - node.name=node-219                   
      - network.publish_host=192.168.1.219 
      - network.host=0.0.0.0               
      - discovery.seed_hosts=192.168.1.218,192.168.1.219,192.168.1.217         
      - cluster.initial_master_nodes=192.168.1.218,192.168.1.219,192.168.1.217 
      - cluster.name=es-cluster     
      - bootstrap.memory_lock=true  
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"    
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml  
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      #- ./elasticsearch/config:/usr/share/elasticsearch/config
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12

    ports:
      - 9200:9200    
      - 9300:9300         

elasticsearch.yml

network.host: 0.0.0.0
http.cors.enabled: true      # enable CORS, mainly so the elasticsearch-head plugin can access ES
http.cors.allow-origin: "*"  # allow requests from any origin
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.type: PKCS12
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.type: PKCS12
xpack.security.audit.enabled: true

Start: docker-compose up -d

On any one server in the cluster:

Go into the bin directory inside the Elasticsearch container.

Run: elasticsearch-setup-passwords -h      (shows the available options)

Run: elasticsearch-setup-passwords auto (auto-generate passwords) or elasticsearch-setup-passwords interactive (set your own passwords)
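
Since Elasticsearch runs inside a container, the command is executed through docker exec, for example:

# prompts for a password for each built-in user (elastic, kibana, logstash_system, ...)
docker exec -it elasticsearch7.8.0 bin/elasticsearch-setup-passwords interactive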

Check that the containers are running: docker ps -a

Open Kibana in the browser and log in with the username and password.
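
To confirm that authentication works and the cluster has formed, a quick check with curl (replace <password> with the password generated for the elastic user):

curl -u elastic:<password> "http://192.168.1.218:9200/_cluster/health?pretty"
curl -u elastic:<password> "http://192.168.1.218:9200/_cat/nodes?v"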

4. Install Filebeat on the servers whose logs need to be collected

Download Filebeat from the official website and extract it on the server.

filebeat.yml

# collect nginx logs via the Filebeat nginx module
filebeat.config.modules:
  path: ./modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
  host: "192.168.1.218:5601"
output.elasticsearch:
  hosts: ["192.168.1.217:9200","192.168.1.218:9200","192.168.1.219:9201"]
  username: "elastic"
  password: "******"

Enable the nginx module

# enable the nginx module
./filebeat modules enable nginx
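
The result can be verified with:

./filebeat modules list      # nginx should now appear under "Enabled"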

In the modules.d directory, the enabled nginx.yml looks like this:

# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.8/filebeat-module-nginx.html

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/usr/local/openresty/nginx/logs/*access.log"]
  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/usr/local/openresty/nginx/logs/error.log"]

  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
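
Before the first start it is usually worth loading the index template and the bundled dashboards. This assumes the elastic credentials in filebeat.yml are accepted by both Elasticsearch and Kibana:

# loads the filebeat-* index template into Elasticsearch and the nginx dashboards into Kibana
./filebeat setup -e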

Start Filebeat: ./filebeat -e -c filebeat.yml
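
Once Filebeat has shipped some events, the new indices can be checked (again replacing <password>):

curl -u elastic:<password> "http://192.168.1.218:9200/_cat/indices/filebeat-*?v"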

Open Kibana again to view the collected logs.