Installing and Using ELK with Docker

1. Install Docker

Docker comes in two major editions, CE and EE. CE is the Community Edition (free); EE is the Enterprise Edition, which emphasizes security and requires payment. We use the CE edition here.

To keep the system stable, it is recommended to run an update first:

sudo yum update

Install the dependency packages:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker yum repository:

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

If downloads from the official repository are too slow, use a mirror in China instead:

sudo yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo

Install Docker:

sudo yum makecache fast
sudo yum install docker-ce
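
On CentOS the Docker daemon is not started automatically after installation. Assuming a systemd-based host (which matches the yum commands above), start and enable it before running the test below:

# Start the daemon now and have it start at boot
sudo systemctl enable --now docker
# Confirm that it is running
systemctl status docker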

Test whether the installation succeeded:

docker run hello-world

Create a docker group and add the current user to it, so that the Docker engine's Unix socket can be accessed without the root user:

# Create the docker group
sudo groupadd docker
# Add the current user to the group
sudo usermod -aG docker $USER
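
Note that the group change only takes effect for new login sessions. Assuming the commands above succeeded, it can be picked up immediately and verified like this:

# Apply the new group in the current session (or simply log out and back in)
newgrp docker
# This should now work without sudo
docker info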

If the installation above failed, you can remove Docker and install it again:

sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-selinux \
  docker-engine-selinux \
  docker-engine

2. Install docker-compose

docker-compose is a Docker orchestration tool; it takes care of the dependencies between our services.

Two installation methods are provided here:

Direct download
  1. Download the docker-compose binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  2. Make the file executable
sudo chmod +x /usr/local/bin/docker-compose
  3. Verify the installation
docker-compose version
Install via pip
  1. Install pip
# Install the EPEL repository (provides python-pip)
yum -y install epel-release
# Install pip
yum -y install python-pip
# Upgrade pip
pip install --upgrade pip
# Verify pip
pip --version
  2. Install docker-compose
pip install -U docker-compose==1.23.2
  3. Verify the installation
docker-compose version

3. Install ELKC

ELKC stands for elasticsearch (a search-oriented data store), logstash (log collection, filtering, and analysis), kibana (a web UI for analyzing logs), and cerebro (a tool for monitoring elasticsearch cluster status).

mkdir -p /usr/share/elasticsearch/data

The docker-compose.yml file is as follows:

version: '2.2'
services:
  # elasticsearch node 1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_01
    environment:
      - cluster.name=pibigstar
      - node.name=es7_01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - es7net
  # elasticsearch node 2
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_02
    environment:
      - cluster.name=pibigstar
      - node.name=es7_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data2:/usr/share/elasticsearch/data
    networks:
      - es7net
  # kibana
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.0
    container_name: kibana7
    environment:
      - I18N_LOCALE=zh-CN
      - XPACK_GRAPH_ENABLED=true
      - TIMELION_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED=true
    ports:
      - "5601:5601"
    networks:
      - es7net
  # cerebro
  cerebro:
    image: lmenezes/cerebro:0.8.3
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
volumes:
  es7data1:
    driver: local
  es7data2:
    driver: local

networks:
  es7net:
    driver: bridge
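
Before starting, the file can be syntax-checked from the directory that contains docker-compose.yml; this is optional but catches YAML indentation mistakes early:

# Validate the compose file and print the resolved configuration
docker-compose config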

Start the stack:

docker-compose up
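
To run the stack in the background instead and confirm that both Elasticsearch nodes joined the cluster, something like the following can be used (service names and ports as defined in the compose file above):

# Start in detached mode
docker-compose up -d
# Follow the logs of the first node
docker-compose logs -f elasticsearch
# Check cluster health; "number_of_nodes" should be 2
curl http://localhost:9200/_cluster/health?pretty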

Notes:

1. If you see this message:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
it means the kernel setting max_map_count is too low. Edit /etc/sysctl.conf, append vm.max_map_count=262144, save, run sysctl -p, and then start the stack again.
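
A minimal way to apply this on the host, assuming sudo access:

# Persist the setting and reload kernel parameters
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Verify the new value
sysctl vm.max_map_count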

2. If something goes wrong during startup, clear the old data after shutting down and before starting again:

# Stop the containers and remove the data volumes
docker-compose down -v
# Start again
docker-compose up
  • Kibana page: http://localhost:5601
  • Cerebro page: http://localhost:9000

4. Start Logstash

  1. Download the test data
    http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
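    As a sketch, the data can be fetched and unpacked like this on a Linux host (on Windows, download and unzip it manually so the paths match the F:/ locations used below):
curl -O http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
unzip ml-latest-small.zip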

  2. Download Logstash

https://www.elastic.co/cn/downloads/logstash

  3. Configure logstash.conf
input {
  file {
    # Read the whole file from the beginning
    path => ["F:/elasticsearch/ml-latest-small/movies.csv"]
    start_position => "beginning"
    # Disable sincedb tracking ("nul" on Windows, "/dev/null" on Linux)
    sincedb_path => "nul"
  }
}

filter {
  # Parse each CSV line into id, content, and genre fields
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }

  # Split the genre string into an array and drop metadata fields
  mutate {
    split => { "genre" => "|" }
    remove_field => ["path", "host","@timestamp","message"]
  }

  # Split "content" into the title and the release year at the opening parenthesis
  mutate {
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}" }
    add_field => { "year" => "%{[content][1]}" }
  }

  # Convert year to an integer, trim whitespace from the title, and drop temporary fields
  mutate {
    convert => {
      "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host","@timestamp","message","content"]
  }
}

output {
  # Index each row into the "movies" index, using the CSV id as the document id
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "movies"
    document_id => "%{id}"
  }
  # Also echo each event to the console
  stdout {}
}
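
Before the full run, the pipeline configuration can be syntax-checked first; the path below is the Windows one used in this example:

# Parse and validate the config only, then exit
logstash -f F:\elasticsearch\conf\logstash.conf --config.test_and_exit
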
  4. Start Logstash
cd bin
logstash -f F:\elasticsearch\conf\logstash.conf
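
Once Logstash finishes importing, the result can be verified against the movies index created above, for example:

# Count the documents in the movies index
curl "http://localhost:9200/movies/_count?pretty"
# Fetch one sample document
curl "http://localhost:9200/movies/_search?size=1&pretty"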