Please respect intellectual property. Original post: http://blog.csdn.net/qq1032355091/article/details/79559003
elasticsearch-5.6.4 Installation
Extract the installation package into the /data directory.
Change ownership:
sudo chown elk:elk -R /data/elasticsearch-5.6.4
Create the data and log directories:
sudo mkdir -pv /data/es/data
sudo mkdir -pv /data/es/logs
sudo chown elk:elk -R /data/es/
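The directory layout above can be tried out without root by using a scratch prefix instead of /data (the prefix here is hypothetical; the real install uses /data as shown):

```shell
# Create the es data/log layout under a temporary prefix so the commands
# can be exercised without sudo.
prefix=$(mktemp -d)
mkdir -pv "$prefix/es/data" "$prefix/es/logs"
# Both directories now exist under the prefix:
ls "$prefix/es"
```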
Edit the configuration file with sudo vi /etc/sysctl.conf and add the following line:
vm.max_map_count=655360
Apply the change:
sudo sysctl -p
Edit the config/elasticsearch.yml file:
cluster.name: dev-es
node.name: node-1
path.data: /data/es/data
path.logs: /data/es/logs
network.host: 0.0.0.0 # listen on all interfaces (internal and external)
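As a quick sanity check, individual settings can be pulled out of the file from a script. A minimal sketch, with the config above reproduced in a heredoc (grep/cut is not a full YAML parser, just enough for simple `key: value` lines):

```shell
# Reproduce the config in a temp file and extract cluster.name.
f=$(mktemp)
cat > "$f" <<'EOF'
cluster.name: dev-es
node.name: node-1
path.data: /data/es/data
path.logs: /data/es/logs
network.host: 0.0.0.0
EOF
cluster=$(grep '^cluster.name:' "$f" | cut -d' ' -f2)
echo "$cluster"   # dev-es
```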
Start it from the command line:
bin/elasticsearch -d
Open http://3.7.60.83:9200/ in a browser; a response like the following indicates a successful installation:
{
  "name" : "node-1",
  "cluster_name" : "dev-es",
  "cluster_uuid" : "Se2O-CGlS967KjMal2H73A",
  "version" : {
    "number" : "5.6.4",
    "build_hash" : "8bbedf5",
    "build_date" : "2017-10-31T18:55:38.105Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
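The same check can be scripted. Below, a trimmed copy of the JSON shown above is embedded directly as a string; against a live node you would fetch it with curl instead:

```shell
# Extract the version number from the ES root response (trimmed sample JSON;
# sed-based extraction is a sketch, not a general JSON parser).
resp='{ "name" : "node-1", "cluster_name" : "dev-es", "version" : { "number" : "5.6.4" } }'
version=$(echo "$resp" | sed -n 's/.*"number" : "\([^"]*\)".*/\1/p')
echo "$version"   # 5.6.4
```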
elasticsearch-head Plugin Installation
Edit elasticsearch-5.6.4/config/elasticsearch.yml and append the following:
http.cors.enabled: true
http.cors.allow-origin: "*"
Install dependencies:
sudo yum install -y nodejs
sudo yum install -y openssl
Install grunt with npm:
sudo npm install -g grunt-cli
Extract the head plugin package and move it into the elasticsearch-5.6.4/ directory. Then, in the elasticsearch-5.6.4/elasticsearch-head-master directory, run:
npm install
Modify the elasticsearch-5.6.4/elasticsearch-head-master/Gruntfile.js file:
connect: {
    server: {
        options: {
            port: 9100,
            hostname: '0.0.0.0', // added: listen on all interfaces
            base: '.',
            keepalive: true
        }
    }
}
Modify the elasticsearch-5.6.4/elasticsearch-head-master/_site/app.js file:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://1.7.0.9:9200"; // set this to the external address of ES; head runs in the browser and cannot reach an internal address
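The edit above can also be scripted with sed instead of done by hand. A sketch against a hypothetical one-line stand-in for app.js (the real file is _site/app.js, and the target address is your own ES external address):

```shell
# Patch the fallback base_uri in a sample app.js line with sed.
f=$(mktemp)
echo 'this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200";' > "$f"
sed 's|http://localhost:9200|http://1.7.0.9:9200|' "$f" > "$f.patched"
grep -o 'http://1.7.0.9:9200' "$f.patched"
```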
In the elasticsearch-5.6.4/elasticsearch-head-master directory, start it with:
grunt server
Open http://1.7.0.9:9100/ in a browser to confirm it works.
kibana-5.6.4 Installation
Extract the installation package into the /data directory.
Modify kibana-5.6.4/config/kibana.yml:
server.host: "10.0.6.4" # internal address Kibana listens on
elasticsearch.url: "http://10.0.6.37:9200" # internal address of ES
server.port: 5602
Start it from the command line:
./kibana-5.6.4/bin/kibana
Open http://1.2.2.3:5602 in a browser.
nginx-1.12.2 Installation
Install dependencies:
sudo yum install -y gcc-c++
sudo yum install -y openssl openssl-devel
Extract the nginx-1.12.2 package, enter the directory, and run in order:
./configure
make
sudo make install
cd into the /usr/local/nginx directory and start it:
sudo ./sbin/nginx          # start
sudo ./sbin/nginx -s stop  # stop
If http://1.2.2.3 loads in a browser, nginx is installed correctly.
Create the configuration folder: mkdir -pv /usr/local/nginx/conf/conf.d/
Edit the configuration file: vim /usr/local/nginx/conf/conf.d/kibana.conf
server {
    listen 80;
    server_name 1.2.2.3;  # this host's address
    auth_basic "Restricted Access";
    auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;  # login authentication
    location / {
        proxy_pass http://10.0.6.4:5602;  # forward to kibana
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Import the new configuration into the http {} block of the main configuration file: vim /usr/local/nginx/conf/nginx.conf
include /usr/local/nginx/conf/conf.d/*.conf;
Set a password:
sudo yum install -y httpd-tools
sudo htpasswd -bc /usr/local/nginx/conf/htpasswd.users admin bmkpes2017
sudo htpasswd -bc /usr/local/nginx/conf/htpasswd.users admin bmkpesprod2017
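Note that htpasswd's -c flag (re)creates the password file, so the second command above overwrites the first entry rather than adding to it; to add more users, drop -c. The underlying behavior is ordinary truncate-vs-append file semantics, sketched here with plain redirection and placeholder hashes instead of real htpasswd output:

```shell
# Demonstrate truncate (like htpasswd -c) vs append (htpasswd without -c).
f=$(mktemp)
echo 'admin:HASH1' > "$f"    # -c behaviour: creates/truncates the file
echo 'admin:HASH2' > "$f"    # second -c: the first entry is gone
lines_after_overwrite=$(wc -l < "$f" | tr -d ' ')
echo 'other:HASH3' >> "$f"   # append: both entries survive
lines_after_append=$(wc -l < "$f" | tr -d ' ')
echo "$lines_after_overwrite $lines_after_append"   # 1 2
```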
Restart nginx:
sudo ./sbin/nginx -s reload
Open http://1.2.2.3 again; a username/password prompt appears, and after logging in you are redirected to Kibana.
logstash-5.6.4 Installation
Extract it and configure the environment variables; nothing else is required.
kafka_2.11-0.10.1.0 Installation (using the ZooKeeper bundled with Kafka)
Extract the package into the /data directory, fix ownership, and configure the environment variables.
Modify the config/zookeeper.properties file:
dataDir=/data/zookeeper_data
clientPort=2181
Modify the config/server.properties file:
broker.id=0
delete.topic.enable=true
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://1.2.2.5:9092
log.dirs=/data/kafka-logs
zookeeper.connect=localhost:2181
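The properties format is plain key=value, so individual settings are easy to pull out in scripts. A sketch, with the config above reproduced in a heredoc:

```shell
# Extract log.dirs from a sample server.properties.
p=$(mktemp)
cat > "$p" <<'EOF'
broker.id=0
delete.topic.enable=true
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://1.2.2.5:9092
log.dirs=/data/kafka-logs
zookeeper.connect=localhost:2181
EOF
logdir=$(grep '^log.dirs=' "$p" | cut -d= -f2)
echo "$logdir"   # /data/kafka-logs
```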
Create the /data/kafka-logs and /data/zookeeper_data folders and fix their ownership:
sudo chown elk:elk -R /data/kafka-logs
sudo chown elk:elk -R /data/zookeeper_data
Start ZooKeeper, then Kafka:
zookeeper-server-start.sh /data/kafka_2.11-0.10.1.0/config/zookeeper.properties
kafka-server-start.sh /data/kafka_2.11-0.10.1.0/config/server.properties
Write an operations script, kafka.sh:
#!/bin/bash
source /etc/profile
# start zk in the foreground
if [ "$1" = "zk" ]
then
    zookeeper-server-start.sh /data/kafka_2.11-0.10.1.0/config/zookeeper.properties
# start zk in the background
elif [ "$1" = "zkbg" ]
then
    nohup zookeeper-server-start.sh /data/kafka_2.11-0.10.1.0/config/zookeeper.properties >/dev/null 2>&1 &
# stop zk
elif [ "$1" = "zkstop" ]
then
    zookeeper-server-stop.sh
# start kafka in the foreground
elif [ "$1" = "kafka" ]
then
    kafka-server-start.sh /data/kafka_2.11-0.10.1.0/config/server.properties
# start kafka in the background
elif [ "$1" = "kafkabg" ]
then
    nohup kafka-server-start.sh /data/kafka_2.11-0.10.1.0/config/server.properties >/dev/null 2>&1 &
# stop kafka
elif [ "$1" = "kafkastop" ]
then
    kafka-server-stop.sh
# console consumer
elif [ "$1" = "consumer" ]
then
    kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic "$2"
# console producer
elif [ "$1" = "producer" ]
then
    kafka-console-producer.sh --broker-list localhost:9092 --topic "$2"
# create a topic
elif [ "$1" = "create" ]
then
    kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic "$2"
# list topics
elif [ "$1" = "list" ]
then
    kafka-topics.sh --list --zookeeper localhost:2181
# delete a topic
elif [ "$1" = "delete" ]
then
    kafka-topics.sh --delete --zookeeper localhost:2181 --topic "$2"
else
    echo "parameter invalid"
fi
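The dispatch logic of the script can be exercised without a running broker by printing the command it would run instead of executing it. A dry-run sketch (kafka_cmd is a hypothetical helper, not part of the Kafka distribution; only a few subcommands are shown):

```shell
# Dry-run dispatcher: map a subcommand to the Kafka CLI invocation it stands for.
kafka_cmd() {
  case "$1" in
    list)   echo "kafka-topics.sh --list --zookeeper localhost:2181" ;;
    create) echo "kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic $2" ;;
    delete) echo "kafka-topics.sh --delete --zookeeper localhost:2181 --topic $2" ;;
    *)      echo "parameter invalid" ;;
  esac
}

kafka_cmd list
kafka_cmd create mytopic
kafka_cmd bogus   # parameter invalid
```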
zookeeper-3.4.10 Cluster Installation
Extract the package:
tar -zxvf zookeeper-3.4.10.tar.gz -C /data
In the /data/zookeeper-3.4.10/conf directory, rename the sample file:
mv zoo_sample.cfg zoo.cfg
Edit the zoo.cfg file:
dataDir=/data/zookeeper-3.4.10/data
clientPort=2181
server.0=master:2888:3888
server.1=slave01:2888:3888
server.2=slave02:2888:3888
Create the data directory:
mkdir /data/zookeeper-3.4.10/data
In the /data/zookeeper-3.4.10/data directory, add a myid file:
echo 0 > myid
Copy the zookeeper folder to the other two nodes:
scp -r /data/zookeeper-3.4.10 slave01:/data
scp -r /data/zookeeper-3.4.10 slave02:/data
On the other two nodes, set the value in the myid file accordingly:
slave01: echo 1 > myid
slave02: echo 2 > myid
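Each node's myid must match its server.N line in zoo.cfg. A sketch that derives the id for a given hostname from the config, so the value does not have to be set by hand on each node (hostnames master/slave01/slave02 as configured above; the config is reproduced in a heredoc):

```shell
# Look up a host's ZooKeeper id from the server.N lines of zoo.cfg.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.0=master:2888:3888
server.1=slave01:2888:3888
server.2=slave02:2888:3888
EOF
myid_for() {
  grep "=$1:" "$cfg" | sed -n 's/^server\.\([0-9]*\)=.*/\1/p'
}

myid_for slave01   # 1
```

On a real node this value would then be written with `myid_for "$(hostname)" > /data/zookeeper-3.4.10/data/myid`.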