Set up one Nginx server and three Kafka brokers
Installing the nginx-kafka module
1. Install git
yum install -y git
2. Change to the /usr/local/src directory, then clone the Kafka C client (librdkafka) source
cd /usr/local/src
git clone https://github.com/edenhill/librdkafka            # upstream repo, may download slowly
git clone https://gitee.com/kalista-wangcc_admin/librdkafka # Gitee mirror, downloads faster
3. Enter the librdkafka directory and build it
cd librdkafka
yum install -y gcc gcc-c++ pcre-devel zlib-devel
./configure
make && make install
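If the build succeeds, the library and headers land under the default install prefix. A quick check (assuming the default /usr/local prefix; paths differ if you passed --prefix to ./configure):

```shell
# librdkafka installs its shared library under /usr/local/lib by default
ls -l /usr/local/lib/librdkafka*
# ...and its headers under /usr/local/include/librdkafka
ls /usr/local/include/librdkafka/rdkafka.h
```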
4. Install the nginx/kafka integration module: go back to /usr/local/src and clone the ngx_kafka_module source
cd /usr/local/src
git clone https://github.com/brg-liuwei/ngx_kafka_module
5. Enter the nginx source directory and compile nginx together with the module
cd /usr/local/src/nginx-1.12.2
./configure --add-module=/usr/local/src/ngx_kafka_module/
make
make install
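To confirm the module was actually compiled in, you can inspect nginx's configure arguments (assuming nginx was installed to its default prefix, /usr/local/nginx):

```shell
# -V prints the version and configure arguments to stderr;
# the --add-module path should appear in the output
/usr/local/nginx/sbin/nginx -V 2>&1 | grep ngx_kafka_module
```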
6. Edit nginx's configuration file; the full nginx.conf is shown below
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    kafka;
    kafka_broker_list 192.168.88.11:9092 192.168.88.12:9092 192.168.88.13:9092;

    server {
        listen       80;
        # hostname of the Nginx server
        server_name  master;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        # one location per Kafka topic (track and user)
        location = /kafka/track {
            kafka_topic track;
        }

        location = /kafka/user {
            kafka_topic user;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
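Before (re)starting nginx, you can sanity-check the edited configuration (default install prefix assumed):

```shell
# -t tests the configuration file for syntax errors without starting nginx
/usr/local/nginx/sbin/nginx -t
```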
7. Start the ZooKeeper and Kafka cluster (run on each of the three nodes), and create the topics
/usr/local/zookeeper-3.4.9/bin/zkServer.sh start
/usr/local/kafka/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /bigdata/kafka_2.11-0.10.2.1/config/server.properties
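The step above mentions creating the topics but does not show the commands. A sketch, assuming the same Kafka install path and ZooKeeper listening on port 2181 on the first broker host (adjust paths and addresses to your cluster):

```shell
cd /usr/local/kafka/kafka_2.11-0.10.2.1
# create the two topics that the nginx locations write to
bin/kafka-topics.sh --create --zookeeper 192.168.88.11:2181 \
    --replication-factor 3 --partitions 3 --topic track
bin/kafka-topics.sh --create --zookeeper 192.168.88.11:2181 \
    --replication-factor 3 --partitions 3 --topic user
```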
8. Start nginx; it fails with an error saying librdkafka.so.1 cannot be found
error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory
9. Register the shared library
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
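To verify that the library is now resolvable by the dynamic linker (path assumed from librdkafka's default install):

```shell
# ldconfig -p lists the cached shared libraries; librdkafka.so.1 should appear
ldconfig -p | grep librdkafka
```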
10. Test: POST data to nginx, then check whether the Kafka consumers (one for the track topic, one for user) receive it
curl master/kafka/track -d "message send to kafka topic"
curl localhost/kafka/track -d "another test message"
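To watch the messages arrive, run a console consumer against the track topic in another terminal (Kafka install path assumed as above):

```shell
/usr/local/kafka/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh \
    --bootstrap-server 192.168.88.11:9092 --topic track --from-beginning
```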