Installing and Deploying Kafka on Linux
System Environment

Component | Version |
Linux (64-bit) | 6.5 |
ZooKeeper | 3.4.5 |
Kafka | 0.11.0.0 (Scala 2.11, host 172.16.177.219) |
Single-Node Installation
Prerequisite: the JDK must already be installed.
Official download page: http://kafka.apache.org/downloads.html
1) Download Kafka and extract it
tar zxvf kafka_2.11-0.11.0.0.tgz
mv kafka_2.11-0.11.0.0/ /opt/module/kafka
2) Enter the installation directory
cd /opt/module/kafka
3) Create a logs folder under /opt/module/kafka
[root@localhost kafka]$ mkdir logs
4) Edit the configuration file
[root@localhost kafka]$ cd /opt/module/kafka/config/
[root@localhost config]$ vi server.properties
# Globally unique broker ID; must not be repeated across brokers
broker.id=0
# Enable topic deletion
delete.topic.enable=true
# Path where Kafka stores its log data
log.dirs=/opt/module/kafka/logs
# ZooKeeper connect address (for a cluster, list every node, e.g. hadoop102:2181,hadoop103:2181,hadoop104:2181)
zookeeper.connect=172.16.177.219:2181
Also set listeners=PLAINTEXT://172.16.177.219:9092; the broker is reachable from other machines only after an explicit address is configured here.
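Taken together, the edited portion of server.properties for this single-node setup looks like the fragment below. Note that advertised.listeners is an extra setting not mentioned above: it is the address the broker hands out to clients, and is usually needed alongside listeners for external access. The IP is this document's example host; substitute your own.

```properties
broker.id=0
delete.topic.enable=true
log.dirs=/opt/module/kafka/logs
zookeeper.connect=172.16.177.219:2181
listeners=PLAINTEXT://172.16.177.219:9092
advertised.listeners=PLAINTEXT://172.16.177.219:9092
```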
5) Configure environment variables
[root@localhost module]$ sudo vi /etc/profile
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
[root@localhost module]$ source /etc/profile
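After sourcing /etc/profile, the Kafka scripts resolve from any directory. A quick sanity check, re-creating the same profile entries in the current shell:

```shell
# Same two lines added to /etc/profile above
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin

# The variable now resolves, and kafka-*.sh scripts are on PATH
echo "$KAFKA_HOME"    # → /opt/module/kafka
```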
Start the bundled ZooKeeper and then the Kafka broker with the default configuration files:
[root@localhost kafka]$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
[root@localhost kafka]$ bin/kafka-server-start.sh -daemon config/server.properties
Create a topic named "first":
[root@localhost kafka]$ bin/kafka-topics.sh --zookeeper 172.16.177.219:2181 --create --replication-factor 1 --partitions 1 --topic first
Start a console producer (the message-sending client):
[root@localhost kafka]$ bin/kafka-console-producer.sh --broker-list 172.16.177.219:9092 --topic first
Start a console consumer:
[root@localhost kafka]$ bin/kafka-console-consumer.sh --zookeeper 172.16.177.219:2181 --topic first --from-beginning
Using Kafka from PHP, with the Laravel 5.8 framework as the base. The \Kafka\* classes in the code below match the weiboad/kafka-php client library, which should be installed via Composer first. The example has two parts:
1. Producer: create the file \app\Http\Controllers\KafkaCrontroller.php with the code below, and add a matching route for it.
<?php

namespace App\Http\Controllers;

class KafkaCrontroller extends Controller
{
    public function Producer()
    {
        $config = \Kafka\ProducerConfig::getInstance();
        $config->setMetadataRefreshIntervalMs(10000); // refresh broker metadata every 10 s
        $config->setMetadataBrokerList('172.16.177.219:9092');
        $config->setBrokerVersion('1.0.0');
        $config->setRequiredAck(1);       // wait for the partition leader's acknowledgement
        $config->setIsAsyn(false);        // synchronous send
        $config->setProduceInterval(500);

        // The callback returns the batch of messages to produce
        $producer = new \Kafka\Producer(function () {
            return array(
                array(
                    'topic' => 'first',
                    'value' => 'test....message.',
                    'key'   => '',
                ),
            );
        });
        //$producer->setLogger($logger);
        $producer->success(function ($result) {
            var_dump($result);
        });
        $producer->error(function ($errorCode) {
            var_dump($errorCode);
        });
        $producer->send(true);
    }
}
Hit the route to run the producer and write data into Kafka.
2. Consumer: create a queue-consuming console command.
Run the Artisan generator in a terminal (PhpStorm's built-in terminal works):
php artisan make:command ConsumerKafka
This generates app\Console\Commands\ConsumerKafka.php.
Add the consumer logic to the generated command class:
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class ConsumerKafka extends Command
{
    /**
     * The name and signature of the console command.
     *
     * @var string
     */
    protected $signature = 'consumer:kafka';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Consume Kafka messages asynchronously';

    /**
     * Create a new command instance.
     *
     * @return void
     */
    public function __construct()
    {
        parent::__construct();
    }

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        $config = \Kafka\ConsumerConfig::getInstance();
        $config->setMetadataRefreshIntervalMs(10);
        $config->setMetadataBrokerList('172.16.177.219:9092');
        $config->setGroupId('lest1');
        $config->setBrokerVersion('1.0.0');
        $config->setTopics(['first']);

        $consumer = new \Kafka\Consumer();
        // Blocks and invokes the callback for every message received
        $consumer->start(function ($topic, $part, $message) {
            var_dump($message);
        });
    }
}
Start the consumer from a terminal; it receives data such as:
D:\www\laravel58>php artisan consumer:kafka
array(3) {
["offset"]=>
int(31)
["size"]=>
int(38)
["message"]=>
array(6) {
["crc"]=>
int(3905038476)
["magic"]=>
int(1)
["attr"]=>
int(0)
["timestamp"]=>
int(-1)
["key"]=>
string(0) ""
["value"]=>
string(16) "test....message."
}
}
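Each record carries the offset, total size, and a message map whose "value" field holds the payload sent by the producer. When scripting around such dumps, the payload can be pulled out of the var_dump text; a minimal sketch over the sample lines above:

```shell
# Two lines copied verbatim from the var_dump output above
dump='["value"]=>
string(16) "test....message."'

# Extract the quoted payload that follows the "value" key
payload=$(printf '%s\n' "$dump" | sed -n 's/^string([0-9]*) "\(.*\)"$/\1/p')
echo "$payload"    # → test....message.
```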