Kafka Diary (Part 4): Kafka in Practice with PHP

This post shows how to work with Kafka from PHP using the php-rdkafka extension, with swoole used to run consumer processes.

This is basic usage; you can wrap it further to suit your own project.

When using the API, `$conf->set('enable.auto.commit', 'false')` disables offset auto-commit so you can commit manually; set it back to `'true'` to re-enable auto-commit.
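With auto-commit disabled, the consumer must commit offsets itself after processing each message. A minimal sketch using the php-rdkafka `KafkaConsumer` API (group name, broker address, and topic are illustrative):

```php
<?php
// With enable.auto.commit=false, commit explicitly after processing.
$conf = new \RdKafka\Conf();
$conf->set('group.id', 'my-group');                 // assumed group name
$conf->set('bootstrap.servers', '127.0.0.1:9092');  // assumed broker
$conf->set('enable.auto.commit', 'false');

$consumer = new \RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['test9']);

$message = $consumer->consume(120 * 1000);
if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
    // ... process $message->payload ...
    $consumer->commit($message);        // synchronous commit of this message's offset
    // or $consumer->commitAsync($message); for fire-and-forget commits
}
```

Committing only after the message has been processed gives at-least-once delivery: if the process crashes before the commit, the message is redelivered.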

Running `php swooleProcess.php` starts consumption with swoole: it reads the topic's partition count and forks one consumer process per partition.

If the topic does not exist, it is created automatically. The partition and replica counts then come from the broker's defaults (server-side configuration); client code cannot choose them!

## Whether topics may be auto-created; if false, topics must be created via the CLI
auto.create.topics.enable=true

## Default replication factor for new topics; must not exceed the number of brokers in the cluster
default.replication.factor=1

## Default number of partitions for new topics; used only when no partition count
## is given at creation time (an explicit value at creation overrides it)
num.partitions=1
 
Example: `--replication-factor 3 --partitions 1 --topic replicated-topic` creates a topic named replicated-topic with a single partition that is replicated across three brokers.
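For reference, those flags belong to a full `kafka-topics.sh` invocation like the one below (the script path and ZooKeeper address are illustrative; adjust to your installation):

```shell
# Create a topic explicitly (required when auto.create.topics.enable=false).
bin/kafka-topics.sh --create \
  --zookeeper 127.0.0.1:2181 \
  --replication-factor 3 \
  --partitions 1 \
  --topic replicated-topic

# Verify partition and replica placement
bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic replicated-topic
```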

The full list of conf and topicConf options is given at the end of the article.

swooleProcess.php

<?php
/**
 * Created by PhpStorm.
 * User: zhangwei
 * Date: 2019/3/21
 * Time: 14:48
 */
include_once ('SwooleProcess.class.php');
include_once ('KafkaHelp.class.php');
$topic = 'test9';

$sp = new SwooleProcess($topic);
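The source of `SwooleProcess.class.php` is not shown in this post. A minimal sketch of what such a class might do, forking one `Swoole\Process` worker per partition (these names and the structure are assumptions, not the author's original code):

```php
<?php
// Hypothetical sketch of SwooleProcess.class.php (not the original implementation).
include_once('KafkaHelp.class.php');

class SwooleProcess
{
    public function __construct($topic)
    {
        $helper = new KafkaHelp();
        $helper->setTopic($topic);

        // Fork one worker process per partition of the topic.
        $partitions = $helper->getPartitions();
        foreach ($partitions as $partition) {
            $process = new Swoole\Process(function (Swoole\Process $worker) use ($topic) {
                $h = new KafkaHelp();
                $h->setTopic($topic);
                $h->highlevelConsumer(); // blocks, consuming messages
            });
            $process->start();
        }

        // Reap children so they do not become zombies.
        while (Swoole\Process::wait(true)) {
        }
    }
}
```

With the high-level consumer, the group's rebalance callback distributes partitions across these worker processes automatically, so matching the worker count to the partition count maximizes parallelism.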

KafkaHelp.class.php

<?php
/**
 * Created by PhpStorm.
 * User: zhangwei
 * Date: 2019/3/20
 * Time: 8:51
 */
include_once ('./MMysql.class.php');
class KafkaHelp
{

    protected $producer;
    protected $topic;

    /**
     * Send a message to Kafka.
     * @param string $payload  message body
     * @param string $key      key used by the partitioner
     */
    public function produce($payload, $key)
    {
        $producer = $this->producer();

        $topic = $producer->newTopic($this->topic);
        // RD_KAFKA_PARTITION_UA lets librdkafka choose the partition;
        // $key is fed to the partitioner.
        $topic->produce(RD_KAFKA_PARTITION_UA, 0, $payload, $key);

        // Flush: poll until the local out-queue is drained.
        while ($producer->getOutQLen() > 0) {
            $producer->poll(50);
        }
    }

    /**
     * Lazily create the producer.
     * @return \RdKafka\Producer
     */
    protected function producer()
    {
        if(!$this->producer) {
            $producer = new \RdKafka\Producer();
            $producer->addBrokers($this->getBroker());
            $this->producer = $producer;
        }
        return $this->producer;
    }


    /**
     * Get the Kafka broker list.
     * @return string
     */
    protected function getBroker()
    {
        return '127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094';
    }

    /**
     * Set the topic name.
     * @param string $topic
     */
    public function setTopic($topic)
    {
        $this->topic = $topic;
    }

    /**
     * Get the partition metadata of the topic.
     * @return mixed
     */
    public function getPartitions()
    {
        $consumer = new \RdKafka\Consumer();
        $consumer->addBrokers($this->getBroker());

        $topic   = $consumer->newTopic($this->topic);
        $allInfo = $consumer->getMetadata(false, $topic, 60e3);
        $topics  = $allInfo->getTopics();

        $partitions = [];
        // Only one topic was requested, so take the first entry.
        foreach ($topics as $tp) {
            $partitions = $tp->getPartitions(); // partition metadata
            break;
        }
        // Right after auto-creation the metadata may not yet contain the
        // partition info, so back off briefly and retry.
        if (!count($partitions)) {
            usleep(100 * 1000);
            return $this->getPartitions();
        }
        return $partitions;
    }

    /**
     * High-level (subscribe/rebalance) consumer.
     * @throws Exception
     */
    public function highlevelConsumer()
    {
        $conf = new \RdKafka\Conf();
        // Rebalance callback: Kafka assigns/revokes partitions among the group's consumers automatically
        $conf->setRebalanceCb(function(\RdKafka\KafkaConsumer $kafka, $err, array $partitions = null){
            switch($err) {
                case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
                    echo "Assign:\n";
                    var_dump($partitions);
                    $kafka->assign($partitions);
                    break;

                case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
                    $kafka->assign(null);
                    break;

                default:
                    throw new \Exception($err);
            }
        });

        $conf->set('group.id', 'consumer-group-'.$this->topic);
        $conf->set('client.id', 'client-'.$this->topic);
        $conf->set('bootstrap.servers', $this->getBroker());
        $conf->set('enable.auto.commit', 'false');
        $conf->set('enable.auto.offset.store', 'false');

        $topicConf = new \RdKafka\TopicConf();
        $topicConf->set('offset.store.method', 'broker');
        // If no stored offset is found, start from the earliest message.
        $topicConf->set('auto.offset.reset', 'smallest');
        // Require acknowledgement from all in-sync replicas.
        $topicConf->set('request.required.acks', -1);


        $conf->setDefaultTopicConf($topicConf);

        $consumer = new \RdKafka\KafkaConsumer($conf);
        $consumer->subscribe([$this->topic]);

        while (true) {
            $message = $consumer->consume(120 * 1000);
            switch ($message->err) {
                case RD_KAFKA_RESP_ERR_NO_ERROR:
                    echo $this->topic.'-'.$message->partition.'-'.$message->offset."\n";
                    // Connect to MySQL
                    $mysqlConfig = [
                        'host'   => '127.0.0.1',
                        'port'   => '3306',
                        'user'   => 'root',
                        'passwd' => 'root',
                        'dbname' => 'kafka'
                    ];
                    $db = new MMysql($mysqlConfig);
                    $insertMsg = [
                        'payload'   => $message->payload,
                        'partition' => $message->partition,
                        'offset'    => $message->offset,
                        'len'       => $message->len,
                        'timestamp' => $message->timestamp
                    ];
                    // MMysql's insert() signature is assumed here; adapt it to your wrapper.
                    $db->insert('kafka_message', $insertMsg);
                    // enable.auto.commit is false, so commit the offset manually.
                    $consumer->commit($message);
                    break;

                case RD_KAFKA_RESP_ERR__PARTITION_EOF:
                    echo "No more messages; waiting for more\n";
                    break;

                case RD_KAFKA_RESP_ERR__TIMED_OUT:
                    echo "Timed out\n";
                    break;

                default:
                    throw new \Exception($message->errstr(), $message->err);
            }
        }
    }
}
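With the helper class in place, producing a message might look like this (a hypothetical `producer.php`; topic and key are illustrative):

```php
<?php
include_once('KafkaHelp.class.php');

$helper = new KafkaHelp();
$helper->setTopic('test9');

// The key is fed to the partitioner and steers which partition the message lands in.
$helper->produce(json_encode(['order_id' => 1001]), 'order-1001');
```

Messages with the same key hash to the same partition, which preserves their relative order for a consumer.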