Pipeline: MySQL + Maxwell + Kafka + Flink + Kafka => sink to storage (MySQL, HBase, etc.). The goal is to monitor a MySQL database for data changes (insert, delete, and update operations).

1. Install and configure Maxwell
On the server, run:
wget https://github.com/zendesk/maxwell/releases/download/v1.10.7/maxwell-1.10.7.tar.gz
2. Extract the archive
tar -zxvf maxwell-1.10.7.tar.gz
3. Configure MySQL
Edit the MySQL configuration file:
vim /etc/my.cnf

[mysqld]  # existing parameters can stay as they are; the key line to add is binlog_format
server_id=1
log-bin=master
binlog_format=row
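
After changing my.cnf, restart MySQL so the binlog settings take effect, then verify them. A quick check (the restart command assumes a systemd-managed service named mysqld; adjust for your distribution):

```bash
systemctl restart mysqld
mysql -uroot -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"
```

log_bin should report ON and binlog_format should report ROW.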

Create and authorize a new user:
GRANT ALL ON maxwell.* TO 'maxwell'@'%' IDENTIFIED BY '123456';
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell'@'%';
FLUSH PRIVILEGES;
If the second GRANT fails with "Can't find any matching row in the user table", append the password clause:
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell' IDENTIFIED BY '123456';
4. Important: you must create a metadata database (named maxwell by default) in which Maxwell stores its own metadata,
then grant the maxwell user privileges on that new database:
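Creating the database is a one-liner (assuming the default name maxwell):

```sql
CREATE DATABASE maxwell;
```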
GRANT ALL ON maxwell.* TO 'maxwell'@'%' IDENTIFIED BY '123456';
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'maxwell'@'%' IDENTIFIED BY '123456';
Omitting the IDENTIFIED BY clause can fail with the same error as in step 3, and is fixed the same way; skipping this step entirely causes errors later.
FLUSH PRIVILEGES;

5. Start-up test
Parameter overview:
--user       MySQL user name
--password   MySQL password
--host       MySQL server address
--producer   where to send the data (stdout prints to the console; kafka writes to the corresponding Kafka topic)
--kafka.bootstrap.servers   Kafka broker address
--kafka_topic   which Kafka topic to write to
--daemon     run as a daemon in the background
Start-up test (print to the console):
./maxwell --user='maxwell' --password='123456' --host='192.168.2.9' --producer=stdout
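
To verify, insert or update a row in any database on the monitored server; Maxwell prints one JSON event per change to the console. A representative event (database, table, and column names here are placeholders) looks like:

```json
{"database":"test","table":"users","type":"insert","ts":1477053217,"xid":23396,"commit":true,"data":{"id":1,"name":"foo"}}
```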
6. Output to Kafka (single node)
In Kafka's bin directory, create a topic to collect the data Maxwell captures:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 6 --topic maxwell
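
Optionally confirm the topic was created (same ZooKeeper address as the create command above):

```bash
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic maxwell
```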
Start Maxwell:
./maxwell --user='maxwell' --password='123456' --producer=kafka --host='' --port='' --kafka.bootstrap.servers=192.168.2.29:9092 --kafka_topic=maxwell --daemon
(--daemon runs it as a background daemon)
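
Alternatively, the same options can live in a config.properties file in the Maxwell directory, so the command line reduces to ./maxwell --daemon. A sketch, assuming the same values as above:

```properties
user=maxwell
password=123456
host=192.168.2.9
producer=kafka
kafka.bootstrap.servers=192.168.2.29:9092
kafka_topic=maxwell
```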
Then run jps to check whether it started; a Maxwell process in the output means success.
7. Start a consumer to view the data flowing into the maxwell topic
Seeing JSON strings in the format shown in step 5 means the pipeline works; the data key of each event holds the newly inserted or updated row.
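
A minimal way to watch the topic is Kafka's console consumer (the --bootstrap-server flag assumes a 0.10+ broker; on older versions use --zookeeper instead):

```bash
./kafka-console-consumer.sh --bootstrap-server 192.168.2.29:9092 --topic maxwell --from-beginning
```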
8. Consume the Maxwell data with Flink and clean the JSON
Add the Maven dependencies:

```xml
<properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <flink.version>1.9.0</flink.version>
        <scala.binary.version>2.11</scala.binary.version>
    </properties>

    <dependencies>

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>18.0</version>
        </dependency>


        <!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.27</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
            <exclusions>
                <exclusion>
                    <artifactId>httpclient</artifactId>
                    <groupId>org.apache.httpcomponents</groupId>
                </exclusion>
                <exclusion>
                    <artifactId>jets3t</artifactId>
                    <groupId>net.java.dev.jets3t</groupId>
                </exclusion>

            </exclusions>

        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>

        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>

        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.0</version>

        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-filesystem_2.11</artifactId>
            <version>1.6.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>com.aliyun.openservices</groupId>
            <artifactId>flink-log-connector</artifactId>
            <version>0.1.12</version>
        </dependency>

        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>2.5.0</version>
        </dependency>
        <dependency>
            <groupId>com.aliyun.openservices</groupId>
            <artifactId>aliyun-log</artifactId>
            <version>0.6.32</version>

        </dependency>
        <!--        <dependency>-->
        <!--            <groupId>com.aliyun.openservices</groupId>-->
        <!--            <artifactId>log-loghub-producer</artifactId>-->
        <!--            <version>0.1.8</version>-->
        <!--        </dependency>-->

        <!-- https://mvnrepository.com/artifact/com.alibaba/fastjson -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.54</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-metrics-dropwizard</artifactId>
            <version>${flink.version}</version>

        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-core</artifactId>
            <version>${flink.version}</version>


        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>


        </dependency>


        <!-- Apache Flink dependencies -->
        <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>

        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>

        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.7</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>com.aliyun.oss</groupId>
            <artifactId>aliyun-sdk-oss</artifactId>
            <version>3.5.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-scala -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>


    </dependencies>
```
The implementation code is as follows:


```scala
import java.util.Properties

import com.alibaba.fastjson.{JSON, JSONObject}
import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.streaming.api.{CheckpointingMode, TimeCharacteristic}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer011, FlinkKafkaProducer011}
import org.apache.flink.streaming.util.serialization.SimpleStringSchema




object cateNamesPrefer extends Serializable {

  // jzb Kafka broker address (placeholder ip:port)
  private val KAFKA_BROKER_JZB = "ip:port"
  // consumer group id
  private val group_id = "cateNamesPrefer"

  def main(args: Array[String]): Unit = {
    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // optional Flink settings (event time and checkpointing, currently disabled)
    //env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    //env.enableCheckpointing(1000)
    //env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
    env.setRestartStrategy(RestartStrategies.fixedDelayRestart(5, 5000)) // restart strategy: up to 5 attempts, 5 s apart
    // configure the Kafka consumer properties
    val properties = new Properties()

    properties.setProperty("bootstrap.servers",KAFKA_BROKER_JZB)

    properties.setProperty("group.id",group_id)

    properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    // read the change data Maxwell captured from the jzb database and turn it into a DataStream
    // (the topic name below is from the author's environment; it is "maxwell" in the steps above)
    val sourcestream: FlinkKafkaConsumer011[String] = new FlinkKafkaConsumer011[String]("Flink_jt_Consumer", new SimpleStringSchema(), properties)

    val env_source: DataStream[String] = env.addSource(sourcestream).map(source => {
      val maxwell_json: JSONObject = JSON.parseObject(source)
      if (maxwell_json.getString("database") == "eduu_analysis" && maxwell_json.getString("table") == "dm_cateNamesPrefer") {
        maxwell_json.getString("data") // keep only the changed row
      } else {
        null
      }
    }).filter(_ != null) // drop events from other databases/tables instead of forwarding nulls

    // sink to the staging environment
    //val kafka_sink = new FlinkKafkaProducer011[String](KAFKA_BROKER_JZB,"community.content.user.preference",new SimpleStringSchema())

    // sink to the test environment
    val kafka_sink = new FlinkKafkaProducer011[String](KAFKA_BROKER_JZB,"Flink_jt",new SimpleStringSchema())

    env_source.addSink(kafka_sink).name("jzb_user_cateNamesPrefer")


    env.execute("jzb_shuju_cateNamesPrefer")
  }
}
```


The result: the latest data processed by Flink is re-published to Kafka, with only the value of the data key retained from each Maxwell event.
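
For the final sink-to-storage step from the title (MySQL, HBase, etc.), a downstream job can consume the cleaned topic and write each record out. Below is a minimal sketch of a JDBC sink, not the author's implementation: the target table target_table, its payload column, and the connection details are all hypothetical placeholders.

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// One JDBC connection per parallel sink instance; each record is inserted as-is.
class MysqlSink extends RichSinkFunction[String] {
  private var conn: Connection = _
  private var stmt: PreparedStatement = _

  override def open(parameters: Configuration): Unit = {
    // hypothetical connection details and target table
    conn = DriverManager.getConnection("jdbc:mysql://ip:3306/eduu_analysis", "user", "password")
    stmt = conn.prepareStatement("INSERT INTO target_table (payload) VALUES (?)")
  }

  override def invoke(value: String): Unit = {
    stmt.setString(1, value)
    stmt.executeUpdate()
  }

  override def close(): Unit = {
    if (stmt != null) stmt.close()
    if (conn != null) conn.close()
  }
}

// usage: env_source.addSink(new MysqlSink()).name("mysql_sink")
```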
 