Pulling MySQL data into a Kerberos-authenticated Kafka cluster with Logstash 6.5.4

Requirement: pull data from MySQL into a Kafka cluster that uses Kerberos authentication, generating an id field and renaming the remaining fields along the way.
1. Create the topic
kafka-topics --create --zookeeper node96:2181/kafka1 --replication-factor 2 --partitions 3 --topic test_task
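To confirm the topic was created with the expected partition and replica counts, you can describe it against the same ZooKeeper chroot:
kafka-topics --describe --zookeeper node96:2181/kafka1 --topic test_task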
2. Download Logstash and prepare the required JAR and authentication files
2.1 Download the Logstash package
cd /opt/
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
tar xf logstash-6.5.4.tar.gz
mv logstash-6.5.4 logstash
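As a quick sanity check that the unpack worked, print the version:
/opt/logstash/bin/logstash --version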
2.2 Download the mysql-connector-java JAR
In my case, I downloaded mysql-connector-java-5.1.47.jar.
mkdir -pv /home/logstash/bin/mysql
Place mysql-connector-java-5.1.47.jar in the /home/logstash/bin/mysql directory.
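If you still need the JAR, one way to fetch it is straight from Maven Central:
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.47/mysql-connector-java-5.1.47.jar -P /home/logstash/bin/mysql/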
2.3 Prepare the Kafka authentication files
JAAS file: /etc/kafka/conf/kafka_sink_jaas.conf
krb5 file: /etc/krb5.conf
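For reference, a client JAAS file for Kerberos typically contains a single KafkaClient section like the sketch below; the keytab path and principal are placeholders I made up, not values from this setup:
KafkaClient {
  // keytab and principal below are placeholders; substitute your own
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/kafka/conf/logstash.keytab"
  principal="logstash@EXAMPLE.COM";
};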
3. Add the kafka.conf configuration file
MySQL table name: mysql_test
Kafka topic name: test_task
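For context, here is a minimal sketch of the two source tables. The column types are assumptions inferred from the SELECT statement in the config below, not the original DDL:
CREATE TABLE mysql_test (
  id              BIGINT PRIMARY KEY,  -- exported as dfid
  pripid          VARCHAR(64),         -- join key to mysql_test_task
  altdate         DATETIME,            -- change date
  altitem         VARCHAR(255),        -- changed item
  altbe           TEXT,                -- content before the change
  altaf           TEXT,                -- content after the change
  recid           VARCHAR(64),
  openo           VARCHAR(64),
  alttime         DECIMAL(10,2),
  remark          VARCHAR(255),
  s_ext_timestamp DATETIME,
  dataflag        VARCHAR(8),
  cxstatus        VARCHAR(8)
);
CREATE TABLE mysql_test_task (
  pripid  VARCHAR(64),
  entname VARCHAR(255)                 -- used when deriving the MD5 id
);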
Change into the logstash directory:
cd logstash
cat kafka.conf
The content of kafka.conf is as follows:

input {
  jdbc {
    # MySQL connection settings
    jdbc_connection_string => "jdbc:mysql://15.208.17.110:3306/demo"
    jdbc_user => "root"
    jdbc_password => "111111"
    jdbc_driver_library => "/home/logstash/bin/mysql/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # Page through the result set 1000 rows at a time
    jdbc_paging_enabled => "true"
    jdbc_page_size => "1000"
    # Format the dates, round alttime, keep the source id as dfid, and
    # derive a stable id from MD5(entname + altitem + altdate)
    statement => "select DATE_FORMAT(q.altdate,'%Y-%m-%d') as altdate,q.altitem,q.altbe,q.altaf,q.recid,q.openo,q.pripid,round(q.alttime) as alttime,q.remark,DATE_FORMAT(q.s_ext_timestamp,'%Y-%m-%d') as s_ext_timestamp,q.dataflag,q.cxstatus,q.id as dfid,MD5(concat(d.entname,q.altitem,q.altdate)) as id from mysql_test q left join mysql_test_task d on q.pripid=d.pripid"
  }
}


filter {
  mutate {
    # Drop Logstash metadata and unwanted fields before publishing
    remove_field => ["@timestamp", "@version", "gather_time", "publish_time"]
  }
  mutate {
    # Rename the MySQL column names to the target camelCase field names
    rename => {
      "altdate"         => "changeTime"
      "altitem"         => "changeItem"
      "altbe"           => "contentBefore"
      "altaf"           => "contentAfter"
      "recid"           => "recordNo"
      "openo"           => "businessNo"
      "pripid"          => "pripId"
      "alttime"         => "alterTimeNum"
      "remark"          => "remark"
      "s_ext_timestamp" => "extDate"
      "dataflag"        => "dataFlag"
      "cxstatus"        => "cxStatus"
      "dfid"            => "dfId"
    }
  }
}

output {
  kafka {
    topic_id => "test_task"
    bootstrap_servers => "15.208.17.100:9092,15.208.17.101:9092,15.208.17.102:9092"
    # Kerberos (SASL/GSSAPI) settings, using the files prepared in step 2.3
    security_protocol => "SASL_PLAINTEXT"
    jaas_path => "/etc/kafka/conf/kafka_sink_jaas.conf"
    kerberos_config => "/etc/krb5.conf"
    sasl_kerberos_service_name => "kafka"
    compression_type => "none"
    acks => "1"
    codec => json_lines
  }
}
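The MD5(concat(...)) expression in the statement gives every change record a deterministic id: the same entname/altitem/altdate combination always hashes to the same value, so re-running the pipeline does not mint new ids for old rows. A quick check in MySQL (the input values here are made up):
select MD5(concat('ACME Ltd','name change','2020-01-01')) as id;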
4. Start Logstash
bin/logstash -f kafka.conf
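To verify that records are arriving, you can consume the topic with the same Kerberos material. A minimal sketch, assuming a stock Kafka CLI on a host with valid credentials (client.properties is a file created here, not part of the original setup):
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_sink_jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"
printf 'security.protocol=SASL_PLAINTEXT\nsasl.kerberos.service.name=kafka\n' > client.properties
kafka-console-consumer --bootstrap-server 15.208.17.100:9092 --topic test_task --from-beginning --consumer.config client.properties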