Connecting ClickHouse to a SASL_SSL-Authenticated Kafka

1. Configuring SASL_SSL authentication on Kafka

SSL was configured previously; SASL_SSL is set up on top of that configuration:

Connecting ClickHouse to an SSL-Authenticated Kafka (万山数据_Spring's CSDN blog)

Create a sasl_ssl directory under the config directory, copy server.properties into it as server_sasl_ssl.properties, and modify the file:

listeners=PLAINTEXT://0.0.0.0:9092,SASL_SSL://:9093
advertised.listeners=PLAINTEXT://172.29.128.71:9092,SASL_SSL://172.29.128.71:9093
ssl.truststore.location=/usr/local/soft/kafka_2.12-3.2.3/config/ssl2/server.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/usr/local/soft/kafka_2.12-3.2.3/config/ssl2/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
ssl.endpoint.identification.algorithm=
sasl.enabled.mechanisms=PLAIN
sasl.mechanism=PLAIN
listener.name.sasl_ssl.ssl.client.auth=required
 

Add client.properties:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/usr/local/soft/kafka_2.12-3.2.3/config/ssl2/server.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/usr/local/soft/kafka_2.12-3.2.3/config/ssl2/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
ssl.endpoint.identification.algorithm=
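If these client settings are needed outside the bundled Kafka CLI tools (for example in a script), the file is in the Java-style `key=value` properties format. A minimal sketch of loading it into a dict, for illustration only (the simplistic parser below is not part of the setup):

```python
# Minimal sketch: load simple key=value lines from a Java-style
# .properties file (comments and blank lines are skipped; no escape
# handling). partition("=") keeps any '=' inside the value intact.
def load_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

example = """\
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.password=123456
"""
conf = load_properties(example)
print(conf["security.protocol"])  # SASL_SSL
```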

Add the credential file kafka_server_jaas.conf:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
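With the PLAIN mechanism, `username`/`password` are the credentials the broker itself uses for inter-broker connections, while each `user_<name>="<password>"` entry defines a client account. As an illustration only (not part of the setup), a short Python sketch of how those entries map to a username→password table:

```python
import re

# Illustration: extract the user_<name>="<password>" credential entries
# from a PLAIN JAAS config such as kafka_server_jaas.conf above.
def plain_users(jaas_text):
    return dict(re.findall(r'user_(\w+)="([^"]*)"', jaas_text))

jaas = '''KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};'''
print(plain_users(jaas))  # {'admin': 'admin-secret', 'alice': 'alice-secret'}
```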

Copy the scripts:

cp kafka-topics.sh kafka-topics-sasl.sh
cp kafka-console-producer.sh kafka-console-producer-sasl.sh
cp kafka-server-start.sh kafka-server-start-sasl.sh
cp kafka-console-consumer.sh kafka-console-consumer-sasl.sh

In each of the copied *-sasl.sh scripts, add the following as the second-to-last line:

export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/soft/kafka_2.12-3.2.3/config/sasl_ssl/kafka_server_jaas.conf"

Start the services

# start ZooKeeper

./bin/zookeeper-server-start.sh ./config/zookeeper.properties

# start Kafka

./bin/kafka-server-start-sasl.sh ./config/sasl_ssl/server_sasl_ssl.properties

Create a topic:

  ./bin/kafka-topics-sasl.sh --create --topic sasl_ssl --partitions 1 --replication-factor 1 --bootstrap-server localhost:9093 --command-config ./config/sasl

Start a producer and a consumer:

  ./bin/kafka-console-producer-sasl.sh --broker-list 172.29.128.71:9093 --topic sasl_ssl --producer.config ./config/sasl_ssl/client.properties

 

 ./bin/kafka-console-consumer-sasl.sh --bootstrap-server 172.29.128.71:9093 --topic sasl_ssl --from-beginning --consumer.config ./config/sasl_ssl/client.properties 

2. Configuring ClickHouse to connect to Kafka

As before, modify metrika.xml under /etc/clickhouse-server/config.d.

vi metrika.xml and change it to:

 <yandex>
    <kafka>
        <max_poll_interval_ms>60000</max_poll_interval_ms>
        <session_timeout_ms>60000</session_timeout_ms>
        <heartbeat_interval_ms>10000</heartbeat_interval_ms>
        <reconnect_backoff_ms>5000</reconnect_backoff_ms>
        <reconnect_backoff_max_ms>60000</reconnect_backoff_max_ms>
        <request_timeout_ms>20000</request_timeout_ms>
        <retry_backoff_ms>500</retry_backoff_ms>
        <message_max_bytes>20971520</message_max_bytes>
        <debug>all</debug><!-- only to get the errors -->
        <security_protocol>SASL_SSL</security_protocol>
        <sasl_mechanism>PLAIN</sasl_mechanism>
        <sasl_username>admin</sasl_username>
        <sasl_password>admin-secret</sasl_password>
        <ssl_ca_location>/etc/clickhouse-server/ssl2/server.crt</ssl_ca_location>
        <ssl_certificate_location>/etc/clickhouse-server/ssl2/server.pem</ssl_certificate_location>
        <ssl_key_location>/etc/clickhouse-server/ssl2/server.key</ssl_key_location>
        <ssl_key_password>123456</ssl_key_password>
    </kafka>
</yandex>
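ClickHouse passes every child element of `<kafka>` to the underlying librdkafka client, translating element names by replacing underscores with dots (XML element names cannot contain dots), so `security_protocol` above becomes librdkafka's `security.protocol`. A small sketch of that naming rule:

```python
# ClickHouse forwards <kafka> settings to librdkafka; the XML element
# names use underscores where librdkafka option names use dots.
elements = ["security_protocol", "sasl_mechanism",
            "ssl_ca_location", "session_timeout_ms"]
rdkafka = {name: name.replace("_", ".") for name in elements}
print(rdkafka["security_protocol"])  # security.protocol
```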

Restart ClickHouse:

systemctl restart clickhouse-server.service

Create the Kafka engine table:

CREATE TABLE kafka_test.log_kafka
(
    `CONTENT` String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = '172.29.128.71:9093',
         kafka_topic_list = 'sasl_ssl',
         kafka_group_name = 'consumer-group1',
         kafka_format = 'TabSeparated',
         kafka_num_consumers = 1;

Create the materialized view:

CREATE MATERIALIZED VIEW kafka_test.log_content
(
    `CONTENT` Nullable(String),
    `addTime` DateTime
)
ENGINE = MergeTree
ORDER BY addTime
SETTINGS index_granularity = 8192 AS
SELECT
    CONTENT,
    now() AS addTime
FROM kafka_test.log_kafka;
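Once the producer sends messages to the `sasl_ssl` topic, the materialized view continuously drains the Kafka engine table into `log_content`. A quick check that data is flowing (assuming some messages have already been produced):

```sql
SELECT CONTENT, addTime
FROM kafka_test.log_content
ORDER BY addTime DESC
LIMIT 10;
```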
