Kafka Study Notes 2: Using Python to Work with Kafka

1. Preparation

The most commonly used library for working with Kafka from Python is kafka-python, but installing it requires the setuptools and six libraries as dependencies. The following sections cover downloading each of them.

1) Download setuptools

Opening the URL brings up a download dialog similar to the one below. Choose to save the file and click OK to download setuptools-0.6c11-py2.6.egg.


2) Download kafka-python

Open http://pypi.python.org, type kafka-python into the search box, and click [search] to reach the page shown below. The page lists the supported Python versions, but in my testing this package also runs fine under Python 2.6.6.

Click Download to open the following page.

Select kafka-python-1.3.5.tar.gz (md5) to start the download.

3) Download six

Open http://pypi.python.org, type six into the search box, and click [search] to reach the page shown below.

Open six 1.11.0.

Click the link in the red box to download six-1.11.0.tar.gz.

2. Installing the Python Libraries

In the previous step we downloaded the required packages. Now install them: first create the directory /opt/package/python_lib, then upload the package files there.


1) Install setuptools

Run sh setuptools-0.6c11-py2.6.egg

The output looks like this:


setuptools is now installed.
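To confirm the install worked, a one-line check from the Python interpreter is enough. This is just a sketch: any version string printed means setuptools is on the path (note that on the very old 0.6c11 release, pkg_resources is the reliable way to query the version).

```python
# Sanity check: setuptools ships pkg_resources, which can report
# the installed version of any distribution, including setuptools itself.
import pkg_resources

version = pkg_resources.get_distribution("setuptools").version
print("setuptools", version)
```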

2) Install six

(1) Extract

Run tar -zxvf six-1.11.0.tar.gz

Extraction produces a six-1.11.0 directory.

(2) Install

cd six-1.11.0

ll

Then run python setup.py install
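Once installed, six can be verified from the interpreter and used for Python 2/3-compatible type checks. A minimal sketch, assuming the install above succeeded:

```python
# six smooths over Python 2/3 differences; string_types is the classic example:
# it is (str, unicode) on Python 2 and (str,) on Python 3.
import six

print("six", six.__version__)
# This isinstance check works unchanged on both Python 2 and Python 3.
is_text = isinstance("hello", six.string_types)
print("is string:", is_text)
```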


3) Install kafka-python

Run tar -zxvf kafka-python-1.3.4.tar.gz to extract the package, which produces a kafka-python-1.3.4 directory; change into that directory

and run python setup.py install:

[root@node2 kafka-python-1.3.4]# python setup.py install
running install
running bdist_egg
running egg_info
creating kafka_python.egg-info
writing kafka_python.egg-info/PKG-INFO
writing top-level names to kafka_python.egg-info/top_level.txt
writing dependency_links to kafka_python.egg-info/dependency_links.txt
writing manifest file 'kafka_python.egg-info/SOURCES.txt'
reading manifest file 'kafka_python.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'kafka_python.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/kafka
copying kafka/future.py -> build/lib/kafka
copying kafka/client_async.py -> build/lib/kafka
copying kafka/errors.py -> build/lib/kafka
copying kafka/__init__.py -> build/lib/kafka
copying kafka/structs.py -> build/lib/kafka
copying kafka/context.py -> build/lib/kafka
copying kafka/cluster.py -> build/lib/kafka
copying kafka/conn.py -> build/lib/kafka
copying kafka/version.py -> build/lib/kafka
copying kafka/client.py -> build/lib/kafka
copying kafka/codec.py -> build/lib/kafka
copying kafka/util.py -> build/lib/kafka
copying kafka/common.py -> build/lib/kafka
creating build/lib/kafka/serializer
copying kafka/serializer/__init__.py -> build/lib/kafka/serializer
copying kafka/serializer/abstract.py -> build/lib/kafka/serializer
creating build/lib/kafka/partitioner
copying kafka/partitioner/hashed.py -> build/lib/kafka/partitioner
copying kafka/partitioner/roundrobin.py -> build/lib/kafka/partitioner
copying kafka/partitioner/__init__.py -> build/lib/kafka/partitioner
copying kafka/partitioner/base.py -> build/lib/kafka/partitioner
copying kafka/partitioner/default.py -> build/lib/kafka/partitioner
creating build/lib/kafka/consumer
copying kafka/consumer/__init__.py -> build/lib/kafka/consumer
copying kafka/consumer/base.py -> build/lib/kafka/consumer
copying kafka/consumer/group.py -> build/lib/kafka/consumer
copying kafka/consumer/simple.py -> build/lib/kafka/consumer
copying kafka/consumer/subscription_state.py -> build/lib/kafka/consumer
copying kafka/consumer/fetcher.py -> build/lib/kafka/consumer
copying kafka/consumer/multiprocess.py -> build/lib/kafka/consumer
creating build/lib/kafka/producer
copying kafka/producer/future.py -> build/lib/kafka/producer
copying kafka/producer/__init__.py -> build/lib/kafka/producer
copying kafka/producer/buffer.py -> build/lib/kafka/producer
copying kafka/producer/base.py -> build/lib/kafka/producer
copying kafka/producer/record_accumulator.py -> build/lib/kafka/producer
copying kafka/producer/simple.py -> build/lib/kafka/producer
copying kafka/producer/kafka.py -> build/lib/kafka/producer
copying kafka/producer/sender.py -> build/lib/kafka/producer
copying kafka/producer/keyed.py -> build/lib/kafka/producer
creating build/lib/kafka/vendor
copying kafka/vendor/socketpair.py -> build/lib/kafka/vendor
copying kafka/vendor/__init__.py -> build/lib/kafka/vendor
copying kafka/vendor/six.py -> build/lib/kafka/vendor
copying kafka/vendor/selectors34.py -> build/lib/kafka/vendor
creating build/lib/kafka/protocol
copying kafka/protocol/legacy.py -> build/lib/kafka/protocol
copying kafka/protocol/pickle.py -> build/lib/kafka/protocol
copying kafka/protocol/admin.py -> build/lib/kafka/protocol
copying kafka/protocol/struct.py -> build/lib/kafka/protocol
copying kafka/protocol/message.py -> build/lib/kafka/protocol
copying kafka/protocol/__init__.py -> build/lib/kafka/protocol
copying kafka/protocol/offset.py -> build/lib/kafka/protocol
copying kafka/protocol/metadata.py -> build/lib/kafka/protocol
copying kafka/protocol/fetch.py -> build/lib/kafka/protocol
copying kafka/protocol/commit.py -> build/lib/kafka/protocol
copying kafka/protocol/group.py -> build/lib/kafka/protocol
copying kafka/protocol/abstract.py -> build/lib/kafka/protocol
copying kafka/protocol/produce.py -> build/lib/kafka/protocol
copying kafka/protocol/api.py -> build/lib/kafka/protocol
copying kafka/protocol/types.py -> build/lib/kafka/protocol
creating build/lib/kafka/metrics
copying kafka/metrics/quota.py -> build/lib/kafka/metrics
copying kafka/metrics/kafka_metric.py -> build/lib/kafka/metrics
copying kafka/metrics/measurable.py -> build/lib/kafka/metrics
copying kafka/metrics/…
(remaining install output truncated)
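With kafka-python installed, the library can be exercised with a short producer sketch. This is a minimal, hypothetical example: the broker address node2:9092 and topic name test are assumptions, not values from the install above. The JSON helper works standalone; send_messages additionally requires a running Kafka broker.

```python
import json


def serialize(value):
    """Encode a Python dict as UTF-8 JSON bytes, the form Kafka messages take on the wire."""
    return json.dumps(value).encode("utf-8")


def send_messages(bootstrap="node2:9092", topic="test"):
    # Deferred import so serialize() above is usable even without kafka-python.
    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers=bootstrap,
                             value_serializer=serialize)
    for i in range(3):
        producer.send(topic, {"count": i})
    producer.flush()   # block until all buffered messages are sent
    producer.close()
```

A matching consumer would create KafkaConsumer(topic, bootstrap_servers=bootstrap) and iterate over it to receive the messages.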

