Preface
Continuing from the previous post, this note records how to configure Kafka in broad strokes. It works through Kafka's quickstart guide, and I plan to mix Chinese and English (even though many tips on how to learn English well strongly advise against this), because doing so keeps the writing simple and lets me focus on the key points.
Main Text
Download the 2.1.0 release and un-tar it.
```
> tar -xzf kafka_2.11-2.1.0.tgz
> cd kafka_2.11-2.1.0
```
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
```
> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
```
ZooKeeper is needed here; this starts a single-node ZooKeeper instance. It is also a good opportunity to study this startup script and finally clear up some questions about shell that I never quite understood before.
```
if [ $# -lt 1 ];
then
	echo "USAGE: $0 [-daemon] zookeeper.properties"
	exit 1
fi
# $# is the number of arguments passed to the script; see
# https://unix.stackexchange.com/questions/122343/what-does-mean-in-shell

base_dir=$(dirname $0)
# the directory containing this script, used as the base directory

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
	export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
# set KAFKA_LOG4J_OPTS; prefixing both sides with "x" is a shell idiom for
# safely testing whether the variable is empty

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
	export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
fi
# set KAFKA_HEAP_OPTS (the JVM heap size)

EXTRA_ARGS=${EXTRA_ARGS-'-name zookeeper -loggc'}
# ${EXTRA_ARGS-default} is shell default-value expansion: it yields $EXTRA_ARGS
# if the variable is set, and '-name zookeeper -loggc' otherwise

COMMAND=$1
case $COMMAND in
	-daemon)
		EXTRA_ARGS="-daemon "$EXTRA_ARGS
		shift
		;;
	*)
		;;
esac
# if the first argument is "-daemon", fold it into EXTRA_ARGS, which is passed
# to the command below, then use shift to drop it from the argument list; "$@"
# later expands to the remaining arguments

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"
# exec replaces the current shell with kafka-run-class.sh, passing along the
# arguments assembled above
```
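To double-check the two idioms above, here is a quick bash session with throwaway values; note that `${VAR-default}` (no colon) falls back only when the variable is unset, while a set-but-empty value is kept as-is:

```
$ unset EXTRA_ARGS
$ echo ${EXTRA_ARGS-'-name zookeeper -loggc'}   # unset -> the default is used
-name zookeeper -loggc
$ EXTRA_ARGS=""
$ echo ${EXTRA_ARGS-'-name zookeeper -loggc'}   # set but empty -> empty is kept

$ set -- -daemon config/zookeeper.properties    # simulate the script's arguments
$ shift                                         # drop "-daemon"
$ echo "$@"                                     # the remaining arguments
config/zookeeper.properties
```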
Now start the Kafka server:
```
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
```
Let's create a topic named "test" with a single partition and only one replica:
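From the quickstart, the command is:

```
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
```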
We can now see that topic if we run the list topic command:
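Again from the quickstart:

```
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
```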
Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.
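The quickstart's producer session looks like this:

```
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
```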
Kafka also has a command line consumer that will dump out messages to standard output.
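From the quickstart:

```
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
```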
All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
The above is a simple startup flow without much configuration, and the language is straightforward. Next comes a multi-broker cluster setup. A Kafka server is called a broker.
Step 6: Setting up a multi-broker cluster
So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).
First we make a config file for each of the brokers (on Windows use the copy command instead):
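From the quickstart:

```
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
```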
The broker.id property must be a unique and permanent name identifying each node in the cluster.
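Accordingly, the quickstart edits the two new files to override these properties (the port and log directory must also change, because all three brokers run on the same machine):

```
config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dirs=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dirs=/tmp/kafka-logs-2
```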
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
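From the quickstart (the & runs each broker in the background):

```
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
```

Now create a new topic with a replication factor of three:

```
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
```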
Okay but now that we have a cluster how can we know which broker is doing what? To see that, run the "describe topics" command:
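From the quickstart, for the new topic:

```
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0
```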
Some explanation of the output above: the first line gives a summary of all the partitions, and each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.
- "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
- “leader”是负责对于给定的分区来进行所有的读写操作的。每一个节点都被分区的挑选出来的部分来随机地挑选成为leader。
- "replicas" is the list of nodes that the log for this partition regardless of whether they are the leader or even if they are currently alive.
- “replicas”是来记录这个分区的节点列表(备份节点),无论他们是否是leader甚至它们当前是否存活。
- "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
- “isr”是"in-sync"的replica的集合。这是replicas的子集,是那些当前或者的,并且赶上leader的replica。
Note that in my example node 1 is the leader for the only partition of the topic.
We can run the same command on the original topic we created to see where it is:
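From the quickstart (no surprise there: the original topic has no replicas and is on server 0, the only server in our cluster when we created it):

```
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0
```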
Let's publish a few messages to our new topic:
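From the quickstart, we produce a couple of messages and then read them back (Ctrl-C stops each console tool):

```
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
my test message 1
my test message 2
^C
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
my test message 1
my test message 2
^C
```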
Step 7: Use Kafka Connect to import/export data
Writing data from the console and writing it back to the console is a convenient place to start, but you'll probably want to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom integration code you can use Kafka Connect to import or export data.
Kafka Connect is a tool included with Kafka that imports and exports data to Kafka. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. In this quickstart we'll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.
First, we'll start by creating some seed data to test with:
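From the quickstart:

```
> echo -e "foo\nbar" > test.txt
```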
Or on Windows:
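From the quickstart:

```
> echo foo> test.txt
> echo bar>> test.txt
```

Next, start two connectors running in standalone mode, that is, in a single local dedicated process, passing three configuration files as parameters:

```
> bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
```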
These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated. Once the Kafka Connect process has started, the source connector should start reading lines from test.txt and producing them to the topic connect-test, and the sink connector should start reading messages from the topic connect-test and writing them to the file test.sink.txt. We can verify the data has been delivered through the entire pipeline by examining the contents of the output file:
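From the quickstart:

```
> more test.sink.txt
foo
bar
```

The data is also stored in the Kafka topic connect-test, so we can run a console consumer to see it, and then append another line to the source file:

```
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
...
> echo Another line>> test.txt
```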
You should see the line appear in the console consumer output and in the sink file.
This section covered Kafka Connect, which can import data into Kafka from other sources and export data from Kafka to other systems.
Step 8: Use Kafka Streams to process data
Kafka Streams is a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed, and much more. This quickstart example will demonstrate how to run a streaming application coded in this library.
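The full walkthrough lives in the separate Kafka Streams quickstart; as a minimal sketch, it runs the bundled WordCount demo class (assuming the input topic streams-plaintext-input has been created as that guide describes):

```
> bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
```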
This briefly describes Kafka's stream-processing capability.
Summary
This post mainly showed how to set up Kafka quickly, which is fairly simple, and along the way summarized some shell usages that I had always used with only a hazy understanding. This time I took the chance to sort them out properly rather than leave them half-baked.
References
https://kafka.apache.org/quickstart
https://unix.stackexchange.com/questions/254494/how-does-bash-differentiate-between-brace-expansion-and-command-grouping (on the use of braces in shell)
https://unix.stackexchange.com/questions/174566/what-is-the-purpose-of-using-shift-in-shell-scripts (on the use of shift)
https://unix.stackexchange.com/questions/122343/what-does-mean-in-shell (on the meaning of $#)
Reposted from: https://blog.csdn.net/chaiyu2002/article/details/86523602