Installing and running Kafka, and connecting it to Spring Boot
Installing and running Kafka
1. Standard installation: installing Kafka in a virtual machine
Download Kafka and transfer the package to the virtual machine with a file-transfer tool
(on Windows you can use MobaXterm; on macOS, Royal TSX):
http://kafka.apache.org/downloads
ZooKeeper (and Kafka itself) needs a JDK to run, so download a JDK as well; the required JDK version is listed here:
http://kafka.apache.org/26/documentation.html#java
2. Extract the Kafka and JDK packages and install them
tar zxvf /root/kafka/kafka_2.12-2.7.0.tgz
mv kafka_2.12-2.7.0 /home/hby/kafka
tar zxvf /root/jdk-11.0.10_linux-x64_bin.tar.gz
mv jdk-11.0.10_linux-x64_bin.tar.gz /home/hby/jdk
vim /etc/profile
source /etc/profile
ln -s /home/hby/jdk/jdk-11.0.10_linux-x64_bin/jdk-11.0.10/bin/java /usr/bin/java
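The step above edits /etc/profile but the original notes do not show what was added. Assuming the JDK ended up at the path used in the symlink, the added lines would look roughly like this (a sketch; adjust the path to your own layout):
export JAVA_HOME=/home/hby/jdk/jdk-11.0.10_linux-x64_bin/jdk-11.0.10
export PATH=$JAVA_HOME/bin:$PATH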
3. Edit Kafka's configuration file (config/server.properties)
broker.id=0 # each broker needs a unique id
# the virtual machine's IP address
listeners=PLAINTEXT://172.18.2.251:9092
# change where the message data (log segments) is stored
log.dirs=/home/hby/kafka/data
zookeeper.connect=localhost:2181
4. From the Kafka installation directory you can start and stop ZooKeeper and Kafka. Start them in the background with -daemon; if you start them in the foreground, they exit as soon as you press Ctrl+C.
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/zookeeper-server-stop.sh
bin/kafka-server-start.sh -daemon config/server.properties
bin/kafka-server-stop.sh
If Kafka refuses to shut down (based on other authors' articles):
bin/kafka-server-stop.sh sends the SIGTERM signal by default, and SIGTERM does not guarantee that the process terminates. Edit kafka-server-stop.sh, comment out the original kill line, and add a new line kill -s KILL $PIDS.
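Inside the script the change looks roughly like this (the exact original line varies between Kafka versions, so treat it as illustrative):
# kill -s $SIGNAL $PIDS
kill -s KILL $PIDS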
5. You can then see the running processes with jps:
[root@CentOShby config]# jps
56528 Kafka
57124 QuorumPeerMain
61878 ConsoleConsumer
122703 Jps
6. Create a topic
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic <topic-name> --partitions 2 --replication-factor 1
--zookeeper: the address of the ZooKeeper service Kafka is connected to
--topic: the name of the topic to create
--partitions: the number of partitions
--replication-factor: the replication factor
--create: the action flag that creates the topic
bin/kafka-topics.sh --zookeeper 172.18.2.251:2181 --create --topic hby --partitions 2 --replication-factor 1
View the topics:
bin/kafka-topics.sh --zookeeper 172.18.2.251:2181 --list
View a topic's details:
bin/kafka-topics.sh --zookeeper 172.18.2.251:2181 --describe --topic hby
Start a consumer:
bin/kafka-console-consumer.sh --bootstrap-server 172.18.2.251:9092 --topic hby
In a new terminal, start a producer:
bin/kafka-console-producer.sh --broker-list 172.18.2.251:9092 --topic hby
Installing Kafka with Docker
1. Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
2. Check that docker-compose installed correctly
docker-compose -v
3. docker-compose can start several containers from a single .yml file, so ZooKeeper and Kafka can be brought up with one command:
cd docker_kafka
vim docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper          ## image
    ports:
      - "2181:2181"                        ## port exposed to the host
  kafka:
    image: wurstmeister/kafka              ## image
    volumes:
      - /etc/localtime:/etc/localtime      ## mount so the container's clock matches the host
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.18.2.251   ## change this to the host machine's IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181    ## Kafka depends on ZooKeeper
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: 120
      KAFKA_MESSAGE_MAX_BYTES: 10000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 10000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DELETE_RETENTION_MS: 1000
4. Start Kafka
docker-compose up -d
5. View the containers
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a62bb26dcc39 wurstmeister/kafka "start-kafka.sh" 45 minutes ago Up 45 minutes 0.0.0.0:9092->9092/tcp docker_kafka_kafka_1
73f0a448af12 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 45 minutes ago Up 45 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp docker_kafka_zookeeper_1
6. Create a producer and a consumer
Producer:
docker exec -it a62bb26dcc39 bash
bash-4.4# kafka-topics.sh --create --zookeeper 172.18.2.251:2181 --replication-factor 1 --partitions 1 --topic my_log
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic my_log.
bash-4.4# kafka-topics.sh --list --zookeeper 172.18.2.251:2181
my_log
test
bash-4.4# kafka-console-producer.sh --broker-list 172.18.2.251:9092 --topic my_log
>abs
>
>hby
Consumer:
[hby@CentOShby /]$ docker exec -it a62bb26dcc39 bash
bash-4.4# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my_log
abs
hello,kafka
hello,kafka
hello,kafka
Connecting to Spring Boot
1. Add the dependency
<dependency>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
</dependency>
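The spring-kafka dependency is not actually used by the plain kafka-clients code below. For completeness, here is a minimal sketch (not from the original notes) of how the same send/receive could be done the Spring way. It assumes spring.kafka.bootstrap-servers=172.18.2.251:9092 is set in application.properties and reuses the my_log topic and group.demo group id from the code below; the class name is made up for illustration.
package com.hby.demo.chapter1;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class SpringKafkaDemo {
    private final KafkaTemplate<String, String> kafkaTemplate;

    public SpringKafkaDemo(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // send a message to the my_log topic
    public void send(String message) {
        kafkaTemplate.send("my_log", message);
    }

    // receive messages from the same topic
    @KafkaListener(topics = "my_log", groupId = "group.demo")
    public void listen(String message) {
        System.out.println("received: " + message);
    }
}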
2. Create a Consumer and a Producer
package com.hby.demo.chapter1;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class Consumer {
    private static final String brokerList = "172.18.2.251:9092";
    private static final String topic = "my_log";
    private static final String groupId = "group.demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        // set the key and value deserializers
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // broker address and consumer group
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList(topic));
        while (true) {
            // poll for up to 10 seconds, then print every received value
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(10000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
Producer
package com.hby.demo.chapter1;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class Producer {
    private static final String brokerList = "172.18.2.251:9092";
    private static final String topic = "my_log";

    public static void main(String[] args) {
        Properties properties = new Properties();
        // set the key serializer
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // set the number of retries
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);
        // set the value serializer
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // set the broker (cluster) address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        // register the interceptor defined in section 4 below
        properties.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, ProducerInterceptorPrefix.class.getName());
        properties.put(ProducerConfig.ACKS_CONFIG, "0");
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, "kafka-demo", "hello,HBY");
        try {
            /* synchronous send:
            Future<RecordMetadata> send = producer.send(record);
            RecordMetadata recordMetadata = send.get();
            System.out.println("topic:" + recordMetadata.topic());
            System.out.println("partition:" + recordMetadata.partition());
            System.out.println("offset:" + recordMetadata.offset()); */
            // asynchronous send with a callback
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null) {
                        System.out.println("topic:" + recordMetadata.topic());
                        System.out.println("partition:" + recordMetadata.partition());
                        System.out.println("offset:" + recordMetadata.offset());
                    }
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        producer.close();
    }
}
3. A custom producer with a custom value serializer
Create a Company class:
import lombok.Builder;
import lombok.Data;
@Data
@Builder
public class Company {
String name;
String address;
}
package com.hby.demo.chapter1;

import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;
import java.util.concurrent.Future;

public class CompanyProducer {
    private static final String brokerList = "172.18.2.251:9092";
    private static final String topic = "my_log";

    public static void main(String[] args) {
        Properties properties = new Properties();
        // set the key serializer
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // set the number of retries
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);
        // set the value serializer to the custom Company serializer
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, CompanySerializer.class.getName());
        // set the broker (cluster) address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        KafkaProducer<String, Company> producer = new KafkaProducer<>(properties);
        Company company = Company.builder().address("kafks").name("长春").build();
        ProducerRecord<String, Company> record = new ProducerRecord<>(topic, company);
        try {
            // synchronous send: block until the metadata comes back
            Future<RecordMetadata> send = producer.send(record);
            RecordMetadata recordMetadata = send.get();
            System.out.println("topic:" + recordMetadata.topic());
            System.out.println("partition:" + recordMetadata.partition());
            System.out.println("offset:" + recordMetadata.offset());
            // asynchronous send:
            /* producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e == null) {
                        System.out.println("topic:" + recordMetadata.topic());
                        System.out.println("partition:" + recordMetadata.partition());
                        System.out.println("offset:" + recordMetadata.offset());
                    }
                }
            }); */
        } catch (Exception e) {
            e.printStackTrace();
        }
        producer.close();
    }
}
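The CompanySerializer registered above is not shown in the original notes. Below is a minimal sketch of what such a value serializer could look like, implementing Kafka's Serializer interface; the class name and the length-prefixed byte layout are my own choices, not necessarily what the original author used.
package com.hby.demo.chapter1;

import org.apache.kafka.common.serialization.Serializer;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompanySerializer implements Serializer<Company> {
    @Override
    public byte[] serialize(String topic, Company data) {
        if (data == null) {
            return null;
        }
        byte[] name = data.getName() == null ? new byte[0] : data.getName().getBytes(StandardCharsets.UTF_8);
        byte[] address = data.getAddress() == null ? new byte[0] : data.getAddress().getBytes(StandardCharsets.UTF_8);
        // write each field as a 4-byte length followed by its UTF-8 bytes
        ByteBuffer buffer = ByteBuffer.allocate(4 + name.length + 4 + address.length);
        buffer.putInt(name.length);
        buffer.put(name);
        buffer.putInt(address.length);
        buffer.put(address);
        return buffer.array();
    }
}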
4. Add an interceptor so that every message sent automatically gets a prefix
package com.hby.demo.chapter1;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import java.util.Map;

public class ProducerInterceptorPrefix implements ProducerInterceptor<String, String> {
    private volatile long sendSuccess = 0;
    private volatile long sendFailure = 0;

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // prepend a prefix to the value before it is sent
        String modifiedValue = "prefix1-" + record.value();
        return new ProducerRecord<>(record.topic(), record.partition(), record.timestamp(), record.key(), modifiedValue, record.headers());
    }

    @Override
    public void onAcknowledgement(RecordMetadata recordMetadata, Exception e) {
        // count how many sends succeeded or failed
        if (e == null) {
            sendSuccess++;
        } else {
            sendFailure++;
        }
    }

    @Override
    public void close() {
    }

    @Override
    public void configure(Map<String, ?> map) {
    }
}
Linux command notes
1. chmod
u refers to the file's owner
g refers to users in the same group as the owner
o refers to everyone else
a refers to all three of the above
+ adds a permission
- removes a permission
= sets the permissions to exactly what is specified
r means readable, w writable, x executable; X means executable only when the target is a directory or already has an execute bit set.
Remove read permission from everyone (the file can then no longer be opened), then restore it:
[root@CentOShby ~]# chmod a-r /home/hby/redis/test1/a.test
[root@CentOShby ~]# chmod a+r /home/hby/redis/test1/a.test
2. chown
chown requires superuser (root) privileges.
Only the superuser, or a file owner who belongs to the target group, can change a file's group; a non-superuser who needs to change the group may have to use chgrp instead.
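For example (the user and group names here are just the ones that appear elsewhere in these notes):
chown hby /home/hby/redis/test1/a.test       # change the file's owner to user hby
chown hby:hby /home/hby/redis/test1/a.test   # change owner and group in one step
chgrp hby /home/hby/redis/test1/a.test       # change only the group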
3. rm: delete files and directories
-i asks for confirmation before each deletion.
-f deletes even read-only files, without asking.
-r deletes the directory and everything under it.
rm -r jdk-11.0.10 asks for confirmation step by step;
rm -rf jdk-11.0.10 deletes the directory without asking:
rm -rf jdk-11.0.10
4. mkdir: create a directory
mkdir /home/hby/test
5. ps: view process information
ps [options] [--help]
ps has a large number of options; only a few common ones are listed here:
-a list all processes
-w wide output, showing more information
-au show more detailed information
-aux show all processes, including those of other users
[root@CentOShby ~]# ps -ef    # show every process together with its full command line
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Mar03 ? 00:00:05 /usr/lib/systemd/systemd --switched-root --system --deserialize
root 2 0 0 Mar03 ? 00:00:00 [kthreadd]
root 3 2 0 Mar03 ? 00:00:00 [rcu_gp]
root 4 2 0 Mar03 ? 00:00:00 [rcu_par_gp]
root 6 2 0 Mar03 ? 00:00:00 [kworker/0:0H-kblockd]
root 9 2 0 Mar03 ? 00:00:00 [mm_percpu_wq]
root 10 2 0 Mar03 ? 00:00:02 [ksoftirqd/0]
root 11 2 0 Mar03 ? 00:00:03 [rcu_sched]
[root@CentOShby ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a62bb26dcc39 wurstmeister/kafka "start-kafka.sh" 31 hours ago Up 31 hours 0.0.0.0:9092->9092/tcp docker_kafka_kafka_1
73f0a448af12 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 31 hours ago Up 31 hours 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp docker_kafka_zookeeper_1
242810d19a63 redis "docker-entrypoint.s…" 4 days ago Exited (255) 2 days ago 0.0.0.0:6380->6379/tcp redis80
7f9686d3f5a9 redis "docker-entrypoint.s…" 4 days ago Exited (255) 2 days ago 0.0.0.0:6379->6379/tcp redis
6. find: search for files
find interprets the part of the command line before the options as the path and the rest as the expression. If the path is an empty string, the current directory is used; if the expression is empty, -print is used as the default expression.
find path -option [ -print ] [ -exec -ok command ] {} \;
Find the files with a .properties extension:
[root@CentOShby config]# find . -name "*.properties"
./consumer.properties
./connect-mirror-maker.properties
./zookeeper.properties
./producer.properties
./connect-console-sink.properties
./connect-log4j.properties
./connect-standalone.properties
./connect-file-source.properties
./connect-console-source.properties
./connect-distributed.properties
./tools-log4j.properties
./connect-file-sink.properties
./log4j.properties
Find files changed within the last day:
[root@CentOShby config]# find . -ctime -1
.
./server.properties
7. mv: move or rename files and directories
Rename test to test1:
mv test test1
Move the test1 directory into test:
[root@CentOShby /]# cd /home/hby
[root@CentOShby hby]# mkdir test
[root@CentOShby hby]# mv test1/ test
Move everything inside a directory into the current directory: here test1 ends up inside the redis directory and the test directory is left empty:
[root@CentOShby hby]# cd redis
[root@CentOShby redis]# mv /home/hby/test/* .
8. cp: copy
Copy the test1 directory and everything in it into /home/hby/test:
cp -r test1 /home/hby/test
9. cat
-n or --number: number all output lines, starting from 1.
-b or --number-nonblank: like -n, but blank lines are not numbered.
With > the contents of a.txt overwrite b.txt, so whatever b.txt contained before is lost:
cat -n a.txt > b.txt
With >> the text is appended instead, and the target file is not overwritten:
cat -b a.txt >> b.txt
Empty a file:
cat /dev/null > /home/hby/redis/test1/a.txt
10. tail
Useful for looking at log files:
[root@CentOShby hby-0]# tail 00000000000000000000.log
C�2�w��nPw��nP��������������"hello worldM�S�w��w����������������6kafka-demohello,kafkaM�7"Nw�홶w�홶��������������6kafka-demohello,kafka[root@CentOShby hby-0]#
Follow a file as it grows:
tail -f notes.log
11. free
Shows memory usage:
[root@CentOShby /]# free
total used free shared buff/cache available
Mem: 1832972 1464912 94540 21848 273520 185796
Swap:       2097148     1543184      553964
With the -t option a totals line is added as well:
[root@CentOShby /]# free -t
total used free shared buff/cache available
Mem: 1832972 1464792 94660 21848 273520 185916
Swap: 2097148 1543184 553964
Total: 3930120 3007976 648624
12. top
Shows a live, continuously refreshed view of processes and resource usage; for more detail see https://blog.csdn.net/u012359618/article/details/53520949/
13. df
Shows disk usage for the file systems mounted on the Linux system.
The -h option makes the output human-readable:
[root@CentOShby /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 866M 0 866M 0% /dev
tmpfs 896M 0 896M 0% /dev/shm
tmpfs 896M 9.9M 886M 2% /run
tmpfs 896M 0 896M 0% /sys/fs/cgroup
/dev/mapper/cl-root 17G 11G 6.7G 62% /
/dev/sda1 1014M 240M 775M 24% /boot
tmpfs 179M 1.2M 178M 1% /run/user/42
tmpfs 179M 6.9M 173M 4% /run/user/1000
/dev/sr0 8.7G 8.7G 0 100% /run/media/hby/CentOS-8-3-2011-x86_64-dvd
tmpfs 179M 0 179M 0% /run/user/0
overlay 17G 11G 6.7G 62% /var/lib/docker/overlay2/999533b2f132a9754e5c72246b4a8af023d2f6af47a3c361d007b0343d65e982/merged
overlay 17G 11G 6.7G 62% /var/lib/docker/overlay2/87e93df4c23a925b97b53cd120f5a683b728f24ca4429c87b7332381677da2e8/merged
shm 64M 0 64M 0% /var/lib/docker/containers/a62bb26dcc390e763bc49f284132d6d8a26559399ffc7f9d38c113c1f00bb3ce/mounts/shm
shm 64M 0 64M 0% /var/lib/docker/containers/73f0a448af12b8aa0c0eb6544d765d3ec4833971ea2148127b4499c80534ad3d/mounts/shm
14. grep
1. In the current directory, search the files whose names start with b for the string bc and print the matching lines:
[root@CentOShby test1]# grep bc b*
1 bc
1 bc
1 bc
2. Recursively search every file under /home/hby/redis for "bc":
[root@CentOShby ~]# grep -r bc /home/hby/redis
/home/hby/redis/redis.conf:rdbcompression yes
/home/hby/redis/redis.conf:rdbchecksum yes
/home/hby/redis/redis.conf:# +<command>|subcommand Allow a specific subcommand of an otherwise
/home/hby/redis/redis79.conf:rdbcompression yes
/home/hby/redis/redis79.conf:rdbchecksum yes
/home/hby/redis/redis79.conf:# +<command>|subcommand Allow a specific subcommand of an otherwise
/home/hby/redis/redis80.conf:rdbcompression yes
/home/hby/redis/redis80.conf:rdbchecksum yes
/home/hby/redis/redis80.conf:# +<command>|subcommand Allow a specific subcommand of an otherwise
/home/hby/redis/test1/a.test:abc
/home/hby/redis/test1/b.text: 1 bc
/home/hby/redis/test1/b.text: 1 bc
/home/hby/redis/test1/b.text: 1 bc
15. ifconfig / ip a: show the machine's IP addresses
16. crontab
The crond scheduler is started by default once the operating system is installed. Every minute crond checks whether any job is due and, if so, runs it automatically.
[root@CentOShby ~]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
Append the current time to a file once a minute:
* * * * * date >> /home/hby/redis/test1/b.text
17. systemctl start/stop/restart xxx
Starts, stops, or restarts a service.
restart = stop + start
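For example, to manage the Docker service used earlier in these notes (assuming the service is registered under the name docker on this CentOS install):
systemctl start docker      # start the service
systemctl status docker     # check whether it is running
systemctl restart docker    # stop and start it again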
This article draws on many posts found online; it is intended only for study and discussion, not to pass off anyone else's work as my own.