JMS:
Java Message Service
p2p: one-to-one model
point to point (in JMS, p2p means point-to-point, not "peer to peer")
p-s: one-to-many model
publish subscribe
kafka:
p2p + p-s = the consumer-group model (one topic, many groups: pub-sub across groups, p2p within a group)
middleware:
a technical component that carries no business logic
what is kafka? //real-time data processing
==========================
distributed messaging system
distributed database
distributed cache
kafka was originally created at LinkedIn
kafka version naming:
kafka_2.11-1.1.0 //2.11 is the Scala version
//Scala: a JVM language, roughly a more script-like Java
//1.1.0 is the Kafka version
trying out kafka:
=======================================
local (standalone) mode:
unpack the tarball
create a symbolic link
set environment variables
source the environment variables (a sketch of these steps follows below)
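A minimal sketch of these four steps, assuming the tarball is named kafka_2.11-1.1.0.tgz, the install root is /soft (as used below), and environment variables go in /etc/profile (the xkafka script later sources that file); run as root or with sudo:
-------------------------------
tar -xzvf kafka_2.11-1.1.0.tgz -C /soft/
ln -s /soft/kafka_2.11-1.1.0 /soft/kafka
echo 'export KAFKA_HOME=/soft/kafka' >> /etc/profile
echo 'export PATH=$PATH:$KAFKA_HOME/bin' >> /etc/profile
source /etc/profile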
config file: /soft/kafka/config/server.properties
------------------------------
listeners=PLAINTEXT://s101:9092 //address the broker listens on
log.dirs=/home/centos/kafka/logs //where partition data is stored (not application logs)
zookeeper.connect=s102:2181,s103:2181,s104:2181 //ZooKeeper quorum
start the kafka server:
-------------------------------
kafka-server-start.sh -daemon /soft/kafka/config/server.properties
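To confirm the broker came up, one quick check (jps ships with the JDK and shows the broker's main class, Kafka):
jps //should list a Kafka process
ss -tln | grep 9092 //the listener port from server.properties should be bound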
stop the kafka server:
---------------------------------
kafka-server-stop.sh
create a topic: //the keeper of messages
---------------------------------
kafka-topics.sh --zookeeper s102:2181 --create --partitions 2 --replication-factor 1 --topic t1
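Verify the topic and see where its partitions landed (same tool, standard --describe flag):
kafka-topics.sh --zookeeper s102:2181 --describe --topic t1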
start a producer //message sender
---------------------------------
kafka-console-producer.sh --broker-list s101:9092 --topic t1
start a consumer //message receiver
---------------------------------
kafka-console-consumer.sh --zookeeper s102:2181 --topic t1 --from-beginning
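Quick smoke test with both terminals open (the input line is just an example):
>hello kafka //typed into the producer terminal
hello kafka //printed by the consumer terminal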
fully distributed mode:
1. copy the kafka installation to the other nodes
s102-s104
2. distribute the environment variables and source them
3. edit the config file on each node
s102> nano /soft/kafka/config/server.properties //set broker.id=102
//listeners=PLAINTEXT://s102:9092
s103> nano /soft/kafka/config/server.properties //set broker.id=103
//listeners=PLAINTEXT://s103:9092
s104> nano /soft/kafka/config/server.properties //set broker.id=104
//listeners=PLAINTEXT://s104:9092
3.5. in ZooInspector, delete the old kafka znodes from ZooKeeper:
/controller /brokers /admin /controller_epoch /consumers /latest_producer_id_block /config /isr_change_notification /cluster /log_dir_event_notification
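The same cleanup can also be done from the command line, if your ZooKeeper's zkCli.sh supports rmr (3.4.x does):
zkCli.sh -server s102:2181
rmr /controller
rmr /brokers //repeat rmr for each of the remaining znodes listed above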
4. start kafka on each of s102-s104
kafka-server-start.sh -daemon /soft/kafka/config/server.properties
5. list topics
kafka-topics.sh --zookeeper s102:2181 --list
create a topic
kafka-topics.sh --zookeeper s102:2181 --create --partitions 2 --replication-factor 1 --topic t2
6. start a console producer
kafka-console-producer.sh --broker-list s102:9092 --topic t2
7. start a console consumer
kafka-console-consumer.sh --zookeeper s102:2181 --topic t2 --from-beginning
writing the xkafka script: /usr/local/bin/xkafka.sh (remember to add execute permission)
===================================================
#!/bin/bash
# start/stop kafka on s102-s104 in one command
if [ $# -ne 1 ] ; then echo "invalid arguments: exactly one argument (start|stop) is required" ; exit 1 ; fi
cmd=$1
for (( i=102 ; i<=104 ; i++ )) ; do
    tput setaf 2
    echo ========================== s$i $cmd ==========================
    tput setaf 9
    case $cmd in
        start ) ssh s$i "source /etc/profile ; kafka-server-start.sh -daemon /soft/kafka/config/server.properties" ;;
        stop  ) ssh s$i "source /etc/profile ; kafka-server-stop.sh" ;;
        *     ) echo "illegal param: $cmd" ; exit 1 ;;
    esac
done
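Usage, after chmod +x /usr/local/bin/xkafka.sh:
xkafka.sh start //starts kafka on s102-s104
xkafka.sh stop //stops kafka on s102-s104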
publish-subscribe with kafka:
======================================
start a producer on s102
start consumers on s103 and s104
kafka-console-consumer.sh --zookeeper s102:2181 --topic t2
//no --group option: each console consumer gets its own auto-generated group, so every consumer receives every message
point-to-point with kafka:
======================================
start a producer on s102
start consumers on s103 and s104
kafka-console-consumer.sh --zookeeper s102:2181 --topic t2 --group g1
//both consumers share group g1, so the topic's partitions are divided between them and each message is delivered to only one consumer in the group
producing and consuming with the kafka API
============================================
kafka producer
--------------------------------------
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        //broker (server) addresses
        props.put("metadata.broker.list", "s102:9092,s103:9092,s104:9092");
        //serializer class for message values (String)
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        //wrap the java Properties into a ProducerConfig
        ProducerConfig config = new ProducerConfig(props);
        //initialize the producer with the ProducerConfig
        //in <String, String>, the first type is the key (not used here), the second is the value, i.e. the actual payload
        Producer<String, String> producer = new Producer<String, String>(config);
        String topic = "t3";
        //send one message every 500 ms
        for (int i = 1000; i < 2000; i++) {
            KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic, "tom" + i);
            producer.send(data);
            Thread.sleep(500);
        }
        producer.close();
    }
}
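The producer above writes to topic t3, which the earlier steps never created; either rely on auto-creation (auto.create.topics.enable defaults to true) or create it explicitly, then watch the messages with a console consumer:
kafka-topics.sh --zookeeper s102:2181 --create --partitions 2 --replication-factor 2 --topic t3
kafka-console-consumer.sh --zookeeper s102:2181 --topic t3 --from-beginning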
kafka consumer
----------------------------------------------
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "s102:2181,s103:2181,s104:2181");
        props.put("group.id", "g2");
        props.put("zookeeper.session.timeout.ms", "500");
        props.put("zookeeper.sync.time.ms", "250");
        props.put("auto.commit.interval.ms", "1000");
        ConsumerConfig conf = new ConsumerConfig(props);
        ConsumerConnector consumer = kafka.consumer.Consumer
                .createJavaConsumerConnector(conf);
        //topic -> number of streams (threads) to consume it with
        Map<String, Integer> topicMap = new HashMap<String, Integer>();
        topicMap.put("t3", 1);
        //get the message streams from the consumer
        //map key: topic name, map value: the list of streams for that topic
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerStreamsMap =
                consumer.createMessageStreams(topicMap);
        //fetch all streams for topic t3
        List<KafkaStream<byte[], byte[]>> streamList = consumerStreamsMap.get("t3");
        //iterate over each stream and print the message values
        for (final KafkaStream<byte[], byte[]> stream : streamList) {
            ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
            while (consumerIte.hasNext())
                System.out.println("Message from Single Topic :: "
                        + new String(consumerIte.next().message()));
        }
    }
}
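This is the old ZooKeeper-based high-level consumer (removed in Kafka 2.0 in favor of org.apache.kafka.clients.consumer.KafkaConsumer), so its group is registered in ZooKeeper and can be inspected from the command line:
kafka-consumer-groups.sh --zookeeper s102:2181 --list
kafka-consumer-groups.sh --zookeeper s102:2181 --describe --group g2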