一、Kafka Overview
Core components:
Topic: a message category; incoming messages are classified by topic.
Producer: publishes messages to topics.
Consumer: subscribes to topics and reads messages.
Broker: a single Kafka server instance; a cluster is made up of brokers.
ZooKeeper: the cluster depends on ZooKeeper to store its metadata.
Cluster model (diagram)
二、Common Kafka Commands
1. Create a topic
bin/kafka-topics.sh --create --zookeeper server:2181 --replication-factor 1 --partitions 1 --topic test
2. Console producer
bin/kafka-console-producer.sh --broker-list server:9092 --topic test
3. Console consumer
bin/kafka-console-consumer.sh --zookeeper server:2181 --topic test
4. Start the Kafka server
bin/kafka-server-start.sh config/server.properties
5. Stop the Kafka server
bin/kafka-server-stop.sh
三、Kafka Java API
See Notes 5: Kafka API.
四、Flume & Kafka: Monitoring Logs
Real-time monitoring 1: Flume collects the logs; a Kafka consumer prints them.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerConnector;
import kafka.consumer.KafkaStream;
import kafka.message.MessageAndMetadata;

public class KafkaConsumerDemo {

    private static final String topic = "itcast";
    private static final Integer threads = 2;

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "server:2181,server02:2181,server03:2181");
        props.put("group.id", "vvvvv");
        // "smallest" = start from the earliest available offset
        props.put("auto.offset.reset", "smallest");

        // Create the consumer connector
        ConsumerConfig config = new ConsumerConfig(props);
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(config);

        // Ask for `threads` streams for this topic
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, threads);

        // The map value is a list of raw byte streams for each topic
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);

        // Get the consumer streams for our topic
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        // One thread per stream: iterate and print each message
        for (final KafkaStream<byte[], byte[]> kafkaStream : streams) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    for (MessageAndMetadata<byte[], byte[]> mm : kafkaStream) {
                        String msg = new String(mm.message());
                        System.out.println(msg);
                    }
                }
            }).start();
        }
    }
}
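The topicCountMap/thread mechanics above can be sketched without any Kafka dependency. In this hedged simulation, a BlockingQueue of byte[] stands in for each KafkaStream (an illustrative assumption, not Kafka's implementation), and one thread drains each stream, mirroring the loop in the snippet:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the threading model: asking for `threads` streams yields one
// stream per thread, and each thread iterates its own stream independently.
public class StreamThreadsSketch {

    static List<String> consume(List<BlockingQueue<byte[]>> streams) throws InterruptedException {
        final List<String> seen = new CopyOnWriteArrayList<String>();
        final CountDownLatch done = new CountDownLatch(streams.size());
        for (final BlockingQueue<byte[]> stream : streams) {
            new Thread(new Runnable() {           // one consumer thread per stream
                public void run() {
                    byte[] raw;
                    while ((raw = stream.poll()) != null) {
                        seen.add(new String(raw)); // decode like new String(mm.message())
                    }
                    done.countDown();
                }
            }).start();
        }
        done.await();
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 2;                           // matches the snippet's `threads`
        List<BlockingQueue<byte[]>> streams = new ArrayList<BlockingQueue<byte[]>>();
        for (int i = 0; i < threads; i++) {
            BlockingQueue<byte[]> q = new LinkedBlockingQueue<byte[]>();
            q.add(("msg-from-stream-" + i).getBytes());
            streams.add(q);
        }
        System.out.println(consume(streams));      // both messages; order may vary
    }
}
```

Note the even split of work comes from Kafka assigning partitions across the requested streams; with more threads than partitions, some streams stay empty.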
五、Kafka Producer -->> KafkaSpout -->> Bolt -->> Redis
① Kafka Producer (the same pattern applies to any other kind of producer):
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OrderProducer {

    public static void main(String[] args) {
        String TOPIC = "ccc";
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", "server:9092,server02:9092,server03:9092");
        // acks=1: the leader must acknowledge each write
        props.put("request.required.acks", "1");
        props.put("partitioner.class", "kafka.producer.DefaultPartitioner");

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));

        // Send one random order record every 0.1 seconds
        for (int messageNo = 1; messageNo < 10000; messageNo++) {
            producer.send(new KeyedMessage<String, String>(TOPIC, messageNo + " ", new OrderInfo().random()));
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
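The partitioner.class setting above decides which partition each keyed message lands on. A hedged sketch of hash partitioning follows; the exact modulo formula is an illustration of the technique, not a copy of Kafka's source:

```java
// Sketch of hash partitioning, the technique behind partitioner classes like
// kafka.producer.DefaultPartitioner: the key's hash, reduced modulo the
// partition count, picks the partition. Masking the sign bit (instead of
// Math.abs) avoids the Integer.MIN_VALUE overflow case.
public class PartitionerSketch {

    static int partition(Object key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The producer above uses `messageNo + " "` as the key
        for (int messageNo = 1; messageNo <= 3; messageNo++) {
            String key = messageNo + " ";
            System.out.println(key.trim() + " -> partition " + partition(key, 3));
        }
    }
}
```

The practical point: the same key always hashes to the same partition, so per-key ordering is preserved within a partition.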
② Kafka2StormTopologyMain

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class Kafka2StormTopologyMain {

    public static void main(String[] args) throws Exception {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        // Set the spout to a KafkaSpout reading topic "ccc"
        topologyBuilder.setSpout("kafkaSpout", new KafkaSpout(new SpoutConfig(
                new ZkHosts("server:2181,server02:2181,server03:2181"),
                "ccc", "/myKafka", "kafkaSpout")), 1);
        // Set the bolt; this bolt writes the data into Redis
        topologyBuilder.setBolt("mybolt1", new ParserOrderMqBolt(), 1).shuffleGrouping("kafkaSpout");

        Config config = new Config();
        config.setNumWorkers(1);

        // Submit the topology: cluster mode if a name was passed in, otherwise local mode
        if (args.length > 0) {
            StormSubmitter.submitTopology(args[0], config, topologyBuilder.createTopology());
        } else {
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("storm2kafka", config, topologyBuilder.createTopology());
        }
    }
}
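The shuffleGrouping call above spreads the spout's tuples evenly across the bolt's tasks. A hedged sketch of that property follows; it uses plain round-robin, while Storm's real shuffle grouping randomizes, so only the even-distribution behavior is being illustrated:

```java
import java.util.Arrays;

// Sketch: shuffle grouping spreads tuples evenly over bolt tasks.
// Simulated as round-robin over task indices.
public class ShuffleGroupingSketch {

    static int[] distribute(int numTuples, int numTasks) {
        int[] perTask = new int[numTasks];
        for (int i = 0; i < numTuples; i++) {
            perTask[i % numTasks]++;   // round-robin task choice
        }
        return perTask;
    }

    public static void main(String[] args) {
        // 10 tuples over 3 bolt tasks -> counts differ by at most one
        System.out.println(Arrays.toString(distribute(10, 3))); // prints [4, 3, 3]
    }
}
```

With parallelism 1, as in the topology above, shuffling is trivial; it matters once setBolt's parallelism hint is raised.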
③ Bolt: cache results in Redis

JedisPoolConfig config = new JedisPoolConfig();

config.setMaxIdle(5);
config.setMaxTotal(1000 * 100);
config.setMaxWaitMillis(30);
config.setTestOnBorrow(true);
config.setTestOnReturn(true);
/**
 * If you hit "java.net.SocketTimeoutException: Read timed out",
 * try passing your own timeout value when constructing the JedisPool.
 * The JedisPool default timeout is 2 seconds (in milliseconds).
 */
pool = new JedisPool(config, "127.0.0.1", 6379); // build the JedisPool

public void execute(Tuple input) {
    // The incoming tuple holds a JSON string
    String string = new String((byte[]) input.getValue(0));
    // Parse the JSON into an OrderInfo
    OrderInfo orderInfo = new Gson().fromJson(string, OrderInfo.class);

    // Borrow a Jedis instance from the pool
    Jedis jedis = pool.getResource();

    // Accumulate the sales total per product id: each INCRBY adds the new
    // order's price on top of the key's existing value
    String bid = getBubyProductId(orderInfo.getProductId(), "c");
    jedis.incrBy(bid + "_total_amount", orderInfo.getProductPrice());

    String bid1 = getBubyProductId(orderInfo.getProductId(), "b");
    jedis.incrBy(bid1 + "_total_amount", orderInfo.getProductPrice());

    String bid2 = getBubyProductId(orderInfo.getProductId(), "s");
    jedis.incrBy(bid2 + "_total_amount", orderInfo.getProductPrice());

    String bid3 = getBubyProductId(orderInfo.getProductId(), "p");
    jedis.incrBy(bid3 + "_total_amount", orderInfo.getProductPrice());

    jedis.close();
}
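The jedis.incrBy calls above implement a running total per key. A hedged sketch of that aggregation, with a HashMap standing in for Redis (the key name is illustrative, not the bolt's exact key):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of what the bolt's incrBy calls accomplish, simulated in memory.
// Mirrors Redis INCRBY semantics: a missing key starts at 0, then the
// amount is added and the updated total returned.
public class IncrBySketch {

    private final Map<String, Long> store = new HashMap<String, Long>();

    long incrBy(String key, long amount) {
        long updated = (store.containsKey(key) ? store.get(key) : 0L) + amount;
        store.put(key, updated);
        return updated;
    }

    public static void main(String[] args) {
        IncrBySketch redis = new IncrBySketch();
        redis.incrBy("p100_total_amount", 250);  // first order: key created at 0, then +250
        redis.incrBy("p100_total_amount", 120);  // second order accumulates
        System.out.println(redis.incrBy("p100_total_amount", 30)); // prints 400
    }
}
```

Because INCRBY is atomic in Redis, multiple bolt tasks can update the same key concurrently without losing increments, which is why no read-modify-write locking appears in the bolt.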
④ Running it
Start redis-server locally, start the Kafka producer, then launch the topology to load the data into the local Redis.
Then open redis-cli and run "get <id>_total_amount" to read that product's accumulated sales total.