kafka
瓦力冫
Enjoys reading, running, and loves game programming
Kafka single-node environment setup
1. Install ZooKeeper (Kafka requires ZooKeeper first). 1.1 Download and extract ZooKeeper. 1.2 Copy conf/zoo_sample.cfg to zoo.cfg and change the dataDir setting (dataDir=/Users/walle/data/zookeeper). 1.3 Start ZooKeeper: zkServer.sh start. 2. Download and extract Kafka. 3. Edit config/server.... Original post · 2018-06-16 19:11:11 · 806 reads · 0 comments -
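The steps in the preview above can be sketched as shell commands. The version numbers, archive names, and broker start command are illustrative assumptions (only the dataDir path comes from the post), and the commands obviously need the downloaded tarballs to actually run:

```shell
# 1. Install ZooKeeper (Kafka needs it running first)
tar -xzf zookeeper-3.4.12.tar.gz && cd zookeeper-3.4.12
cp conf/zoo_sample.cfg conf/zoo.cfg
# edit dataDir in conf/zoo.cfg, e.g.: dataDir=/Users/walle/data/zookeeper
bin/zkServer.sh start

# 2. Download and extract Kafka, then 3. edit config/server.properties
cd .. && tar -xzf kafka_2.11-1.1.0.tgz && cd kafka_2.11-1.1.0
bin/kafka-server-start.sh -daemon config/server.properties
```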
Storm + Kafka integration
1. Dependencies: <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-framework</artifactId> <version>${curator.version}</vers... Original post · 2018-06-30 20:30:32 · 643 reads · 1 comment -
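The truncated preview shows the Curator dependency. A fuller sketch of the Maven fragment such an integration typically needs is below; the storm-kafka artifact and the version properties are assumptions, since the post only shows curator-framework:

```xml
<!-- curator-framework appears in the post's preview -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>${curator.version}</version>
</dependency>
<!-- assumed: the Storm spout for Kafka lives in this artifact -->
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>${storm.version}</version>
</dependency>
```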
Flume: collecting logs into Kafka
Just replace server B's sink with a Kafka sink; server A stays unchanged: # Define a memory channel called ch1 on agent1 agent1.channels.ch1.type = memory agent1.channels.ch1.capacity = 1000 agent1.channels.ch1.transactionCapacity = 100 ... Original post · 2018-06-19 17:48:14 · 354 reads · 0 comments -
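A sketch of the swapped-in sink on server B, assuming Flume 1.7+ (where the bundled Kafka sink takes `kafka.bootstrap.servers`); the agent name `agent2`, broker address, and topic are assumptions, since the preview only shows server A's config:

```properties
# Server B: only the sink changes; source and channel stay as before.
agent2.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent2.sinks.kafka-sink.kafka.bootstrap.servers = localhost:9092
agent2.sinks.kafka-sink.kafka.topic = test
agent2.sinks.kafka-sink.channel = ch1
```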
Simple Kafka Java producer/consumer API
1. KafkaProperties package com.immooc.spark.kafka; public class KafkaProperties { public static final String ZK = "localhost:2181"; public static final String TOPIC = "test"; public sta... Original post · 2018-06-19 17:50:06 · 265 reads · 0 comments -
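A minimal sketch of the producer configuration that a KafkaProperties-style constants class feeds into. The topic "test" comes from the preview; the broker address (Kafka's default port 9092) and the acks setting are assumptions. It builds the config with plain java.util.Properties so it runs without a broker or the kafka-clients jar:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    static Properties producerProps() {
        Properties props = new Properties();
        // assumed broker address; the post's constants only show ZK = "localhost:2181"
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "1");
        return props;
    }

    public static void main(String[] args) {
        // a real producer would be: new KafkaProducer<>(producerProps())
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```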
Simple Kafka Java producer/consumer API 1.1
1. KafkaProperties package com.immooc.spark.kafka; public class KafkaProperties { public static final String ZK = "localhost:2181"; public static final String TOPIC = "test"; public stat... Original post · 2018-06-19 18:19:27 · 649 reads · 2 comments -
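The consumer side can be sketched the same way. The ZK = "localhost:2181" constant in the preview suggests the old ZooKeeper-based high-level consumer; the group id and offset-reset values are assumptions. Again plain Properties, so it runs standalone:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    static Properties consumerProps() {
        Properties props = new Properties();
        // ZK address comes from the post's KafkaProperties constants
        props.put("zookeeper.connect", "localhost:2181");
        // assumed group id; "smallest" reads the topic from the beginning
        props.put("group.id", "test-group");
        props.put("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("zookeeper.connect"));
    }
}
```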
Spark Streaming + Kafka integration
package com.test.spark import org.apache.kafka.clients.consumer.ConsumerRecord import org.apache.kafka.common.serialization.StringDeserializer import org.apache.spark.SparkConf import org.apache.spark... Original post · 2018-06-19 18:20:47 · 304 reads · 0 comments -
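The StringDeserializer import in the preview points at the direct-stream API, whose Subscribe strategy takes a kafkaParams map. A sketch of that map with plain string keys, so it runs without Spark or Kafka on the classpath; the broker address, group id, and offset/commit settings are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class KafkaParamsSketch {
    static Map<String, Object> kafkaParams() {
        Map<String, Object> params = new HashMap<>();
        params.put("bootstrap.servers", "localhost:9092"); // assumed broker
        params.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        params.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        params.put("group.id", "spark-streaming-group");   // assumed group id
        params.put("auto.offset.reset", "latest");
        // let Spark manage offsets rather than auto-committing
        params.put("enable.auto.commit", false);
        return params;
    }

    public static void main(String[] args) {
        System.out.println(kafkaParams().size());
    }
}
```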
Flume: collecting logs from log4j into Kafka
1. Flume configuration # Define a memory channel called ch1 on agent1 agent1.channels.ch1.type = memory agent1.channels.ch1.capacity = 1000 agent1.channels.ch1.transactionCapacity = 100 agent1.sources.avro-sou... Original post · 2018-06-19 18:21:47 · 958 reads · 0 comments
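On the application side, log4j ships events to the agent's avro source via Flume's bundled log4j appender. A sketch of that log4j.properties fragment; the hostname and port are assumptions and must match the avro source in the Flume configuration above:

```properties
log4j.rootLogger=INFO,stdout,flume
# Flume's bundled log4j appender; requires flume-ng-log4jappender on the app's classpath
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414
# keep logging even if the Flume agent is down (assumed setting)
log4j.appender.flume.UnsafeMode=true
```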