Offline Data Analysis Platform: User Interest Analysis (3) - Importing Data into Kafka


1. Create the topics in the Kafka directory

cd /usr/hdp/current/kafka-broker/

Eight topics are created in total, one per data file.

(users topic)
(1) Run the following command in the Kafka directory to create the topic:

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic users --partitions 1 --replication-factor 1

(2) Set the users topic's message retention to one week (604800000 ms):

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic users --config retention.ms=604800000
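
To confirm that the topic exists and the retention override took effect, you can describe it:

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --describe --topic users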

The other seven topics are created the same way; the commands follow.
(user_friends topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic user_friends --partitions 3 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic user_friends --config retention.ms=604800000

(events topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic events --partitions 3 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic events --config retention.ms=604800000

(event_attendees_raw topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic event_attendees_raw --partitions 1 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic event_attendees_raw --config retention.ms=604800000

(event_attendees topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic event_attendees --partitions 3 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic event_attendees --config retention.ms=604800000

(train topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic train --partitions 1 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic train --config retention.ms=604800000

(test topic)

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic test --partitions 1 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic test --config retention.ms=604800000
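(user_friends_raw topic)

The user_friends agent in section 3 publishes to an eighth topic, user_friends_raw; assuming it follows the same pattern as event_attendees_raw (a single partition), the commands would be:

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --create --topic user_friends_raw --partitions 1 --replication-factor 1

bin/kafka-topics.sh --zookeeper sandbox-hdp.hortonworks.com:2181 --alter --topic user_friends_raw --config retention.ms=604800000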

You can check the number of messages in each partition of a topic with the following command:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list sandbox-hdp.hortonworks.com:6667 --topic users --time -1 --offsets 1
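
To spot-check the message contents rather than just the counts, a console consumer works; --max-messages keeps the output short:

bin/kafka-console-consumer.sh --bootstrap-server sandbox-hdp.hortonworks.com:6667 --topic users --from-beginning --max-messages 10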



2. Create the directories

The file channel needs checkpoint and data directories, and the spooldir source needs an input directory to watch.

mkdir -p /var/flume/checkpoint/users
mkdir -p /var/flume/data/users
chmod 777 -R /var/flume
mkdir -p /events/input/intra/users
chmod 777 -R /events/

3. Create the agent in Flume

(1) Create the agent configuration file under Flume's configuration directory.
(2) The configuration is as follows:

#users agent

#Deploy the following content into Flume
#Initialize agent's source, channel and sink
users.sources = usersSource
users.channels = usersChannel
users.sinks = usersSink

#Use a channel which buffers events in a directory

users.channels.usersChannel.type = file
users.channels.usersChannel.checkpointDir = /var/flume/checkpoint/users
users.channels.usersChannel.dataDirs = /var/flume/data/users

#Setting the source to spool directory where the file exists

users.sources.usersSource.type = spooldir
users.sources.usersSource.deserializer = LINE
users.sources.usersSource.deserializer.maxLineLength = 6400
users.sources.usersSource.spoolDir = /events/input/intra/users
users.sources.usersSource.includePattern = users_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
users.sources.usersSource.interceptors = head_filter
users.sources.usersSource.interceptors.head_filter.type = regex_filter
users.sources.usersSource.interceptors.head_filter.regex = ^user_id,locale,birthyear,gender,joinedAt,location,timezone$
users.sources.usersSource.interceptors.head_filter.excludeEvents = true 
users.sources.usersSource.channels = usersChannel

#Define / Configure sink

users.sinks.usersSink.type = org.apache.flume.sink.kafka.KafkaSink
users.sinks.usersSink.batchSize = 640
users.sinks.usersSink.brokerList = sandbox-hdp.hortonworks.com:6667
users.sinks.usersSink.topic = users
users.sinks.usersSink.channel = usersChannel

(3) Save the file and restart Flume. Once the agent starts successfully, it begins watching the spool directory.
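
On the HDP sandbox, Flume is usually restarted through Ambari. To run the agent standalone instead, something like the following should work (the users.conf path is an assumption; use wherever you saved the agent file):

cd /usr/hdp/current/flume-server
bin/flume-ng agent --name users --conf conf --conf-file conf/users.conf -Dflume.root.logger=INFO,console

The --name value must match the component prefix in the configuration (users), otherwise the agent starts with no sources or sinks.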

4. Import the files

Run the following command from the directory containing the CSV file. install copies the file, sets its permissions, and gives it a date-stamped name matching the source's includePattern in a single step:

install -m 777 users.csv /events/input/intra/users/users_2021-01-25.csv

You can check the import status with the following command (the spooldir source renames a file with a .COMPLETED suffix once it has been fully consumed):

 ll /events/input/intra/users/

The remaining agents follow the same pattern, so only the commands and configuration are given below.

Create the directories

mkdir -p /var/flume/checkpoint/events
mkdir -p /var/flume/data/events
mkdir -p /events/input/intra/events
chmod 777 -R /var/flume
chmod 777 -R /events/

#events agent

#Deploy the following content into Flume
#Initialize agent's source, channel and sink
events.sources = eventsSource
events.channels = eventsChannel
events.sinks = eventsSink1 eventsSink2 eventsSink3
#Load-balance the three Kafka sinks over a single channel
events.sinkgroups = grpEvents
events.sinkgroups.grpEvents.sinks = eventsSink1 eventsSink2 eventsSink3
events.sinkgroups.grpEvents.processor.type = load_balance
events.sinkgroups.grpEvents.processor.backoff = true
events.sinkgroups.grpEvents.processor.selector = round_robin

#Use a channel which buffers events in a directory

events.channels.eventsChannel.type = file
events.channels.eventsChannel.checkpointDir = /var/flume/checkpoint/events
events.channels.eventsChannel.dataDirs = /var/flume/data/events
events.channels.eventsChannel.transactionCapacity = 5000

#Setting the source to spool directory where the file exists

events.sources.eventsSource.type = spooldir
events.sources.eventsSource.deserializer = LINE
events.sources.eventsSource.deserializer.maxLineLength = 32000
events.sources.eventsSource.spoolDir = /events/input/intra/events
events.sources.eventsSource.includePattern = events_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
events.sources.eventsSource.interceptors = head_filter
events.sources.eventsSource.interceptors.head_filter.type = regex_filter
events.sources.eventsSource.interceptors.head_filter.regex = ^event_id,user_id,start_time,city,state,zip,country,lat,lng,c_1,c_2,c_3,c_4,c_5,c_6,c_7,c_8,c_9,c_10,c_11,c_12,c_13,c_14,c_15,c_16,c_17,c_18,c_19,c_20,c_21,c_22,c_23,c_24,c_25,c_26,c_27,c_28,c_29,c_30,c_31,c_32,c_33,c_34,c_35,c_36,c_37,c_38,c_39,c_40,c_41,c_42,c_43,c_44,c_45,c_46,c_47,c_48,c_49,c_50,c_51,c_52,c_53,c_54,c_55,c_56,c_57,c_58,c_59,c_60,c_61,c_62,c_63,c_64,c_65,c_66,c_67,c_68,c_69,c_70,c_71,c_72,c_73,c_74,c_75,c_76,c_77,c_78,c_79,c_80,c_81,c_82,c_83,c_84,c_85,c_86,c_87,c_88,c_89,c_90,c_91,c_92,c_93,c_94,c_95,c_96,c_97,c_98,c_99,c_100,c_other$
events.sources.eventsSource.interceptors.head_filter.excludeEvents = true
events.sources.eventsSource.channels = eventsChannel

#Define / Configure sinks
events.sinks.eventsSink1.type = org.apache.flume.sink.kafka.KafkaSink
events.sinks.eventsSink1.batchSize = 1280
events.sinks.eventsSink1.brokerList = sandbox-hdp.hortonworks.com:6667
events.sinks.eventsSink1.topic = events
events.sinks.eventsSink1.channel = eventsChannel
events.sinks.eventsSink2.type = org.apache.flume.sink.kafka.KafkaSink
events.sinks.eventsSink2.batchSize = 1280
events.sinks.eventsSink2.brokerList = sandbox-hdp.hortonworks.com:6667
events.sinks.eventsSink2.topic = events
events.sinks.eventsSink2.channel = eventsChannel
events.sinks.eventsSink3.type = org.apache.flume.sink.kafka.KafkaSink
events.sinks.eventsSink3.batchSize = 1280
events.sinks.eventsSink3.brokerList = sandbox-hdp.hortonworks.com:6667
events.sinks.eventsSink3.topic = events
events.sinks.eventsSink3.channel = eventsChannel

Import the file:

install -m 777 events.csv /events/input/intra/events/events_2021-01-25.csv
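
Since the events topic has three partitions fed by three load-balanced sinks, the offset tool from section 1 can confirm that messages are spread across all partitions:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list sandbox-hdp.hortonworks.com:6667 --topic events --time -1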

Create the directories

mkdir -p /var/flume/checkpoint/train
mkdir -p /var/flume/data/train
mkdir -p /events/input/intra/train

hdfs dfs -mkdir -p /user/events/driver

Change the directory permissions

chmod 777 -R /var/flume
chmod 777 -R /events/

hdfs dfs -chmod -R 777 /user/events/driver

#train agent

#Initialize agent's source, channel and sink

train.sources = trainSource
#The source replicates each event into both channels: the file channel feeds Kafka, the memory channel feeds HDFS
train.channels = trainChannel driverChannel
train.sinks = trainSink driverSink

#Use a channel which buffers events in a directory

train.channels.trainChannel.type = file
train.channels.trainChannel.checkpointDir = /var/flume/checkpoint/train
train.channels.trainChannel.dataDirs = /var/flume/data/train

#Setting the channel to memory

train.channels.driverChannel.type = memory
train.channels.driverChannel.capacity = 64000
train.channels.driverChannel.transactionCapacity = 16000

#Setting the source to spool directory where the file exists

train.sources.trainSource.type = spooldir
train.sources.trainSource.deserializer = LINE
train.sources.trainSource.deserializer.maxLineLength = 3200
train.sources.trainSource.spoolDir = /events/input/intra/train
train.sources.trainSource.includePattern = train_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
train.sources.trainSource.interceptors = head_filter
train.sources.trainSource.interceptors.head_filter.type = regex_filter
train.sources.trainSource.interceptors.head_filter.regex = ^user,event,invited,timestamp,interested,not_interested$
train.sources.trainSource.interceptors.head_filter.excludeEvents = true 
train.sources.trainSource.channels = trainChannel driverChannel

#Define / Configure sink

train.sinks.trainSink.type = org.apache.flume.sink.kafka.KafkaSink
train.sinks.trainSink.batchSize = 640
train.sinks.trainSink.brokerList = sandbox-hdp.hortonworks.com:6667
train.sinks.trainSink.topic = train
train.sinks.trainSink.channel = trainChannel

#Setting the sink to HDFS

train.sinks.driverSink.type = hdfs
train.sinks.driverSink.hdfs.fileType = DataStream
train.sinks.driverSink.hdfs.filePrefix = train
train.sinks.driverSink.hdfs.fileSuffix = .csv
train.sinks.driverSink.hdfs.path = /user/events/driver/%Y-%m-%d
train.sinks.driverSink.hdfs.useLocalTimeStamp = true
train.sinks.driverSink.hdfs.batchSize = 6400

#Number of events written to a file before it is rolled

train.sinks.driverSink.hdfs.rollCount = 3200

#File size to trigger roll, in bytes

train.sinks.driverSink.hdfs.rollSize = 640000

#Number of seconds to wait before rolling current file 

train.sinks.driverSink.hdfs.rollInterval = 300
train.sinks.driverSink.channel = driverChannel

Import the file:

install -m 777 train.csv /events/input/intra/train/train_2021-01-25.csv
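
Besides the Kafka topic, driverSink also writes date-partitioned CSV files to HDFS; once a roll condition triggers, you can list the output:

hdfs dfs -ls -R /user/events/driver/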

Create the directories

mkdir -p /var/flume/checkpoint/test
mkdir -p /var/flume/data/test
mkdir -p /events/input/intra/test

chmod 777 -R /var/flume
chmod 777 -R /events/

#test agent

# Initialize agent's source, channel and sink
test.sources = testSource
test.channels = testChannel
test.sinks = testSink

# Use a channel which buffers events in a directory
test.channels.testChannel.type = file
test.channels.testChannel.checkpointDir = /var/flume/checkpoint/test
test.channels.testChannel.dataDirs = /var/flume/data/test

# Setting the source to spool directory where the file exists
test.sources.testSource.type = spooldir
test.sources.testSource.deserializer = LINE
test.sources.testSource.deserializer.maxLineLength = 6400
test.sources.testSource.spoolDir = /events/input/intra/test
test.sources.testSource.includePattern = test_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
test.sources.testSource.interceptors = head_filter
test.sources.testSource.interceptors.head_filter.type = regex_filter
test.sources.testSource.interceptors.head_filter.regex = ^user,event,invited,timestamp$
test.sources.testSource.interceptors.head_filter.excludeEvents = true
test.sources.testSource.channels = testChannel

# Define / Configure sink
test.sinks.testSink.type = org.apache.flume.sink.kafka.KafkaSink
test.sinks.testSink.batchSize = 640
test.sinks.testSink.brokerList = sandbox-hdp.hortonworks.com:6667
test.sinks.testSink.topic = test
test.sinks.testSink.channel = testChannel

Import the file:

install -m 777 test.csv /events/input/intra/test/test_2021-01-25.csv

Create the directories

mkdir -p /var/flume/checkpoint/user_friends
mkdir -p /var/flume/data/user_friends
mkdir -p /events/input/intra/user_friends

chmod 777 -R /var/flume
chmod 777 -R /events/

#user_friends agent

# Initialize agent's source, channel and sink
user_friends.sources = user_friendsSource
user_friends.channels = user_friendsChannel
user_friends.sinks = user_friendsSink

# Use a channel which buffers events in a directory
user_friends.channels.user_friendsChannel.type = file
user_friends.channels.user_friendsChannel.checkpointDir = /var/flume/checkpoint/user_friends
user_friends.channels.user_friendsChannel.dataDirs = /var/flume/data/user_friends

# Setting the source to spool directory where the file exists
user_friends.sources.user_friendsSource.type = spooldir
user_friends.sources.user_friendsSource.deserializer = LINE
user_friends.sources.user_friendsSource.deserializer.maxLineLength = 128000
user_friends.sources.user_friendsSource.spoolDir = /events/input/intra/user_friends
user_friends.sources.user_friendsSource.includePattern = user_friends_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
user_friends.sources.user_friendsSource.interceptors = head_filter
user_friends.sources.user_friendsSource.interceptors.head_filter.type = regex_filter
user_friends.sources.user_friendsSource.interceptors.head_filter.regex = ^user,friends$
user_friends.sources.user_friendsSource.interceptors.head_filter.excludeEvents = true
user_friends.sources.user_friendsSource.channels = user_friendsChannel

# Define / Configure sink
user_friends.sinks.user_friendsSink.type = org.apache.flume.sink.kafka.KafkaSink
user_friends.sinks.user_friendsSink.batchSize = 640
user_friends.sinks.user_friendsSink.brokerList = sandbox-hdp.hortonworks.com:6667
user_friends.sinks.user_friendsSink.topic = user_friends_raw
user_friends.sinks.user_friendsSink.channel = user_friendsChannel

Import the file (the target name must match the includePattern above):

install -m 777 user_friends.csv /events/input/intra/user_friends/user_friends_2021-01-26.csv

Create the directories

mkdir -p /var/flume/checkpoint/event_attendees
mkdir -p /var/flume/data/event_attendees
mkdir -p /events/input/intra/event_attendees

Change the permissions

chmod 777 -R /var/flume
chmod 777 -R /events/

#event_attendees agent

#Initialize agent's source, channel and sink

event_attendees.sources = eventAttendeesSource
event_attendees.channels = eventAttendeesChannel
event_attendees.sinks = eventAttendeesSink

#Use a channel which buffers events in a directory

event_attendees.channels.eventAttendeesChannel.type = file
event_attendees.channels.eventAttendeesChannel.checkpointDir = /var/flume/checkpoint/event_attendees
event_attendees.channels.eventAttendeesChannel.dataDirs = /var/flume/data/event_attendees

#Setting the source to spool directory where the file exists

event_attendees.sources.eventAttendeesSource.type = spooldir
event_attendees.sources.eventAttendeesSource.deserializer = LINE
event_attendees.sources.eventAttendeesSource.deserializer.maxLineLength = 12800
event_attendees.sources.eventAttendeesSource.spoolDir = /events/input/intra/event_attendees
event_attendees.sources.eventAttendeesSource.includePattern = eventAttendees_[0-9]{4}-[0-9]{2}-[0-9]{2}\.csv
event_attendees.sources.eventAttendeesSource.interceptors = head_filter
event_attendees.sources.eventAttendeesSource.interceptors.head_filter.type = regex_filter
event_attendees.sources.eventAttendeesSource.interceptors.head_filter.regex = ^event,yes,maybe,invited,no$
event_attendees.sources.eventAttendeesSource.interceptors.head_filter.excludeEvents = true
event_attendees.sources.eventAttendeesSource.channels = eventAttendeesChannel

#Define / Configure sink
 
event_attendees.sinks.eventAttendeesSink.type = org.apache.flume.sink.kafka.KafkaSink
event_attendees.sinks.eventAttendeesSink.batchSize = 640
event_attendees.sinks.eventAttendeesSink.brokerList = sandbox-hdp.hortonworks.com:6667
event_attendees.sinks.eventAttendeesSink.topic = event_attendees_raw
event_attendees.sinks.eventAttendeesSink.channel = eventAttendeesChannel

Import the file:

install -m 777 event_attendees.csv /events/input/intra/event_attendees/eventAttendees_2021-01-26.csv
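
As a final check, a small loop (a convenience sketch, run from the Kafka directory) compares the end offsets of every topic the agents write to:

for t in users user_friends_raw events event_attendees_raw train test; do
  echo "== $t =="
  bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list sandbox-hdp.hortonworks.com:6667 --topic "$t" --time -1
done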