Hadoop Flume & Sqoop

Flume

Overview

Flume can be used to collect log files from a bank of web servers and move the log events from those files into new, aggregated files in HDFS for processing. Flume is also flexible enough to write to other systems, such as HBase or Solr. Using Flume is mainly a configuration exercise in wiring different agents together.

A Flume agent is a long-lived Java process that runs sources and sinks, connected by channels. A source produces events and delivers them to one or more channels, which store the events until they are forwarded to sinks. Source-channel-sink is the basic building block of Flume.

Agents on the edge systems collect data and forward it to agents that are responsible for aggregating and storing the data in its final destination.

  • Running Flume Agent
%flume-ng agent \
 --conf-file agent_config.properties \
 --name agent_name \
 --conf $FLUME_HOME/conf \
 -Dflume.root.logger=INFO,console
  • Agent Configuration
# source, channel and sink declaration
agent_name.sources=source1 source2 ...
agent_name.sinks=sink1 sink2 ...
agent_name.channels=channel1 channel2 ...

# wiring source-channel-sink
# (a source can feed several channels; a sink drains exactly one)
agent_name.sources.source1.channels=channel1 channel2
agent_name.sinks.sink1.channel=channel1
agent_name.sinks.sink2.channel=channel2

# configure a particular source
agent_name.sources.source1.type=spooldir
agent_name.sources.source1.spoolDir=path

# configure a particular channel
agent_name.channels.channel1.type=memory
# the file channel persists events and removes them only once they are consumed
agent_name.channels.channel2.type=file

# configure a particular sink
agent_name.sinks.sink1.type=logger

agent_name.sinks.sink2.type=hdfs
agent_name.sinks.sink2.hdfs.path=/tmp/flume
agent_name.sinks.sink2.hdfs.filePrefix=events
agent_name.sinks.sink2.hdfs.fileSuffix=.avro
agent_name.sinks.sink2.hdfs.fileType=DataStream
agent_name.sinks.sink2.serializer=avro_event
agent_name.sinks.sink2.serializer.compressionCodec=snappy

  • Event format: { headers: {...} body: [...] }
    • optional headers: a map of string key-value pairs
    • body: a raw byte array (the logger sink prints it in both binary and string form)

Transaction and Reliability

Flume uses separate transactions to guarantee delivery from the source to the channel, and from the channel to the sink. If the file channel is used, once an event has been written to the channel it will never be lost, even if the agent restarts. The memory channel, by contrast, can lose events if the agent restarts, but it delivers much higher throughput.

The overall effect is that every event produced by the source reaches the sink AT LEAST ONCE; duplicates are possible. The stronger EXACTLY ONCE semantics would require a two-phase commit protocol, which is expensive. Flume chooses the AT LEAST ONCE approach to gain throughput; duplicates can be removed by downstream processing anyway.
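
As a sketch, this trade-off shows up directly in the channel configuration (the property values below are illustrative, not defaults):

# memory channel: fast, but buffered events are lost if the agent restarts
agent_name.channels.channel1.type=memory
agent_name.channels.channel1.capacity=10000
agent_name.channels.channel1.transactionCapacity=1000

# file channel: events are persisted to disk and survive an agent restart
agent_name.channels.channel2.type=file
agent_name.channels.channel2.checkpointDir=/var/flume/checkpoint
agent_name.channels.channel2.dataDirs=/var/flume/data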

HDFS Sink
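
The HDFS sink writes events to files in HDFS, rolling to a new file by time, size, or event count. A sketch of the rolling settings (the values are illustrative assumptions, not defaults):

# roll to a new file every 30 seconds or 128 MB, whichever comes first
agent_name.sinks.sink2.hdfs.rollInterval=30
agent_name.sinks.sink2.hdfs.rollSize=134217728
# disable rolling by event count
agent_name.sinks.sink2.hdfs.rollCount=0
# files still being written get a leading underscore, so MapReduce ignores them
agent_name.sinks.sink2.hdfs.inUsePrefix=_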

Chaining

  • Fan Out: delivering events from one source to multiple channels, so they reach multiple sinks.
  • Agent Tiers: aggregating Flume events (from different agents) is achieved by having tiers of Flume agents. The first tier collects events from the original sources (say, web servers) and sends them to a smaller set of second-tier agents, which aggregate events from different first-tier agents before writing to HDFS. Tiers are constructed by using a special SINK that sends events over the NETWORK, and a corresponding SOURCE that receives them.
    • Avro SINK sends events to an Avro SOURCE over Avro RPC (this has nothing to do with Avro data files).
    • Thrift SINK sends events to a Thrift SOURCE over Thrift RPC.
# 1st Tier Avro SINK : sending events
agent_name.sinks.sink1.type=avro
agent_name.sinks.sink1.hostname=ip_address
agent_name.sinks.sink1.port=10000

# 2nd Tier Avro SOURCE : receiving events
agent_name.sources.source1.type=avro
agent_name.sources.source1.bind=ip_address
agent_name.sources.source1.port=10000
  • Sink Group: allows multiple sinks to be treated as one, for failover or load-balancing purposes.
# declare a group
agent_name.sinkgroups=sinkgroup1

# configure a particular group
agent_name.sinkgroups.sinkgroup1.sinks=sink1 sink2
agent_name.sinkgroups.sinkgroup1.processor.type=load_balance
agent_name.sinkgroups.sinkgroup1.processor.backoff=true
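
For failover rather than load balancing, a sketch of the alternative processor configuration (the priorities are illustrative; the highest-priority available sink is used until it fails):

agent_name.sinkgroups.sinkgroup1.processor.type=failover
agent_name.sinkgroups.sinkgroup1.processor.priority.sink1=10
agent_name.sinkgroups.sinkgroup1.processor.priority.sink2=5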

Application Integration

An Avro source is an RPC endpoint that accepts Flume events, making it possible to write an RPC client to send events to the endpoint.

  • Flume SDK is a module that provides a Java RpcClient class for sending Event objects to an Avro endpoint (a client sketch follows this list).
  • Flume Embedded Agent is a cut-down Flume agent that runs inside a Java application.
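
A minimal client sketch using the Flume SDK, assuming an Avro source is listening on localhost:10000 (host, port, and the event body are placeholders):

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class FlumeRpcClientSketch {
  public static void main(String[] args) throws EventDeliveryException {
    // connect to the remote Avro source
    RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 10000);
    try {
      // build an event from a byte-array body and send it
      Event event = EventBuilder.withBody("hello, flume", StandardCharsets.UTF_8);
      client.append(event);
    } finally {
      client.close();
    }
  }
}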

Sqoop

Connectors

Built-in connectors support MySQL, PostgreSQL, Oracle, DB2, SQL Server, and Netezza. There is also a generic JDBC connector for connecting to any database that supports the JDBC protocol.

There are also various third-party connectors available for other data stores, ranging from enterprise data warehouses (such as Teradata) to NoSQL stores (such as Couchbase).

Import Commands

  • By default, the imported files are comma-delimited text files.
  • File format, delimiters, compression, and other parameters can be configured as well (see the sketch after this list):
    • Sequence files
    • Avro files
    • Parquet files
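
A sketch of those options (the connect string and table name are placeholders; --as-sequencefile and --as-parquetfile select the other formats):

# import as Snappy-compressed Avro data files
%sqoop import \
 --connect jdbc:mysql://host/database \
 --table tablename \
 --as-avrodatafile \
 --compress \
 --compression-codec org.apache.hadoop.io.compress.SnappyCodec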
# -------------------------
# Sqoop import.
#  --split-by   : column used to partition the query across map tasks
#  -m           : number of map tasks (defaults to 4)
#  --incremental/--check-column/--last-value :
#                 import only rows newer than the last recorded value
%sqoop import \
 --connect jdbc:mysql://host/database \
 --table tablename \
 --split-by column_name \
 -m numberOfMapReduceTasks \
 --incremental append \
 --check-column columnname \
 --last-value lastValue


# ------------------------
# To view the imported files
%hadoop fs -cat tablename/part-m-00000

Process

  • Sqoop examines the table to be imported and retrieves a list of all columns and their SQL types.
  • Sqoop's code generator uses this information to generate a table-specific class, which holds a record extracted from the table during MapReduce processing (a sketch of such a class follows this list).
    • JDBC executes the query and returns a ResultSet.
    • DBInputFormat populates the table-specific class with the data from the ResultSet, via the two methods the generated class implements:
      • readFields(ResultSet): fills the record's fields on import
      • write(PreparedStatement): sets statement parameters on export
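
A hypothetical sketch of such a generated class, assuming a table with columns (id INT, name VARCHAR); a real Sqoop-generated class also implements Writable and adds text parsing and delimiter handling:

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical stand-in for the class Sqoop generates per table
public class TableRecord implements DBWritable {
  private int id;
  private String name;

  // called during import: populate fields from the JDBC ResultSet
  public void readFields(ResultSet results) throws SQLException {
    this.id = results.getInt(1);
    this.name = results.getString(2);
  }

  // called during export: set parameters on the INSERT statement
  public void write(PreparedStatement stmt) throws SQLException {
    stmt.setInt(1, this.id);
    stmt.setString(2, this.name);
  }
}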

Reposted from: https://my.oschina.net/u/3551123/blog/1484025
