Revisiting a Big Data Project — E-commerce Data Warehouse (2): Data Collection Module (Part 1)

9. Data Collection Module

(I). Hadoop Installation

Cluster plan:

        hadoop102 server             hadoop103 server                  hadoop104 server
HDFS    NameNode, DataNode           DataNode                          DataNode, SecondaryNameNode
Yarn    NodeManager                  ResourceManager, NodeManager      NodeManager
(1). Project experience: multiple HDFS storage directories
  • ① Identify the HDFS storage directories and make sure the data sits on the disks with the most free space (a quick free-space check is sketched after the config below).
  • ② Configure the multiple directories in hdfs-site.xml. Do this ahead of time if possible, because changing the directories later requires a cluster restart.
<property>
    <name>dfs.datanode.data.dir</name>
	<value>file:///${hadoop.tmp.dir}/dfs/data1,file:///hd2/dfs/data2,file:///hd3/dfs/data3,file:///hd4/dfs/data4</value>
</property>
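Before filling in dfs.datanode.data.dir it helps to confirm which mounts actually have the most free space. A minimal sketch, assuming the /hd2, /hd3 and /hd4 mount points from the config above exist on the DataNodes; the xcall helper (the cluster-wide shell script mentioned later in this post) is optional:

# free space on each candidate data disk on this node
[weiwei@hadoop102 ~]$ df -h /hd2 /hd3 /hd4
# or check every node at once with the xcall helper script
[weiwei@hadoop102 ~]$ xcall df -h /hd2 /hd3 /hd4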
(2). Project experience: enabling LZO compression
  • ① Download the hadoop-lzo project
    https://github.com/twitter/hadoop-lzo/archive/master.zip
  • ② The download is named hadoop-lzo-master and is a zip archive. Unzip it and build it with Maven to produce hadoop-lzo-0.4.20.jar (a rough build sketch follows below).
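The exact build steps depend on the environment; the following is only a minimal sketch, assuming Maven, a JDK and the native lzo library with its development headers are already installed (the include/lib paths are placeholders to adjust to your machine):

[weiwei@hadoop102 software]$ unzip master.zip
[weiwei@hadoop102 software]$ cd hadoop-lzo-master
# point the build at the native lzo headers/libraries (paths are environment-specific)
[weiwei@hadoop102 hadoop-lzo-master]$ export C_INCLUDE_PATH=/usr/local/include
[weiwei@hadoop102 hadoop-lzo-master]$ export LIBRARY_PATH=/usr/local/lib
# skip the tests to keep the build short; the jar ends up under target/
[weiwei@hadoop102 hadoop-lzo-master]$ mvn clean package -Dmaven.test.skip=true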
  • ③ Copy the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/
[weiwei@hadoop102 common]$ pwd
/opt/module/hadoop-2.7.2/share/hadoop/common
[weiwei@hadoop102 common]$ ls
hadoop-lzo-0.4.20.jar
  • ④ Sync hadoop-lzo-0.4.20.jar to hadoop103 and hadoop104
[weiwei@hadoop102 common]$ xsync hadoop-lzo-0.4.20.jar
  • ⑤ Add the following to core-site.xml to enable LZO compression
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

	<property>
	<name>io.compression.codecs</name>
	<value>
	org.apache.hadoop.io.compress.GzipCodec,
	org.apache.hadoop.io.compress.DefaultCodec,
	org.apache.hadoop.io.compress.BZip2Codec,
	org.apache.hadoop.io.compress.SnappyCodec,
	com.hadoop.compression.lzo.LzoCodec,
	com.hadoop.compression.lzo.LzopCodec
	</value>
	</property>

	<property>
	    <name>io.compression.codec.lzo.class</name>
	    <value>com.hadoop.compression.lzo.LzoCodec</value>
	</property>

</configuration>
  • ⑥ Sync core-site.xml to hadoop103 and hadoop104
[weiwei@hadoop102 hadoop]$ xsync core-site.xml
  • ⑦ Start the cluster and check it
[weiwei@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
[weiwei@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh

1) Check via the web UI and processes
Web UI: http://hadoop102:50070
Processes: run jps on each node to check its state.
2) If startup fails:
Check the logs under /opt/module/hadoop-2.7.2/logs
If HDFS is stuck in safe mode, leave it with: hdfs dfsadmin -safemode leave
Otherwise stop all processes, delete the data and logs directories, then reformat with hdfs namenode -format:

sbin/stop-dfs.sh
rm -rf data/ logs/    (delete on every node, e.g. with the xcall script)
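Once the cluster is up, an optional way to confirm that the LZO codec is actually usable (not part of the original steps) is to run a small MapReduce job with LZO output compression; the input path and file below are arbitrary examples:

[weiwei@hadoop102 hadoop-2.7.2]$ hadoop fs -mkdir /lzo-test
[weiwei@hadoop102 hadoop-2.7.2]$ hadoop fs -put README.txt /lzo-test
[weiwei@hadoop102 hadoop-2.7.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec /lzo-test /lzo-test-output
# the job should finish and the output files should carry the .lzo extension
[weiwei@hadoop102 hadoop-2.7.2]$ hadoop fs -ls /lzo-test-output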
(3). Project experience: benchmarking
  • 1) Test HDFS write performance
    Test: write 10 files of 128 MB each to the HDFS cluster
[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 128MB
19/05/02 11:55:42 INFO fs.TestDFSIO: TestDFSIO.1.8
19/05/02 11:55:42 INFO fs.TestDFSIO: nrFiles = 10
19/05/02 11:55:42 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
19/05/02 11:55:42 INFO fs.TestDFSIO: bufferSize = 1000000
19/05/02 11:55:42 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/05/02 11:55:45 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
19/05/02 11:55:47 INFO fs.TestDFSIO: created control files for: 10 files
19/05/02 11:55:47 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8032
19/05/02 11:55:48 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8032
19/05/02 11:55:49 INFO mapred.FileInputFormat: Total input paths to process : 10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: number of splits:10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1556766549220_0004
19/05/02 11:55:50 INFO impl.YarnClientImpl: Submitted application application_1556766549220_0004
19/05/02 11:55:50 INFO mapreduce.Job: The url to track the job: http://hadoop103:8088/proxy/application_1556766549220_0004/
19/05/02 11:55:50 INFO mapreduce.Job: Running job: job_1556766549220_0004
19/05/02 11:56:04 INFO mapreduce.Job: Job job_1556766549220_0004 running in uber mode : false
19/05/02 11:56:04 INFO mapreduce.Job:  map 0% reduce 0%
19/05/02 11:56:24 INFO mapreduce.Job:  map 7% reduce 0%
19/05/02 11:56:27 INFO mapreduce.Job:  map 23% reduce 0%
19/05/02 11:56:28 INFO mapreduce.Job:  map 63% reduce 0%
19/05/02 11:56:29 INFO mapreduce.Job:  map 73% reduce 0%
19/05/02 11:56:30 INFO mapreduce.Job:  map 77% reduce 0%
19/05/02 11:56:31 INFO mapreduce.Job:  map 87% reduce 0%
19/05/02 11:56:32 INFO mapreduce.Job:  map 100% reduce 0%
19/05/02 11:56:35 INFO mapreduce.Job:  map 100% reduce 100%
19/05/02 11:56:36 INFO mapreduce.Job: Job job_1556766549220_0004 completed successfully
19/05/02 11:56:36 INFO mapreduce.Job: Counters: 51
        File System Counters
                FILE: Number of bytes read=856
                FILE: Number of bytes written=1304826
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2350
                HDFS: Number of bytes written=1342177359
                HDFS: Number of read operations=43
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=12
        Job Counters 
                Killed map tasks=1
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=8
                Rack-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=263635
                Total time spent by all reduces in occupied slots (ms)=9698
                Total time spent by all map tasks (ms)=263635
                Total time spent by all reduce tasks (ms)=9698
                Total vcore-milliseconds taken by all map tasks=263635
                Total vcore-milliseconds taken by all reduce tasks=9698
                Total megabyte-milliseconds taken by all map tasks=269962240
                Total megabyte-milliseconds taken by all reduce tasks=9930752
        Map-Reduce Framework
                Map input records=10
                Map output records=50
                Map output bytes=750
                Map output materialized bytes=910
                Input split bytes=1230
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=910
                Reduce input records=50
                Reduce output records=5
                Spilled Records=100
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=17343
                CPU time spent (ms)=96930
                Physical memory (bytes) snapshot=2821341184
                Virtual memory (bytes) snapshot=23273218048
                Total committed heap usage (bytes)=2075656192
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1120
        File Output Format Counters 
                Bytes Written=79
19/05/02 23:39:42 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
19/05/02 23:39:42 INFO fs.TestDFSIO:            Date & time: Thu May 02 23:39:42 CST 2019
19/05/02 23:39:42 INFO fs.TestDFSIO:        Number of files: 10
19/05/02 23:39:42 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 23:39:42 INFO fs.TestDFSIO:      Throughput mb/sec: 10.69751115716984
19/05/02 23:39:42 INFO fs.TestDFSIO: Average IO rate mb/sec: 14.91699504852295
19/05/02 23:39:42 INFO fs.TestDFSIO:  IO rate std deviation: 11.160882132355928
19/05/02 23:39:42 INFO fs.TestDFSIO:     Test exec time sec: 66.231
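A note on reading these numbers (my interpretation of TestDFSIO's reporting, not something stated in the log): "Throughput mb/sec" divides the total data volume by the I/O time summed over all map tasks, while "Average IO rate mb/sec" is the mean of the per-file rates, which is why the two values differ. The aggregate I/O time implied by the write run above can be recovered with a one-liner:

# 1280 MB at about 10.70 MB/s -> roughly 120 seconds of I/O time summed over the 10 map tasks
[weiwei@hadoop102 ~]$ echo "scale=2; 1280 / 10.69751115716984" | bc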
  • 2) Test HDFS read performance

    Test: read 10 files of 128 MB each from the HDFS cluster

[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 128MB

19/05/02 11:55:42 INFO fs.TestDFSIO: TestDFSIO.1.8
19/05/02 11:55:42 INFO fs.TestDFSIO: nrFiles = 10
19/05/02 11:55:42 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
19/05/02 11:55:42 INFO fs.TestDFSIO: bufferSize = 1000000
19/05/02 11:55:42 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
19/05/02 11:55:45 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
19/05/02 11:55:47 INFO fs.TestDFSIO: created control files for: 10 files
19/05/02 11:55:47 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8032
19/05/02 11:55:48 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.1.103:8032
19/05/02 11:55:49 INFO mapred.FileInputFormat: Total input paths to process : 10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: number of splits:10
19/05/02 11:55:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1556766549220_0004
19/05/02 11:55:50 INFO impl.YarnClientImpl: Submitted application application_1556766549220_0004
19/05/02 11:55:50 INFO mapreduce.Job: The url to track the job: http://hadoop103:8088/proxy/application_1556766549220_0004/
19/05/02 11:55:50 INFO mapreduce.Job: Running job: job_1556766549220_0004
19/05/02 11:56:04 INFO mapreduce.Job: Job job_1556766549220_0004 running in uber mode : false
19/05/02 11:56:04 INFO mapreduce.Job:  map 0% reduce 0%
19/05/02 11:56:24 INFO mapreduce.Job:  map 7% reduce 0%
19/05/02 11:56:27 INFO mapreduce.Job:  map 23% reduce 0%
19/05/02 11:56:28 INFO mapreduce.Job:  map 63% reduce 0%
19/05/02 11:56:29 INFO mapreduce.Job:  map 73% reduce 0%
19/05/02 11:56:30 INFO mapreduce.Job:  map 77% reduce 0%
19/05/02 11:56:31 INFO mapreduce.Job:  map 87% reduce 0%
19/05/02 11:56:32 INFO mapreduce.Job:  map 100% reduce 0%
19/05/02 11:56:35 INFO mapreduce.Job:  map 100% reduce 100%
19/05/02 11:56:36 INFO mapreduce.Job: Job job_1556766549220_0004 completed successfully
19/05/02 11:56:36 INFO mapreduce.Job: Counters: 51
        File System Counters
                FILE: Number of bytes read=852
                FILE: Number of bytes written=1304796
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1342179630
                HDFS: Number of bytes written=78
                HDFS: Number of read operations=53
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Killed map tasks=1
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=8
                Rack-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=233690
                Total time spent by all reduces in occupied slots (ms)=7215
                Total time spent by all map tasks (ms)=233690
                Total time spent by all reduce tasks (ms)=7215
                Total vcore-milliseconds taken by all map tasks=233690
                Total vcore-milliseconds taken by all reduce tasks=7215
                Total megabyte-milliseconds taken by all map tasks=239298560
                Total megabyte-milliseconds taken by all reduce tasks=7388160
        Map-Reduce Framework
                Map input records=10
                Map output records=50
                Map output bytes=746
                Map output materialized bytes=906
                Input split bytes=1230
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=906
                Reduce input records=50
                Reduce output records=5
                Spilled Records=100
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=6473
                CPU time spent (ms)=57610
                Physical memory (bytes) snapshot=2841436160
                Virtual memory (bytes) snapshot=23226683392
                Total committed heap usage (bytes)=2070413312
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1120
        File Output Format Counters 
                Bytes Written=78
19/05/02 11:56:36 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
19/05/02 11:56:36 INFO fs.TestDFSIO:            Date & time: Thu May 02 11:56:36 CST 2019
19/05/02 11:56:36 INFO fs.TestDFSIO:        Number of files: 10
19/05/02 11:56:36 INFO fs.TestDFSIO: Total MBytes processed: 1280.0
19/05/02 11:56:36 INFO fs.TestDFSIO:      Throughput mb/sec: 16.001000062503905
19/05/02 11:56:36 INFO fs.TestDFSIO: Average IO rate mb/sec: 17.202795028686523
19/05/02 11:56:36 INFO fs.TestDFSIO:  IO rate std deviation: 4.881590515873911
19/05/02 11:56:36 INFO fs.TestDFSIO:     Test exec time sec: 49.116
19/05/02 11:56:36 INFO fs.TestDFSIO:
  • 3) Delete the data generated by the tests
[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.2-tests.jar TestDFSIO -clean
  • 4) Evaluate MapReduce with the Sort program

① Use RandomWriter to generate random data: each node runs 10 Map tasks, and each Map produces roughly 1 GB of binary random data

[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar randomwriter random-data

② Run the Sort program

[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar sort random-data sorted-data

③ Verify that the data really is sorted

[weiwei@hadoop102 mapreduce]$ hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar testmapredsort -sortInput random-data -sortOutput sorted-data
(4). Project experience: Hadoop parameter tuning
  • 1) HDFS tuning in hdfs-site.xml
    ① dfs.namenode.handler.count = 20 * log2(cluster size); for an 8-node cluster, for example, this works out to 60 (a small calculation sketch follows after this list).

The number of Namenode RPC server threads that listen to requests from
clients. If dfs.namenode.servicerpc-address is not configured then
Namenode RPC server threads listen to requests from all nodes.
The NameNode keeps a pool of worker threads to handle concurrent heartbeats from the DataNodes and concurrent metadata operations from clients. For large clusters, or clusters with many clients, the default value of 10 for dfs.namenode.handler.count usually needs to be increased. A common rule of thumb is to set it to 20 times the base-2 logarithm of the cluster size, i.e. 20·log2(N), where N is the number of nodes.

② Keep the edit-log path dfs.namenode.edits.dir and the fsimage path dfs.namenode.name.dir as separate as possible (ideally on different disks) to minimise write latency.
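As a quick sanity check of the 20·log2(N) rule from ① above, the value for an 8-node cluster can be computed on the command line; this assumes bc is installed (l() is bc's natural logarithm, so l(N)/l(2) gives log2(N)):

# dfs.namenode.handler.count for an 8-node cluster: 20 * log2(8) = 60
[weiwei@hadoop102 ~]$ echo "20 * l(8) / l(2)" | bc -l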

  • 2) YARN tuning in yarn-site.xml
    ① ★★★ Scenario: 7 machines in total, several hundred million records per day, data flow: source -> Flume -> Kafka -> HDFS -> Hive ★★★
    Problem: the statistics are done mainly in HiveQL; there is no data skew, small files are already being merged, JVM reuse is enabled, I/O is not a bottleneck, and memory utilisation is under 50%. Yet the jobs still run very slowly, and when a data peak arrives the whole cluster goes down. Is there any way to optimise in this situation?
    ② Solution:
    The memory is under-utilised. This is usually caused by two YARN settings: the maximum memory a single task may request, and the memory available to YARN on a single node. Tuning these two parameters raises the cluster's memory utilisation (a quick per-node memory check is sketched at the end of this list).
    (a) yarn.nodemanager.resource.memory-mb
    The total physical memory YARN may use on the node; the default is 8192 (MB). Note that YARN does not detect the node's physical memory automatically, so if a node has less than 8 GB of RAM this value must be lowered.
    (b) yarn.scheduler.maximum-allocation-mb
    The maximum physical memory a single task may request; the default is 8192 (MB).
    ③ Hadoop going down
    (a) If MapReduce jobs bring the cluster down, limit the number of tasks YARN runs concurrently and the maximum memory each task may request. Relevant parameter: yarn.scheduler.maximum-allocation-mb (maximum physical memory a single task may request, default 8192 MB).
    (b) If excessive file writes bring the NameNode down, increase Kafka's storage/retention and throttle the write rate from Kafka to HDFS. Let Kafka buffer the data during peaks; once the peak has passed, the downstream sync catches up automatically.
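Before setting yarn.nodemanager.resource.memory-mb it is worth checking how much physical memory each node actually has. A minimal sketch, using the same ssh-loop style as the zk.sh script later in this post:

# print the physical memory (in MB) on every node in the cluster
[weiwei@hadoop102 ~]$ for i in hadoop102 hadoop103 hadoop104; do echo ------------$i-----------; ssh $i "free -m | grep Mem"; done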

(II). Zookeeper Installation

(1) Distributed installation and deployment

1. Cluster plan
Deploy Zookeeper on the three nodes hadoop102, hadoop103 and hadoop104.
2. Unpack and install
(1) Unpack the Zookeeper archive into /opt/module/

[weiwei@hadoop102 software]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/

(2) Sync the /opt/module/zookeeper-3.4.10 directory to hadoop103 and hadoop104

[weiwei@hadoop102 module]$ xsync zookeeper-3.4.10/

3. Configure the server id
(1) Create a zkData directory under /opt/module/zookeeper-3.4.10/

[weiwei@hadoop102 zookeeper-3.4.10]$ mkdir zkData

(2) Create a file named myid under /opt/module/zookeeper-3.4.10/zkData

[weiwei@hadoop102 zkData]$ touch myid

Be sure to create the myid file on the Linux side; creating it in Notepad++ can easily introduce encoding problems.
(3) Edit the myid file

[weiwei@hadoop102 zkData]$ vi myid

Add the id that corresponds to this server: 1

(4) Copy the configured zookeeper files to the other machines

[weiwei@hadoop102 zkData]$ xsync myid

Then set the contents of myid to 2 on hadoop103 and to 3 on hadoop104 (a one-line sketch for doing this over ssh follows).
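Editing each myid by hand works, but the same can be done from hadoop102 over ssh. A small sketch, assuming the xsync above has already copied the zkData directory to the other nodes:

[weiwei@hadoop102 zkData]$ ssh hadoop103 "echo 2 > /opt/module/zookeeper-3.4.10/zkData/myid"
[weiwei@hadoop102 zkData]$ ssh hadoop104 "echo 3 > /opt/module/zookeeper-3.4.10/zkData/myid"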
4. Configure zoo.cfg
(1) In /opt/module/zookeeper-3.4.10/conf, rename zoo_sample.cfg to zoo.cfg

[weiwei@hadoop102 conf]$ mv zoo_sample.cfg zoo.cfg

(2) Edit the zoo.cfg file

[weiwei@hadoop102 conf]$ vim zoo.cfg
# change the data directory
dataDir=/opt/module/zookeeper-3.4.10/zkData
# add the following configuration
#######################cluster##########################
server.1=hadoop102:2888:3888
server.2=hadoop103:2888:3888
server.3=hadoop104:2888:3888

(3) Sync the zoo.cfg configuration file

[weiwei@hadoop102 conf]$ xsync zoo.cfg
(2) Zookeeper cluster start/stop script

1) Create the script in the /home/weiwei/bin directory on hadoop102

[weiwei@hadoop102 bin]$ vim zk.sh

Put the following content into the script

#! /bin/bash

case $1 in
"start"){
        echo "================     Starting Zookeeper     ================"
        for i in hadoop102 hadoop103 hadoop104
        do
                echo ------------$i-----------
                ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh start"
        done
};;
"stop"){
        echo "================     Stopping Zookeeper     ================"
        for i in hadoop102 hadoop103 hadoop104
        do
                echo ------------$i-----------
                ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh stop"
        done
};;
"status"){
        for i in hadoop102 hadoop103 hadoop104
        do
                ssh $i "/opt/module/zookeeper-3.4.10/bin/zkServer.sh status"
        done
};;
esac

Note: if the start command fails here (for example because zkServer.sh cannot find JAVA_HOME when invoked over ssh):

Fix: add export JAVA_HOME=/opt/module/jdk1.8.0_112 near the beginning of /opt/module/zookeeper-3.4.10/bin/zkEnv.sh

2) Make the script executable

[weiwei@hadoop102 bin]$ chmod 777 zk.sh

3) Start the Zookeeper cluster

[weiwei@hadoop102 module]$ zk.sh start

4) Stop the Zookeeper cluster

[weiwei@hadoop102 module]$ zk.sh stop

5) Check the Zookeeper cluster status

[weiwei@hadoop102 module]$ zk.sh status
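As an extra check beyond zk.sh status (optional, not part of the original steps), the ensemble can be exercised with the client shell; 2181 is the default clientPort inherited from zoo_sample.cfg:

[weiwei@hadoop102 zookeeper-3.4.10]$ bin/zkCli.sh -server hadoop102:2181
# inside the client shell, listing the root znode should work without errors:
ls /
quit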