Day 1: Hadoop basic concepts; pseudo-distributed Hadoop cluster installation; HDFS and MapReduce demos
Day 2: HDFS principles, usage, and programming
Day 3: MapReduce principles and programming
Day 4: Common MR algorithm implementations and the shuffle mechanism
Day 5: The HA mechanism in Hadoop 2.x; fully distributed cluster installation, deployment, and maintenance
Day 6: HBase, Hive
Day 7: Storm + Kafka
Day 8: Hands-on project
Hadoop, Cloudera
Cloudera EDH (Enterprise Data Hub)
data crowdsourcing
ResourceManager, NodeManager, NameNode, DataNode
What is Hadoop
GFS
MapReduce
BigTable
(the three Google papers that Hadoop grew out of)
What can Hadoop do: log analysis
Hive: log analysis
Pig: higher-level data processing (people you may know, product recommendation, spam detection and filtering, user feature modeling)
Tmall: Hive, Mahout (classic machine-learning algorithms)
Mahout is a distributed machine learning and data mining framework built on Hadoop. It implements a number of data mining algorithms with MapReduce, solving the problem of mining in parallel.
HDFS: the Hadoop Distributed File System (architecture diagram)
YARN: resource management and scheduling (Hadoop 1.0 vs. 2.0)
DFS: Distributed File System
Since we are on the subject of distributed file systems, a few more words:
1. GlusterFS: supports the standard POSIX interface and can serve as a distributed NAS; some people use it for HPC, and it even supports KVM VM volumes. Distributed NAS is its most common role; many internet video companies use GlusterFS for their media libraries.
2. Ceph: supports block (Ceph RBD), object (Ceph RGW), and file (CephFS) access. RBD and RGW are fairly mature and popular in the OpenStack community, widely used for VM block storage; CephFS had a lot of bugs early on, and the community has been working through them.
3. Lustre: a veteran distributed file system, deployed on top of multiple SAN arrays; it does not support replicas but does support distributed locking, and is mainly used for HPC (high-performance computing).
4. HDFS: only supports append writes; its design does not cover in-place updates, truncation, sparse writes, or other complex POSIX semantics. It is not meant to be a general-purpose file system and is generally used as the storage engine of the Hadoop ecosystem.
5. MooseFS: a C++ implementation fairly close to GoogleFS; it supports standard POSIX through FUSE, so it counts as a general-purpose file system, but unfortunately its community is not very active.
6. IBM GPFS: another veteran distributed file system, very powerful, with two branches, a general-purpose file system and a Hadoop MapReduce-compatible one; unfortunately it is not open source, and hardly anyone in China can afford it.
7. Facebook Haystack: a prototype of a dedicated photo storage system, suited to small files and WORM (write once, read many) workloads; it is not open source itself, but there is a fairly mature implementation on GitHub, Terry-Mao/bfs (not Baidu's BFS).
One concept that often gets confused here: distributed file system vs. distributed computing.
From the question description, you need distributed computing (audio/video processing in the cloud), so GlusterFS and the others mentioned will not solve your problem; they are only distributed file systems.
Distributed computing at minimum requires the task to be decomposable; for audio/video it depends on the specific file format, and there is no universal solution.
The traditional way to handle large audio/video files is a SAN: a very expensive machine on a very expensive network attached to very expensive storage.
It mainly depends on your specific business and storage/access patterns; in practice, audio/video workflows such as broadcast production still mostly use SAN-like systems.
FastDFS has advantages for storing large numbers of small files; I have not used it for that scenario myself.
Hadoop's HDFS suits large files and sequential-read workloads; check whether that matches your use case. By the way, random access latency on HDFS is quite high, and even sequential access needs tuning to get good throughput.
Source: https://blog.csdn.net/enweitech/article/details/82414361
Storage Area Network (SAN)
SAN storage (Storage Area Network): storage arrays and server hosts are connected through switches (for example Fibre Channel or InfiniBand switches) to form a dedicated storage network.
Network Attached Storage (NAS)
NAS (Network Attached Storage): a file system accessed over an IP network; think of it as disks plus file-system software. A NAS device can be attached directly to the Ethernet, after which hosts with different operating systems on that network can all access it.
Install Hadoop 2.10.1 and JDK 1.8 on CentOS 7
The VM's three network connection modes: bridged, NAT (and host-only)
Install CentOS 7, enable the network, and obtain the host IP automatically
service network restart
Linux graphical interface vs. text mode
vi /etc/inittab   (on CentOS 7 this file is ignored; use systemctl set-default below instead)
init 3            (switch to text mode)
id:5:initdefault:   (the old SysV default-runlevel line)
init has 7 runlevels, with the following meanings:
0: halt / power off (never set initdefault to 0)
1: single-user mode, root-only maintenance
2: multi-user mode without NFS (Network File System)
3: full multi-user mode (the standard runlevel)
4: unused / user-definable
5: graphical mode (GUI)
6: reboot (never set initdefault to 6)
systemctl get-default
systemctl set-default multi-user.target
System: reboot, shutdown, top, free, ps aux, startx
shutdown -r now
shutdown -h now
top -o %MEM
free -mt
ps aux | head -n 10
ps aux | sort -k4nr | head -n 10   (top 10 processes by memory)
ps aux | sort -k3nr | head -n 10   (top 10 processes by CPU)
startx
Hostname setup
sudo vi /etc/sysconfig/network
hostnamectl set-hostname cch128
vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=cch128.com
/etc/rc.d/init.d/network restart
[root@localhost ~]# hostname
localhost.localdomain
Granting sudo to a user
vi /etc/sudoers
Java process PID; list installed Java packages: rpm -qa | grep java
[cch@cch128 bin]$ java -version
openjdk version "1.7.0_75"
OpenJDK Runtime Environment (build 1.7.0_75-b13)
OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)
echo $JAVA_HOME
Reinstall JDK 1.8
rpm -e --nodeps java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
tar -zxvf jdk-8u144-linux-x64.tar.gz
Add to /etc/profile:
export JAVA_HOME=/home/look/dev-software/jdk1.8.0_144
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
Remote file transfer: scp myhistory.txt root@192.168.31.20:/root
Upload files to the server via SFTP (SecureCRT)
Disable the firewall
sudo service iptables stop     (CentOS 6)
sudo service iptables status
systemctl stop firewalld.service     (CentOS 7)
systemctl disable firewalld.service
Or open individual ports instead:
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
Hadoop configuration
Add to /etc/profile:
export HADOOP_HOME=/home/cch/app/hadoop-2.4.1
hadoop namenode -format
jps
Hadoop HDFS commands
[hadoop@master ~]$ hadoop version
Hadoop 2.10.1
Subversion https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816
Compiled by centos on 2020-09-14T13:17Z
Compiled with protoc 2.5.0
From source with checksum 3114edef868f1f3824e7d0f68be03650
This command was run using /home/hadoop/app/hadoop-2.10.1/share/hadoop/common/hadoop-common-2.10.1.jar
hadoop fs -put jdk_ri-7u75-b13-linux-x64-18_dec_2014.tar.gz hdfs://cch128:9000/
Hadoop installation: hdfs namenode -format
which hadoop
start-all.sh
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-namenode-master.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.10.1/logs/yarn-hadoop-resourcemanager-master.out
localhost: starting nodemanager, logging to /home/hadoop/app/hadoop-2.10.1/logs/yarn-hadoop-nodemanager-master.out
http://192.168.25.129:50070/
Hadoop installed successfully
http://192.168.25.129:50070/explorer.html#/
http://192.168.25.129:8088/cluster
Run the example: hadoop jar hadoop-mapreduce-examples-2.10.1.jar pi 2 2
[hadoop@master mapreduce]$ pwd
/home/hadoop/app/hadoop-2.10.1/share/hadoop/mapreduce
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.10.1.jar pi 2 2
Number of Maps = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
21/10/21 18:40:07 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.25.129:8032
21/10/21 18:40:08 INFO input.FileInputFormat: Total input files to process : 2
21/10/21 18:40:09 INFO mapreduce.JobSubmitter: number of splits:2
21/10/21 18:40:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634812594012_0001
21/10/21 18:40:10 INFO conf.Configuration: resource-types.xml not found
21/10/21 18:40:10 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/10/21 18:40:10 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/10/21 18:40:10 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/10/21 18:40:10 INFO impl.YarnClientImpl: Submitted application application_1634812594012_0001
21/10/21 18:40:10 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1634812594012_0001/
21/10/21 18:40:10 INFO mapreduce.Job: Running job: job_1634812594012_0001
21/10/21 18:40:18 INFO mapreduce.Job: Job job_1634812594012_0001 running in uber mode : false
21/10/21 18:40:18 INFO mapreduce.Job: map 0% reduce 0%
21/10/21 18:40:23 INFO mapreduce.Job: map 50% reduce 0%
21/10/21 18:40:26 INFO mapreduce.Job: map 100% reduce 0%
21/10/21 18:40:32 INFO mapreduce.Job: map 100% reduce 100%
21/10/21 18:40:33 INFO mapreduce.Job: Job job_1634812594012_0001 completed successfully
21/10/21 18:40:33 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=50
FILE: Number of bytes written=629943
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=526
HDFS: Number of bytes written=215
HDFS: Number of read operations=11
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=2
Launched reduce tasks=1
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=4835
Total time spent by all reduces in occupied slots (ms)=3949
Total time spent by all map tasks (ms)=4835
Total time spent by all reduce tasks (ms)=3949
Total vcore-milliseconds taken by all map tasks=4835
Total vcore-milliseconds taken by all reduce tasks=3949
Total megabyte-milliseconds taken by all map tasks=4951040
Total megabyte-milliseconds taken by all reduce tasks=4043776
Map-Reduce Framework
Map input records=2
Map output records=4
Map output bytes=36
Map output materialized bytes=56
Input split bytes=290
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=56
Reduce input records=4
Reduce output records=0
Spilled Records=8
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=239
CPU time spent (ms)=1490
Physical memory (bytes) snapshot=801165312
Virtual memory (bytes) snapshot=6371180544
Total committed heap usage (bytes)=493355008
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=236
File Output Format Counters
Bytes Written=97
Job Finished in 25.23 seconds
Estimated value of Pi is 4.00000000000000000000
[hadoop@master mapreduce]$
RPC (remote procedure call), the ClientProtocol interface, and the underlying mechanism
Hadoop RPC: dynamic proxies + sockets
LoginServiceInterface
public interface LoginServiceInterface {
    public static final long versionID = 1L;
    public String login(String username, String password);
}
LoginServiceImpl
public class LoginServiceImpl implements LoginServiceInterface {
    @Override
    public String login(String username, String password) {
        return username + " logged in successfully!";
    }
}
Starter (server side): RPC.Builder, server.start()
import java.io.IOException;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;

public class Starter {
    public static void main(String[] args) throws HadoopIllegalArgumentException, IOException {
        // Build an RPC server that exposes LoginServiceInterface on cch:10096
        RPC.Builder builder = new RPC.Builder(new Configuration());
        builder.setBindAddress("cch")
               .setPort(10096)
               .setProtocol(LoginServiceInterface.class)
               .setInstance(new LoginServiceImpl());
        //builder.setSecretManager(new TokenIdentifier)
        Server server = builder.build();
        server.start();
    }
}
Client side
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

public class LoginController {
    public static void main(String[] args) throws Exception {
        // Same pattern DFSClient uses to obtain its ClientProtocol proxy to the NameNode
        LoginServiceInterface proxy = RPC.getProxy(LoginServiceInterface.class,
                1L,
                new InetSocketAddress("cch", 10096),
                new Configuration());
        String result = proxy.login("mijie", "123456");
        System.out.println(result);
        RPC.stopProxy(proxy);
    }
}
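To try it, start Starter on the host cch first, then run LoginController: the call on the proxy goes over a Hadoop IPC socket to the server, which dispatches it to LoginServiceImpl. This is the same dynamic-proxy mechanism DFSClient relies on when it talks to the NameNode through the ClientProtocol interface.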
Download mirror for the installation packages: https://mirrors.tuna.tsinghua.edu.cn/apache/
Common Hadoop commands
/home/java/java-se-8u41-ri/bin
hadoop fs -put word.txt /wordcount/input
hadoop jar app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfsadmin -safemode leave
stop-all.sh
start-all.sh
hadoop fs -mkdir /wordcount/input
hadoop fs -rm -r /wordcount/output
hadoop fs -chmod -R 777 /
hadoop fs -df -h /wordcount
hadoop fs -du -s -h hdfs://master:9000/*
hadoop fs -rm -r /..
./hdfs dfs -chmod -R 755 /tmp
MapReduce job hangs after submission (stuck at "Running job"): give the NodeManagers more resources in yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>3072</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>2</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>256</value>
</property>
HDFS file storage
File upload / write pipeline
SSH login with public/private keys
ssh master
ssh-keygen -t rsa
/home/hadoop/.ssh/id_rsa
cd /home/hadoop/.ssh/
ll -a
touch authorized_keys
chmod 600 authorized_keys
cat id_rsa.pub >> authorized_keys
ssh master
Adding a user to sudoers
Goal: give the user jack the right to use sudo.
1. Switch to the superuser root
$ su root
2. Check the permissions on /etc/sudoers; they are currently 440
$ ls -all /etc/sudoers
-r--r----- 1 root root 744 Jun 8 10:29 /etc/sudoers
3. Change the permissions to 777
$ chmod 777 /etc/sudoers
4. Edit /etc/sudoers
$ vi /etc/sudoers
5. Below the line "root ALL=(ALL:ALL) ALL", add:
jack ALL=(ALL) ALL
then save and exit.
The first ALL refers to the hosts (we later changed it to our hostname); it says jack may run the following commands on this host.
The ALL in parentheses is the target user, i.e. whose identity the commands run under.
The last ALL is the set of commands allowed.
No further detail on that here.
6. Change the permissions on /etc/sudoers back to 440
$ chmod 440 /etc/sudoers
7. Done; switch to user jack and test it.
scp id_rsa.pub spark01:/home/hadoop
600 permissions (authorized_keys must be 600)
NameNode, SecondaryNameNode: metadata, edits log, fsimage
The NameNode holds the HDFS metadata, such as namespace information and block information. While it is running this metadata lives in memory, but it is also persisted to disk (fsimage plus the edits log).
SecondaryNameNode: checkpoint
The NameNode manages the metadata; the SecondaryNameNode periodically merges the edits log into the fsimage (a checkpoint) so the metadata stays persisted.
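A quick way to see the metadata the NameNode serves is to ask it for a file's block locations from a client. A minimal sketch, assuming the cluster from these notes (fs.defaultFS = hdfs://master:9000) and using a made-up file path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://master:9000/");
        FileSystem fs = FileSystem.get(conf);

        // /wordcount/input/word.txt is just an example; any existing HDFS file works
        FileStatus status = fs.getFileStatus(new Path("/wordcount/input/word.txt"));
        // Block locations come straight from the NameNode's in-memory metadata
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
            System.out.println("offset=" + b.getOffset()
                    + " length=" + b.getLength()
                    + " hosts=" + String.join(",", b.getHosts()));
        }
        fs.close();
    }
}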
HDFS client: writing data to HDFS and the replication pipeline
/home/hado/dfs/data/current/BP-1627168943-192.168.25.129-1633922507094/current/finalized/subdir0/subdir0
Code trace: temporary data
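To walk the write and replication path in a debugger, a small client that writes one file is enough. A minimal sketch, assuming the same master:9000 NameNode; the local path, HDFS path, and the dfs.replication value are just examples:

import java.io.FileInputStream;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://master:9000/");
        conf.set("dfs.replication", "2");   // example replica count; the default comes from hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        FileInputStream in = new FileInputStream("D:/java/hadoop/word.txt");
        FSDataOutputStream out = fs.create(new Path("/wordcount/input/word.txt"));
        IOUtils.copy(in, out);   // data is chunked into blocks and pipelined to the DataNodes
        in.close();
        out.close();             // close() waits for the replica pipeline to acknowledge
        fs.close();
    }
}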
Accessing HDFS remotely from Eclipse
winutils.exe
-DHADOOP_USER_NAME=hadoop
HdfsUtil FileSystem.get(conf)
import java.io.FileOutputStream;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void main(String[] args) throws Exception {
    // Point the client at the NameNode, then copy a file from HDFS to the local disk
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://master:9000/");
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream is = fs.open(new Path("/jdk-7u65-linux-i586.tar"));
    FileOutputStream os = new FileOutputStream("D:/java/hadoop/jdk-7u65-linux-i586.tar");
    IOUtils.copy(is, os);
}
FileSystem.get(conf) call flow
FileSystem.class
    CACHE.get()
    createFileSystem(uri, conf)
        clazz = getFileSystemClass()
        fs.initialize(uri, conf)
DistributedFileSystem.class
    initialize(): this.dfs = new DFSClient(...)
DFSClient.class
    ClientProtocol namenode
    DFSClient() {
        proxyInfo = NameNodeProxies.createProxyWithLossyRetryHandler(..., nameNodeUri, ClientProtocol.class, ...);
        // NameNodeProxiesClient in newer Hadoop versions
        this.namenode = proxyInfo.getProxy();
    }
fs.open(): how the input stream is opened
FSDataInputStream is = fs.open(new Path("/jdk-7u65-linux-i586.tar"));
LocatedBlock{BP-...:blk_1032; blockSize()=13; corrupt=false; offset=0; locs=[192.200...:50010]}
BlockReader
DFSInputStream (wrapped in the FSDataInputStream returned by fs.open())
DistributedFileSystem.open()
    ((DFSClient) fs.dfs).open(src) {
        return new DFSInputStream(this, src, verifyChecksum, null);
    }
DFSInputStream(this, src, verifyChecksum, null) {
    openInfo(false);
}
openInfo(false) {
    fetchLocatedBlocksAndGetLastBlockLength();
}
fetchLocatedBlocksAndGetLastBlockLength() {
    LocatedBlocks newInfo = dfsClient.getLocatedBlocks(src, 0);
}
getLocatedBlocks() {
    // ClientProtocol namenode
    callGetBlockLocations(namenode, src, start, length) {
        namenode.getBlockLocations(src, start, length);
    }
}
maven hadoop hdfs
hadoop-common
hadoop-hdfs
hadoop-mapreduce-client-core
hadoop-mapreduce-client-jobclient
hadoop-mapreduce-client-common
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.10.1</version>
</dependency>
MapReduce on YARN job flow: job, ResourceManager, NodeManager
job
RunJar
container
MRAppMaster
YarnChild (map task, reduce task)
ResourceManager, NodeManager (one per node), container
MapReduce: MRAppMaster -> container -> YarnChild (allocated dynamically)
job.waitForCompletion(true)
RunJar (the client process)
RM: receives the job submission
RM: returns a staging-dir and a job id
HDFS: job resources are uploaded to /yarn-staging-dir/jobid
RM: the job enters the scheduling queue
NM: picks up the task
RM: allocates containers
RM + NM: launch MRAppMaster (which starts up and registers with the RM)
MRAppMaster: launches map tasks (YarnChild)
MRAppMaster: launches reduce tasks (YarnChild)
MRAppMaster: deregisters when the job finishes
jps during a run shows RunJar, MRAppMaster, YarnChild
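The chain above is kicked off by a driver calling job.waitForCompletion(). A minimal driver sketch, reusing the /wordcount/input and /wordcount/output paths and the mapreduce.job.jar trick that appear later in these notes; the built-in TokenCounterMapper and IntSumReducer stand in for custom classes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Only needed when submitting from an IDE instead of "hadoop jar wc.jar ...":
        // conf.set("mapreduce.job.jar", "wc.jar");

        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordCountDriver.class);
        // Built-in classes: TokenCounterMapper emits (word, 1), IntSumReducer sums the 1s
        job.setMapperClass(TokenCounterMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path("/wordcount/input"));
        FileOutputFormat.setOutputPath(job, new Path("/wordcount/output"));

        // Blocks until the ResourceManager reports completion; progress is printed to the console
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Whether this goes to the cluster or to LocalJobRunner depends on mapreduce.framework.name, which is exactly what the property below sets.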
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
The url to track the job: http://master:8088/proxy/application_1633953034745_0004/
YarnClientImpl:303 - Submitted application application_1633953034745_0004
conf.set("mapreduce.job.jar", "wc.jar");
YARNRunner.class
Submitting tokens for job: job_local1671978932_0001
15:22:48,637 INFO JobSubmitter:262 - Cleaning up the staging area file:/tmp/hadoop-华/mapred/staging/
Local mode vs. YARN cluster mode
LocalJobRunner vs. YARNRunner
public class YARNRunner implements ClientProtocol {
(this ClientProtocol is org.apache.hadoop.mapreduce.protocol.ClientProtocol)
YARNRunner call flow
Input splits, the shuffle, and how data moves from map to reduce
input -> split -> map -> buffer -> partition/sort -> spill -> merge -> (copy to reducer) -> merge/sort -> reduce -> output
InputFormat, OutputFormat: tracing the split code
InputFormat -> getSplits()
How splits are computed at job-submission time
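To see the shuffle from the code side, here is a mapper and reducer pair equivalent to the built-ins used in the driver sketch above; the class names are made up:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// One mapper instance per input split; map() is called once per line of the split
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer it = new StringTokenizer(line.toString());
        while (it.hasMoreTokens()) {
            word.set(it.nextToken());
            context.write(word, ONE);   // goes into the buffer, then partition/sort/spill
        }
    }
}

// The shuffle groups all values with the same key into a single reduce() call
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable c : counts) {
            sum += c.get();
        }
        context.write(word, new IntWritable(sum));
    }
}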
ZooKeeper
Dubbo: service registration, naming service
Hadoop HA: ZooKeeper cluster, zkfc, JournalNode (qjournal)
zkfc, federation
What is zkfc? The ZooKeeper Failover Controller
What it is: a Hadoop utility that implements failover control (FC) through ZooKeeper.
Main role: it acts as a client of the ZooKeeper cluster and monitors the NameNode's state.
Who runs it? Every node that runs a NameNode must also run a zkfc.
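zkfc itself ships with Hadoop, but the ZooKeeper pattern it relies on (grab an ephemeral lock znode and watch it) is easy to see with the plain ZooKeeper Java client. A rough sketch under that assumption; the connect string and znode path are made up, and this is not the actual zkfc code:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ActiveLockDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Example quorum; zkfc reads the real one from ha.zookeeper.quorum
        ZooKeeper zk = new ZooKeeper("weekend01:2181,weekend02:2181,weekend03:2181", 5000,
                event -> {
                    if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                });
        connected.await();

        String lockPath = "/demo-active-lock";   // made-up path, not the real zkfc znode
        if (zk.exists(lockPath, false) == null) {
            // Ephemeral node: it disappears automatically if this session (our "NameNode") dies
            zk.create(lockPath, "nn1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            System.out.println("became active");
        } else {
            // Standby side: watch the lock and get notified when the active's session expires
            zk.exists(lockPath, event -> {
                if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                    System.out.println("active is gone, trigger failover");
                }
            });
        }
        Thread.sleep(60_000);   // keep the session alive long enough to observe
        zk.close();
    }
}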
Hadoop HA deployment
ssh-copy-id weekend02
ssh-keygen -t rsa
scp -r /weekend/hadoop-2.4.1/ hadoop@weekend04:/weekend/
./zkServer.sh start
sbin/hadoop-daemon.sh start journalnode
hdfs namenode -format
scp -r tmp/ weekend02:/home/hadoop/app/hadoop-2.4.1/
hdfs zkfc -formatZK
sbin/start-dfs.sh
sbin/start-yarn.sh
less *.log   (check the logs under $HADOOP_HOME/logs)
pig
hive
hive-site.xml: alternative javax.jdo.option.ConnectionURL values (embedded Derby vs. a MySQL metastore)
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=metastore_db;create=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.50.56:3306/hive?nullCatalogMeansCurrent=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://weekend01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://192.168.50.56:3306/hive?createDatabaseIfNotExist=true&amp;nullCatalogMeansCurrent=true</value>
</property>
hdfs://master:9000/user/hive/warehouse
SHOW VARIABLES LIKE 'char%';
ALTER DATABASE hive CHARACTER SET latin1;
SELECT * FROM mysql.user;
UPDATE mysql.user SET HOST = '%' WHERE USER = 'root';
FLUSH PRIVILEGES;
A select count(*) in Hive launches a MapReduce job
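For example, running such a query through the Hive JDBC driver. This assumes HiveServer2 is running on master:10000 (hive --service hiveserver2), the hive-jdbc dependency is on the classpath, and the table name t_test is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveCountDemo {
    public static void main(String[] args) throws Exception {
        // Standard Hive JDBC driver class
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://master:10000/default", "hadoop", "");
             Statement stmt = conn.createStatement();
             // The count(*) below is compiled into a MapReduce job and runs on YARN
             ResultSet rs = stmt.executeQuery("select count(*) from t_test")) {
            if (rs.next()) {
                System.out.println("rows = " + rs.getLong(1));
            }
        }
    }
}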
<property>
<name>hive.metastore.schema.verification</name>
<value>true</value>
</property>
Data warehouse, data mart
edw odb adb
hive spark
hadoop ecosystem
distributed search engine: Elasticsearch
distributed file system: HDFS
distributed message queue: Kafka
cache database: Redis, and so on
HBase
HBase and Hadoop version compatibility
chown hadoop:hadoop -R
./hive --service metastore
./schematool -dbType mysql -initSchema
./hive --service metastore
hdfs namenode -format
create database wk110;
show databases;