Big Data Learning 2 — The Distributed File System HDFS

Software version

hadoop-2.6.0-cdh5.7.0.tar.gz

Contents:

1. The Distributed File System HDFS
2. HDFS Strengths and Weaknesses
3. Design Ideas Behind Distributed File Systems
4. HDFS Architecture
5. Downloading Hadoop and Installing the JDK
6. Installing SSH and Configuring HDFS
7. HDFS Shell Operations
8. Programming with the HDFS Java API
9. HDFS Read and Write Data Flows
10. New Features in Hadoop
11. A Hands-on HDFS Log Collection Case


1. The Distributed File System HDFS

 

1) Once a dataset reaches a certain scale, a single machine can no longer process it.

2) So the data is spread across independent machines (many machines working together).

 

For reference material, see the official site: hadoop.apache.org

 



 

 

Introduction

HDFS (the Hadoop Distributed File System) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems, but the differences are also significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications with large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. It was originally built as infrastructure for the Apache Nutch web search engine project and is now part of the Apache Hadoop core project. The project URL is http://hadoop.apache.org/.

 

2. HDFS Strengths and Weaknesses

A distributed file system:
1) Once a dataset reaches a certain scale, a single machine can no longer process it.
2) So the data is spread across independent machines (many machines working together).

Strengths: 1. Built on cheap machines. 2. Suited to big-data processing. 3. Highly fault-tolerant.

From the official documentation

1. Hardware Failure

Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. With such a huge number of components, each having a non-trivial probability of failure, some component of HDFS is always non-functional. Therefore, detecting faults and recovering from them quickly and automatically is a core architectural goal of HDFS.

2. Streaming Data Access

Applications that run on HDFS need streaming access to their data sets; they are not the general-purpose applications that typically run on general-purpose file systems. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency. POSIX imposes many hard requirements that are not needed by applications targeted at HDFS, and POSIX semantics in a few key areas have been traded away to increase data throughput rates.

3. Large Data Sets

Applications that run on HDFS have large data sets; a typical file in HDFS is gigabytes to terabytes in size. HDFS is therefore tuned to support large files. It should provide high aggregate data bandwidth, scale to hundreds of nodes in a single cluster, and support tens of millions of files in a single instance.

4. Simple Coherency Model

HDFS applications need a write-once-read-many access model for files. A file, once created, written, and closed, need not be changed except by appends and truncates. Content can be appended to the end of a file, but the file cannot be updated at an arbitrary offset. This assumption simplifies data coherency issues and enables high-throughput data access. A MapReduce application or a web crawler application fits this model perfectly. (A small append sketch follows.)
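To make the write-once/append-only model concrete, here is a minimal, hedged Java sketch. It assumes the hdfs://hadoop000:8020 address and the hadoop user from the single-node setup built later in this article, and a hypothetical /coherency-demo.txt path: a file is created and closed, and content can then only be added at the end.

package com.kgc.hadoop;

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendOnlyDemo {
    public static void main(String[] args) throws Exception {
        // Connect as user hadoop to the single-node cluster configured later in this article.
        FileSystem fs = FileSystem.get(
                new URI("hdfs://hadoop000:8020"), new Configuration(), "hadoop");

        Path p = new Path("/coherency-demo.txt");   // hypothetical example path

        // Write once: create, write, close. After close() the written bytes are immutable...
        FSDataOutputStream out = fs.create(p, true);
        out.write("first line\n".getBytes("UTF-8"));
        out.close();

        // ...except that appending to the end of the file is allowed.
        FSDataOutputStream appendOut = fs.append(p);
        appendOut.write("appended line\n".getBytes("UTF-8"));
        appendOut.close();

        // There is no API for overwriting bytes in the middle of an existing file.
        fs.close();
    }
}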

5. "Moving Computation is Cheaper than Moving Data"

A computation requested by an application is much more efficient if it executes near the data it operates on, and this matters most when the data set is huge: it minimizes network congestion and increases overall system throughput. The assumption is that it is often better to migrate the computation closer to where the data is located than to move the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to the data.

6. Portability Across Heterogeneous Hardware and Software Platforms

HDFS is designed to be easily portable from one platform to another, which helps its wide adoption as the platform of choice for a large set of applications.

 

Weaknesses

1. Not suited to low-latency data access. 2. Not suited to storing large numbers of small files.

3. Design Ideas Behind Distributed File Systems

Design of an ordinary distributed file system

1. Each file is stored whole on a single node.

2. Each file is stored with multiple replicas.

3. Load is hard to balance.

For example: file 1 is 0.5 TB, file 2 is 1 TB, file 3 is 20 GB, file 4 is 50 GB.

If the first machine goes down it does not matter, because the other machines still hold replicas.


Drawbacks: the amount of data on each node differs, so machine utilization is uneven; network bottlenecks appear; and it is not friendly to parallel processing.

Design of the HDFS distributed file system

1. Split each file into multiple blocks.

2. Store each block with multiple replicas across the nodes.

3. Keep the metadata mapping (which blocks make up which file, and where each block lives).

4. Load is balanced.

5. Enables distributed, parallel computation.


Example: a 50 GB file is split into blocks of 128 MB each, i.e. 50 * 1024 / 128 = 400 blocks.



4. HDFS Architecture


HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, the master server that manages the file system namespace and regulates client access to files. In addition there are a number of DataNodes, usually one per node in the cluster, which manage the storage attached to the nodes they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks, and these blocks are stored on a set of DataNodes. The NameNode executes namespace operations such as opening, closing, and renaming files and directories, and it determines the mapping of blocks to DataNodes. The DataNodes serve read and write requests from the file system's clients, and they also perform block creation, deletion, and replication on instruction from the NameNode.

 

The NameNode and DataNode are pieces of software designed to run on commodity machines, which typically run a GNU/Linux operating system. HDFS is built in Java, so any machine that supports Java can run the NameNode or DataNode software. A typical deployment has a dedicated machine that runs only the NameNode software, while every other machine in the cluster runs one DataNode instance. The architecture does not preclude running multiple DataNodes on the same machine, but in a real deployment that is rarely done.

 

HDFS file replica placement

  

 

The File System Namespace

HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside them. The namespace hierarchy is similar to most other existing file systems: one can create and remove files, move a file from one directory to another, or rename a file.

The NameNode maintains the file system namespace, and any change to the namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that HDFS should maintain; this number is called the file's replication factor, and it is stored by the NameNode.

 

Data Replication

HDFS is designed to reliably store very large files across the machines of a large cluster. It stores each file as a sequence of blocks, and the blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file.

All blocks in a file except the last one are the same size; since variable-length block support was added to append and hsync, users can also start a new block without filling the last block up to the configured block size.

An application can specify a file's replication factor, either at creation time or later. Files in HDFS are write-once (except for appends and truncates) and have strictly one writer at any time.

The NameNode makes all decisions regarding block replication. It periodically receives a Heartbeat and a Blockreport from every DataNode in the cluster; receiving a Heartbeat implies the DataNode is functioning properly, and a Blockreport lists all blocks stored on that DataNode. (A small replication-factor sketch follows.)
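As a small, hedged sketch of per-file replication factors, the code below reads a file's replication factor and asks the NameNode to change it. The hdfs://hadoop000:8020 address, the hadoop user, and the /input/NOTICE.txt path are assumptions reusing examples that appear later in this article.

package com.kgc.hadoop;

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                new URI("hdfs://hadoop000:8020"), new Configuration(), "hadoop");

        Path file = new Path("/input/NOTICE.txt");  // assumed to exist (uploaded in section 7)

        // Read the replication factor currently recorded by the NameNode.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("current replication: " + status.getReplication());

        // Ask HDFS to keep 2 replicas of this file from now on.
        // On the single-DataNode setup used in this article only 1 replica can actually
        // be placed, so the block would simply show up as under-replicated.
        boolean accepted = fs.setReplication(file, (short) 2);
        System.out.println("setReplication accepted: " + accepted);

        fs.close();
    }
}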

 

My own summary:

One Master (NameNode/NN) with N Slaves (DataNode/DN).

HDFS, YARN, and HBase all follow a similar structure.


A file is split into multiple blocks.

Block size: 128 MB

130 MB ==> 2 blocks: 128 MB and 2 MB (see the block-splitting sketch after this summary)


NN (NameNode):

1) Responds to client requests.

2) Manages metadata (file names, replication factors, and which DNs hold each block).


DN (DataNode):

1) Stores the data blocks that make up users' files.

2) Periodically sends heartbeats to the NN, reporting itself, all of its blocks, and its health.


1 NameNode + N DataNodes

Recommendation: deploy the NN and the DNs on different nodes.

Replication factor: the number of replicas kept of each block.

All blocks in a file except the last block are the same size.
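The block-splitting arithmetic above can be checked with a tiny stand-alone sketch (plain Java, no Hadoop dependency; the 128 MB block size and the 130 MB / 50 GB file sizes are just the examples used in this summary):

public class BlockSplitDemo {

    static final long BLOCK_SIZE = 128L * 1024 * 1024; // 128 MB, the block size used in this article

    // Returns {numberOfBlocks, sizeOfLastBlockInBytes} for a file of the given length.
    static long[] split(long fileLength) {
        long fullBlocks = fileLength / BLOCK_SIZE;
        long remainder = fileLength % BLOCK_SIZE;
        long blocks = fullBlocks + (remainder > 0 ? 1 : 0);
        long lastBlock = remainder > 0 ? remainder : BLOCK_SIZE;
        return new long[]{blocks, lastBlock};
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long[] f130m = split(130 * mb);         // 130 MB -> 2 blocks (128 MB + 2 MB)
        long[] f50g  = split(50L * 1024 * mb);  // 50 GB  -> 400 blocks of 128 MB
        System.out.println("130 MB: " + f130m[0] + " blocks, last block " + f130m[1] / mb + " MB");
        System.out.println("50 GB : " + f50g[0] + " blocks, last block " + f50g[1] / mb + " MB");
    }
}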


5. Downloading Hadoop and Installing the JDK

Hadoop download:
http://archive.cloudera.com/cdh5/cdh/5/

Environment:
user/password: hadoop/hadoop
hostname: hadoop000

software: all downloaded installation packages
data: all test data
app: installation directory for all software
source: all source code

SSH client: MobaXterm

Change the hadoop user's password:

[hadoop@bogon ~]$ su
Password:
[root@bogon hadoop]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
Change the hostname

Temporary change:

hostname hadoop000

Permanent change: edit /etc/sysconfig/network as below, then reboot the virtual machine:

NETWORKING=yes
HOSTNAME=hadoop000
Download the Hadoop package
[hadoop@bogon ~]$ pwd
/home/hadoop
[hadoop@bogon ~]$ mkdir software
[hadoop@bogon ~]$ cd software/
[hadoop@bogon software]$ wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz

Download JDK 7

Go to the JDK page on Oracle's site: http://www.oracle.com/technetwork/java/javase/overview/index.html


Scroll to the bottom, find "Java SE Site Map", and click through.





Downloading requires an Oracle account.

Alternative download link: https://download.csdn.net/download/huanyuminhao/10002982

After the downloads finish, /home/hadoop/software contains two packages: hadoop-2.6.0-cdh5.7.0.tar.gz and jdk-7u79-linux-x64.tar.gz.

Configure the Java environment

Extract the JDK archive:

tar -zxvf jdk-7u79-linux-x64.tar.gz -C ~/app

[hadoop@hadoop000 ~]$ vi ~/.bash_profile
[hadoop@hadoop000 ~]$ source ~/.bash_profile
[hadoop@hadoop000 ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[hadoop@hadoop000 ~]$ echo $JAVA_HOME
/home/hadoop/app/jdk1.7.0_79

Contents of ~/.bash_profile:

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH

vi ~/.bash_profile: edit the file
source ~/.bash_profile: make the changes take effect immediately
java -version: print the Java version
echo $JAVA_HOME: print the value of the JAVA_HOME environment variable


6. Installing SSH and Configuring HDFS

Outline:

Install ssh
Machine configuration
HDFS configuration file settings
hadoop-env.sh
core-site.xml
hdfs-site.xml
Formatting, starting, and stopping HDFS
A simple HDFS test


Prerequisites
Linux

Required software
1.Java

2.ssh

Install ssh

[hadoop@hadoop000 ~]$ yum install ssh
Loaded plugins: fastestmirror, langpacks
You need to be root to perform this command.
[hadoop@hadoop000 ~]$ sudo yum install ssh

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for hadoop:
Sorry, try again.
[sudo] password for hadoop:
Loaded plugins: fastestmirror, langpacks
base                                            | 3.6 kB     00:00
extras                                          | 3.4 kB     00:00
updates                                         | 3.4 kB     00:00
(1/4): extras/7/x86_64/primary_db                 | 125 kB   00:03
(2/4): updates/7/x86_64/primary_db                | 1.0 MB   00:04
(3/4): base/7/x86_64/primary_db                   | 5.9 MB   00:05
(4/4): base/7/x86_64/group_gz                     | 166 kB   00:06
Determining fastest mirrors
 * base: mirrors.nju.edu.cn
 * extras: mirrors.163.com
 * updates: mirrors.163.com
No package ssh available.
Error: Nothing to do
[hadoop@hadoop000 ~]$ sudo service sshd start
Redirecting to /bin/systemctl start sshd.service
yum install ssh: install ssh
sudo yum install ssh: installing requires root privileges, so prefix the command with sudo (on CentOS 7 the server package is actually openssh-server; here sshd was already installed, so starting the service is enough)
sudo service sshd start: start sshd; Hadoop requires a running sshd

Machine configuration

1.hostname:hadoop000

[hadoop@hadoop000 ~]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop000

2. Map the IP address to the hostname:

[root@hadoop000 hadoop]# cat /etc/hosts
192.168.253.128 hadoop000
192.168.253.128 localhost

3. Configure passwordless SSH login.

Why passwordless? Because the HDFS start/stop scripts log in to every node (NameNode and DataNodes) over SSH, and they must not prompt for a password.

[hadoop@hadoop000 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9NFjxnyVM1wnvSnftj2zcHxYYeoYDMScSwZm70Hb2nM hadoop@hadoop000
The key's randomart image is:
+---[RSA 2048]----+
|       ++o.   o.*|
|      o +*o+   Bo|
|        +=o.B .o=|
|       ..o*+ +oo.|
|        So.= Eo o|
|            * .+o|
|           . o.++|
|              o+o|
|               .+|
+----[SHA256]-----+
[hadoop@hadoop000 .ssh]$ cd
[hadoop@hadoop000 ~]$ cd .ssh
[hadoop@hadoop000 .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@hadoop000 .ssh]$ cp id_rsa.pub authorized_keys
[hadoop@hadoop000 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDiQzqMiW9UF85eEHsGKRUu+4wBr6VvYbD                                                                             nzD+WDP4CIVnTMD7WQMWO+oD9fw1QNPGh2VXwe448/j/dKCSf3C+SYDSyOFiGGH85niV7cW                                                                             QbGAI/HO8Rqgl+maSYwD97Dfcp0YusANbHm1iru4S35M0jOFbQgLCXZ4IiHjxJ2D/a0x03Z                                                                             jkcSINCgoUp8hAqQCwA5Qjp6TA6S12h2D5MV353rjSBevFKyYBDN6Dixqadwc2HP3fRQrli                                                                             U4x6uMXWCVI+0QbWtPvM3citQpmyVJq5d7UO/U1HfG5PoXRDuur/HjFUo/4mbnpmVJPpttQ                                                                             rIOBFVZpG9wtp3/0kHJd62407 hadoop@hadoop000
[hadoop@hadoop000 .ssh]$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:HiKJS/Mdj2GgyqxbsMnSXSZYwc9TQ1eIIF/dXU4                                                                             2Ct4.
ECDSA key fingerprint is MD5:67:15:28:34:9d:16:b2:6f:de:65:80:f8:da:6f:                                                                             1a:a2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hos                                                                             ts.
Last login: Wed May 16 16:24:33 2018
[hadoop@hadoop000 ~]$ ssh localhost
Last login: Wed May 16 16:25:53 2018 from 127.0.0.1
[hadoop@hadoop000 ~]$ ssh hadoop000
The authenticity of host 'hadoop000 (192.168.253.128)' can't be establi                                                                             shed.
ECDSA key fingerprint is SHA256:HiKJS/Mdj2GgyqxbsMnSXSZYwc9TQ1eIIF/dXU4                                                                             2Ct4.
ECDSA key fingerprint is MD5:67:15:28:34:9d:16:b2:6f:de:65:80:f8:da:6f:                                                                             1a:a2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop000,192.168.253.128' (ECDSA) to the l                                                                             ist of known hosts.
Last login: Wed May 16 16:26:05 2018 from 127.0.0.1
[hadoop@hadoop000 ~]$ ssh hadoop000
Last login: Wed May 16 16:26:18 2018 from 192.168.253.128
ssh-keygen -t rsa: generates the public and private key files; press Enter at every prompt. When it finishes, two files appear under ~/.ssh.
cp id_rsa.pub authorized_keys: authorizes the key so the machine trusts itself.
ssh localhost: verifies passwordless login; the first time you run it you must answer yes, after that you are no longer asked.

HDFS configuration file settings

Extract the Hadoop archive:

[hadoop@hadoop000 software]$ tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ../app

A look at the extracted Hadoop directory layout:

~/app/hadoop-2.6.0-cdh5.7.0/bin: client commands (the *.cmd files are Windows scripts and can be deleted)
~/app/hadoop-2.6.0-cdh5.7.0/sbin: server-side scripts for starting and stopping the services
~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop: all of the Hadoop configuration files
~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar: bundled example jobs

1. vi ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hadoop-env.sh

Replace the line export JAVA_HOME=${JAVA_HOME} with:

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79

2.~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/core-site.xml

<property>
	<!-- URI of the default file system -->
	<name>fs.defaultFS</name>
	<value>hdfs://hadoop000:8020</value>
</property>
<property>
	<!-- hadoop.tmp.dir: where HDFS stores its data. If it is not set, the default lives under /tmp and may be cleared on reboot, so point it at /home/hadoop/app/tmp instead -->
	<name>hadoop.tmp.dir</name>
	<value>/home/hadoop/app/tmp</value>
</property>

3.~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hdfs-site.xml

<property>
	<!-- block replication factor for the cluster; the default is 3, but since there is only one machine here it is set to 1 -->
	<name>dfs.replication</name>
	<value>1</value>
</property>

4.~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/slaves

localhost

This file does not need to be changed; it lists the nodes on which DataNodes run, and for this single-machine setup that is just the local machine.

5.vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH

export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$PATH

6.source ~/.bash_profile

to make the environment variables take effect.

Formatting, starting, and stopping HDFS

1. Start the cluster
1) Format the file system:
~/app/hadoop-2.6.0-cdh5.7.0/bin/hdfs namenode -format


Note: run this only once; reformatting wipes all of the file system's data.


Afterwards a dfs directory appears under ~/app/tmp.


2) Start HDFS:
~/app/hadoop-2.6.0-cdh5.7.0/sbin/start-dfs.sh


If the command cannot be run from MobaXterm, run it directly inside the virtual machine.


3) Verify:
Method 1: run jps and check that the NameNode, DataNode, and SecondaryNameNode processes are up.
Method 2: open hadoop000:50070 in a browser.


Directories to look at:
~/app/tmp/dfs/data: DataNode block data
~/app/tmp/dfs/name: NameNode metadata
~/app/tmp/dfs/name/current: the actual metadata files




Problem 1:

Running ~/app/hadoop-2.6.0-cdh5.7.0/sbin/start-dfs.sh reports:
the authenticity of host '0.0.0.0 (0.0.0.0)' can't be established
Workaround:
sudo systemctl stop firewalld
For details see: https://blog.csdn.net/Post_Yuan/article/details/78603212

start-dfs.sh error


A quick HDFS test:
hadoop fs -put README.txt

List files:
hadoop fs -ls

View the contents:
hadoop fs -cat README.txt


7. HDFS Shell Operations

ls

mkdir,mkdir -p
put,copyFromLocal
get,copyToLocal
rm,rm -r


List the files and directories under the HDFS root:
hadoop fs -ls /


Create a directory named input:
hadoop fs -mkdir /input


Create nested directories in one go:
hadoop fs -mkdir -p /a/b


Recursively list directories and files:
hadoop fs -ls -R /


Upload the local file NOTICE.txt into the input directory:
hadoop fs -put NOTICE.txt /input/


View file contents:
hadoop fs -cat /input/NOTICE.txt
hadoop fs -text /input/NOTICE.txt


Like -put, copies a local file into HDFS; the target file name can be specified:
hadoop fs -copyFromLocal NOTICE.txt /input/a.txt


Like -copyFromLocal, but removes the local file after the upload:
hadoop fs -moveFromLocal NOTICE.txt /input/a.txt


Download a file from HDFS to the local file system:
hadoop fs -get /input/NOTICE.txt


Delete a file:
hadoop fs -rm /input/a.txt


hadoop fs -rm /input fails because /input is a directory; delete it recursively instead:
hadoop fs -rm -R /input


8. Programming with the HDFS Java API

Accessing the Linux server from Windows.

Add the following entry to C:\Windows\System32\drivers\etc\hosts:

192.168.32.134 hadoop000

If hadoop000:50070 still cannot be opened in the browser, turn off the firewall on the Linux side:

[root@djt002 ~]# service iptables status      // check the firewall status
[root@djt002 ~]# chkconfig iptables off       // disable the firewall permanently
[root@djt002 ~]# service iptables stop        // stop the firewall immediately
[root@djt002 ~]# service iptables status
iptables: Firewall is not running.

Another error:

org.apache.hadoop.security.AccessControlException: Permission denied: user=Administrator, access=WRITE, inode="/":hadoop:supergroup:drwxr-xr-x

Cause: the Windows user has no write permission on HDFS. One fix is to grant permissions on the target directory:

hadoop fs -mkdir /hdfsapi

hadoop fs -chmod 777 /hdfsapi

Another fix is to tell the client which user to act as:

fileSystem = FileSystem.get(new URI(HDFS_PATH), configuration, "hadoop"); // act as user hadoop

pom.xml

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.kgc.hadoop</groupId>
  <artifactId>kgc-hadoop-train</artifactId>
  <version>1.0-SNAPSHOT</version>

  <name>kgc-hadoop-train</name>
  <!-- FIXME change it to the project's website -->
  <url>http://www.example.com</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
<!-- versions of the log4j / slf4j logging dependencies -->
     <slf4j.version>1.7.7</slf4j.version>
     <log4j.version>1.2.17</log4j.version>

  </properties>

<!-- Cloudera remote Maven repository -->
  <repositories>
    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
  </repositories>

  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
      <scope>test</scope>
    </dependency>

<!-- logging dependencies -->
        <dependency>
              <groupId>log4j</groupId>
               <artifactId>log4j</artifactId>
               <version>${log4j.version}</version>
         </dependency>
         <dependency>
                 <groupId>org.slf4j</groupId>
                 <artifactId>slf4j-api</artifactId>
                 <version>${slf4j.version}</version>
          </dependency>
           <dependency>
                 <groupId>org.slf4j</groupId>
                 <artifactId>slf4j-log4j12</artifactId>
                <version>${slf4j.version}</version>
         </dependency>

  </dependencies>
</project>

The API example source code is as follows:

package com.kgc.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

/**
 * Examples of using the Hadoop HDFS Java API.
 */
public class HDFSApp {

    public static final String HDFS_PATH = "hdfs://hadoop000:8020";

    FileSystem fileSystem = null;           // HDFS file system handle
    Configuration configuration = null;     // Hadoop configuration

    /**
     * Load the resources before each unit test runs.
     */
    @Before
    public void setUp() throws URISyntaxException, IOException, InterruptedException {
        System.out.println("HDFSApp.setUp()");
        configuration = new Configuration();

        /**
         * If this throws "Permission denied: user=Administrator, access=WRITE, inode="/":hadoop:supergroup:drwxr-xr-x",
         * the current user has no write permission on HDFS. The fix used here is to connect as user hadoop.
         */
        //fileSystem = FileSystem.get(new URI(HDFS_PATH),configuration);
        fileSystem = FileSystem.get(new URI(HDFS_PATH),configuration,"hadoop");



    }

    /**
     * Create an HDFS directory.
     */
    @Test
    public void mkdir() throws IOException {
        fileSystem.mkdirs(new Path("/hdfsapi/test"));
    }

    /**
     * Create a file.
     * Check its contents with: hadoop fs -cat /hdfsapi/test/a.txt
     * @throws IOException
     */
    @Test
    public void create() throws IOException {
        FSDataOutputStream output = fileSystem.create(new Path("/hdfsapi/test/a.txt"));

        output.write("helloworld".getBytes());
        output.flush();
        output.close();
    }

    /**
     * Read a file.
     * @throws IOException
     */
    @Test
    public void cat() throws IOException {
        FSDataInputStream in = fileSystem.open(new Path("/hdfsapi/test/a.txt"));
        IOUtils.copyBytes(in,System.out,1024);
    }

    /**
     * Rename a file. Check the result with:
     * hadoop fs -lsr /
     * @throws IOException
     */
    @Test
   public void rename() throws IOException {
        Path newPath = new Path("/hdfsapi/test/b.txt");
        Path oldPath = new Path("/hdfsapi/test/a.txt");
        System.out.println(fileSystem.rename(oldPath,newPath));
   }

    /**
     * Upload a local file to HDFS.
     * When accessing the Linux server from Windows, "local" means the Windows machine.
     * @throws IOException
     */
   @Test
   public void copyFromLocalFile() throws IOException {
       //Path src = new Path("/tmp/aa.txt");
       Path src = new Path("E:\\a.txt");
       Path dst = new Path("/hdfsapi/test");
       fileSystem.copyFromLocalFile(src,dst);


   }

    /**
     * Upload a local file to HDFS with a progress indicator.
     * @throws IOException
     */
    @Test
    public void copyFromLocalFileWithProcess() throws IOException {
       // java.io.InputStream in = new java.io.BufferedInputStream(new FileInputStream(new java.io.File("E:\\BaiduNetdiskDownload\\linuxios\\CentOS-6.5-x86_64-bin-DVD1.iso")));
        java.io.InputStream in = new java.io.BufferedInputStream(new FileInputStream(new java.io.File("E:\\BaiduNetdiskDownload\\mysql考试.rar")));
        FSDataOutputStream out = fileSystem.create(
                new Path("/hdfsapi/test/mysql考试.rar"),
                new Progressable() {
                    public void progress() {
                        System.out.print(".");  // progress indicator
                    }
                });
        IOUtils.copyBytes(in,out,4096);



    }

    @Test
    public void copyToLocalFile() throws IOException {
        Path localPath = new Path("E:\\tmp\\a.txt");
        Path hdfsPath = new Path("/hdfsapi/test/aa.txt");

        /**
         * If fileSystem.copyToLocalFile(hdfsPath, localPath) throws a NullPointerException (commonly on Windows
         * clients), use the overload below and set the last parameter (useRawLocalFileSystem) to true.
         * useRawLocalFileSystem: whether to use RawLocalFileSystem as the local file system.
         * RawLocalFileSystem is the local file system without client-side checksums; the checksummed wrapper is LocalFileSystem.
         */
        //fileSystem.copyToLocalFile(hdfsPath,localPath);
        fileSystem.copyToLocalFile(false,hdfsPath,localPath,true);
    }

    /**
     * List all files under a directory.
     * @throws IOException
     */
    @Test
    public void listFiles() throws IOException {
        FileStatus[] fileStatus = fileSystem.listStatus(new Path("/hdfsapi/test"));

        for(FileStatus file : fileStatus){
            String isDir = file.isDirectory() ? "directory" : "file";
            String permission = file.getPermission().toString(); // permissions, e.g. rw-r--r--
            short replication = file.getReplication();           // replication factor
            long len = file.getLen();                             // file length in bytes
            String path = file.getPath().toString();              // path of the file or directory

            System.out.println(isDir + "\t" +permission + "\t" + replication + "\t" + len +"\t" + path);
        }
    }

    /**
     * Delete a path recursively.
     * @throws IOException
     */
    @Test
    public void delete() throws IOException {
        fileSystem.delete(new Path("/hdfsapi/test"),true);
    }

    @After
    public void tearDown(){
        configuration = null;
        fileSystem = null;
        System.out.println("HDFSApp.tearDown()");
    }
}

9. HDFS Read and Write Data Flows



The HDFS replication mechanism
Files in HDFS are split into blocks, and each block is stored as multiple replicas. How does each block choose the nodes that store its data? That is governed by the HDFS replica placement policy.


Block replica placement policy (a sketch for inspecting actual placement follows this list):
1) The first replica is stored on the same node as the client; if the client is outside the cluster, a node is chosen at random.
2) The second replica is stored on a randomly chosen rack different from the first.
3) The third replica is stored on the same rack as the second replica but on a different node.

4) Replicas beyond the third are placed on random nodes in the cluster.
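To inspect where the replicas of a particular file actually ended up, the hedged sketch below walks the file's block locations through the standard FileSystem API. The address, user, and /input/NOTICE.txt path are assumptions reusing earlier examples; on the single-node setup every block will simply report hadoop000.

package com.kgc.hadoop;

import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                new URI("hdfs://hadoop000:8020"), new Configuration(), "hadoop");

        FileStatus status = fs.getFileStatus(new Path("/input/NOTICE.txt"));

        // One BlockLocation per block of the file, covering its whole length.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + Arrays.toString(block.getHosts()));  // DataNodes holding replicas
        }
        fs.close();
    }
}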


The HDFS write flow: the client asks the NameNode which DataNodes should hold each block, then streams the block through a pipeline of those DataNodes; each DataNode forwards the data to the next, and acknowledgements flow back up the pipeline to the client.



The HDFS read flow: the client asks the NameNode for the locations of the file's blocks, then reads each block directly from the closest DataNode holding a replica.





10. New Features in Hadoop

HDFS 1.x has a single point of failure: the NameNode.
New feature 1 in Hadoop 2.x: NameNode HA (High Availability). With an active and a standby NameNode, the cluster fails over to the standby NN when the active NameNode goes down.
From the user's point of view nothing changes compared with Hadoop 1.x:
hadoop fs -ls /


In HDFS 1.x the NameNode is also limited by its memory.
New feature 2 in Hadoop 2.x: NameNode Federation. Several NameNodes serve clients at the same time,
scaling horizontally: each NN manages a portion of the namespace (for example, one NN per business line),
and all NNs share the storage of all DataNodes.
Remaining issue:
even with several NNs serving clients, each NN is still a single point of failure
  ===> give each federated NN a standby NN.
So a Hadoop 2 cluster may contain several federated NNs plus a standby NN for each of them.
Benefits:
the memory and concurrency pressure on each NN is reduced;
the NNs are isolated from each other and do not affect one another.


HDFS Snapshots
Snapshots are read-only, point-in-time copies of a directory, used to back up data and to protect against accidental deletion by users.


Use cases:
1) Backup
2) Protection against user mistakes
3) Disaster recovery

Command reference (a Java equivalent is sketched after this list):

hdfs dfsadmin -allowSnapshot /hdfsapi2                      allow snapshots on the directory
hdfs dfs -createSnapshot /hdfsapi2 s0                       create a snapshot named s0
hadoop fs -touchz /hdfsapi2/f{1,2,3}                        create three empty files
hadoop fs -rm /hdfsapi2/f{1,2,3}                            delete the files
hdfs dfs -cp -ptopax /hdfsapi2/.snapshot/s1/f3 /hdfsapi2    copy a file from the snapshot back to its original location
hdfs dfs -renameSnapshot /hdfsapi2/ s0 sss0                 rename a snapshot
hdfs lsSnapshottableDir                                     list all snapshottable directories
hdfs snapshotDiff /hdfsapi2 sss0 s1                         show the differences between two snapshots
hdfs dfs -deleteSnapshot /hdfsapi2 s1                       delete a snapshot

hdfs dfsadmin -disallowSnapshot /hdfsapi2                   disallow snapshots on the directory
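The same snapshot operations are also available from the Java client. The hedged sketch below assumes the /hdfsapi2 directory used in the transcript that follows, and that snapshots have already been allowed on it with hdfs dfsadmin -allowSnapshot.

package com.kgc.hadoop;

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SnapshotDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                new URI("hdfs://hadoop000:8020"), new Configuration(), "hadoop");

        // The directory must already be snapshottable
        // (hdfs dfsadmin -allowSnapshot /hdfsapi2).
        Path dir = new Path("/hdfsapi2");

        Path snap = fs.createSnapshot(dir, "s0");   // like: hdfs dfs -createSnapshot /hdfsapi2 s0
        System.out.println("created " + snap);      // e.g. /hdfsapi2/.snapshot/s0

        fs.renameSnapshot(dir, "s0", "sss0");       // like: hdfs dfs -renameSnapshot /hdfsapi2 s0 sss0
        fs.deleteSnapshot(dir, "sss0");             // like: hdfs dfs -deleteSnapshot /hdfsapi2 sss0
        fs.close();
    }
}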

[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2
18/07/07 14:45:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/test
[hadoop@hadoop000 ~]$ hdfs dfsadmin -allowSnapshot /hdfsapi2
18/07/07 14:45:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Allowing snaphot on /hdfsapi2 succeeded
[hadoop@hadoop000 ~]$ hdfs dfs -createSnapshot /hdfsapi2 s0
18/07/07 14:47:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Created snapshot /hdfsapi2/.snapshot/s0
[hadoop@hadoop000 ~]$ hadoop fs -touchz /hdfsapi2/f{1,2,3}
18/07/07 14:48:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2
18/07/07 14:49:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f1
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f2
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f3
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/test
[hadoop@hadoop000 ~]$ hdfs dfs -createSnapshot /hdfsapi2 s1
18/07/07 14:50:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Created snapshot /hdfsapi2/.snapshot/s1
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2/.snapshot/s1
18/07/07 14:51:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/.snapshot/s1/f1
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/.snapshot/s1/f2
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/.snapshot/s1/f3
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/.snapshot/s1/test
[hadoop@hadoop000 ~]$ hadoop fs -rm /hdfsapi2/f{1,2,3}
18/07/07 14:53:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Deleted /hdfsapi2/f1
Deleted /hdfsapi2/f2
Deleted /hdfsapi2/f3
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2
18/07/07 14:54:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/test
[hadoop@hadoop000 ~]$ hdfs dfs -cp -ptopax /hdfsapi2/.snapshot/s1/f3 /hdfsapi2
18/07/07 14:57:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2
18/07/07 14:57:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f3
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/test
[hadoop@hadoop000 ~]$ hdfs dfs -cp -ptopax /hdfsapi2/.snapshot/s1/f{1,2,3} /hdfsapi2
18/07/07 14:57:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
cp: `/hdfsapi2/f3': File exists
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2
18/07/07 14:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f1
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f2
-rw-r--r--   1 hadoop supergroup          0 2018-07-07 14:48 /hdfsapi2/f3
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 08:44 /hdfsapi2/test
[hadoop@hadoop000 ~]$ hdfs dfs -renameSnapshot /hdfsapi2/ s0 sss0
18/07/07 15:00:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2/.snapshot
18/07/07 15:00:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 14:50 /hdfsapi2/.snapshot/s1
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 14:47 /hdfsapi2/.snapshot/sss0
[hadoop@hadoop000 ~]$ hdfs lsSnapshottableDir
18/07/07 15:01:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
drwxr-xr-x 0 hadoop supergroup 0 2018-07-07 14:57 2 65536 /hdfsapi2
[hadoop@hadoop000 ~]$ hdfs snapshotDiff /hdfsapi2 sss0 s1
18/07/07 15:02:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Difference between snapshot sss0 and snapshot s1 under directory /hdfsapi2:
M       .
+       ./f1
+       ./f2
+       ./f3

[hadoop@hadoop000 ~]$ hdfs dfs -deleteSnapshot /hdfsapi2 s1
18/07/07 15:03:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop000 ~]$ hadoop fs -ls /hdfsapi2/.snapshot
18/07/07 15:04:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 14:47 /hdfsapi2/.snapshot/sss0
[hadoop@hadoop000 ~]$ hdfs dfsadmin -disallowSnapshot /hdfsapi2
18/07/07 15:05:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
disallowSnapshot: The directory /hdfsapi2 has snapshot(s). Please redo the operation after removing all the snapshots.
[hadoop@hadoop000 ~]$


11. A Hands-on HDFS Log Collection Case

Requirements:
1. Build a log-producing tool with log4j.
2. Periodically upload the log files from the local disk to HDFS.


Develop the LogGenerator.
Add log4j.properties; this file goes under the resources directory.
Build: mvn clean package -DskipTests


Then upload the jar to the server with scp (or any file-transfer tool).
How do we run the code in a production environment?
java -Djava.ext.dirs=../lib com.kgc.hadoop.LogGenerator


Two remaining issues (a Java-side sketch for the first one follows below):
1) After a file has been uploaded to HDFS successfully, the local copy needs to be deleted.
2) Use Linux crontab for scheduling.
https://tool.lu/crontab
0 2 * * *  /home/hadoop/shell/upload.sh
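For the first issue, one hedged alternative to adding delete logic to the shell script is to do the upload from Java and let the client remove the local file only after a successful copy; FileSystem.copyFromLocalFile has an overload with a delSrc flag for exactly this. The directories below are assumptions copied from the upload.sh script shown later in this section.

package com.kgc.hadoop;

import java.io.File;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogUploader {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                new URI("hdfs://hadoop000:8020"), new Configuration(), "hadoop");

        File logDir = new File("/home/hadoop/lib/logs");   // same source dir as upload.sh
        String targetDir = "/data/logs/20170617/";         // same target dir as upload.sh

        File[] files = logDir.listFiles();
        if (files != null) {
            for (File f : files) {
                if (!f.getName().startsWith("access.log.")) {
                    continue;  // only upload rolled-over files, as upload.sh does
                }
                // delSrc = true: delete the local file once it has been copied into HDFS;
                // overwrite = true: replace the target if it already exists.
                fs.copyFromLocalFile(true, true,
                        new Path(f.getAbsolutePath()),
                        new Path(targetDir + f.getName()));
            }
        }
        fs.close();
    }
}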


If crontab -e is not available, install it with: sudo yum install vixie-cron crontabs


log4j.properties

log4j.rootLogger=INFO,log

log4j.appender.log = org.apache.log4j.RollingFileAppender
log4j.appender.log.layout = org.apache.log4j.PatternLayout
log4j.appender.log.layout.ConversionPattern = [%-5p][%-22d{yyyy/MM/dd HH:mm:ssS}][%l]%n%m%n
log4j.appender.log.Threshold = INFO
log4j.appender.log.ImmediateFlush = TRUE
log4j.appender.log.Append = TRUE
#log4j.appender.log.File = /Users/rocky/tmp/logs/access.log
log4j.appender.log.File = logs/access.log

log4j.appender.log.MaxFileSize = 10KB
log4j.appender.log.MaxBackupIndex = 20

### Set the logger level and output destinations ###
#log4j.rootLogger=debug, stdout,logfile
#
#### Log to the console ###
#log4j.appender.stdout=org.apache.log4j.ConsoleAppender
#log4j.appender.stdout.Target=System.err
#log4j.appender.stdout.layout=org.apache.log4j.SimpleLayout
#
#### Log to a file ###
#log4j.appender.logfile=org.apache.log4j.FileAppender
#log4j.appender.logfile.File=logs/access.log
#log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
#log4j.appender.logfile.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %l %F %p %m%n
#log4j.appender.log.File = E:/logs/access.log
LogGenerator.java
package com.kgc.hadoop;

import org.apache.log4j.LogManager;
import org.apache.log4j.Logger;

import java.util.Date;

public class LogGenerator {

    public static void main(String[] args) throws InterruptedException {
        Logger logger = LogManager.getLogger("LogGenerator");
        int i = 0;
        while(true){
            logger.info(i +"``````" + new Date().toString() + "``````");
           // System.out.println("print: " + i + "``````" + new Date().toString() + "``````");
            i++;
            Thread.sleep(500);
            if(i > 10000000){
                break;
            }
        }
    }
}



An error can occur here: if only the project's compiled jar is present and the log4j jar is missing from ../lib, it fails as shown below. log4j-1.2.17.jar can be downloaded from http://vdisk.weibo.com/s/F3bS1iQJ7oXXL

[hadoop@hadoop000 lib]$ java -Djava.ext.dirs=../lib com.kgc.hadoop.LogGenerator
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/LogManager
        at com.kgc.hadoop.LogGenerator.main(LogGenerator.java:11)
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.LogManager
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 1 more


/home/hadoop/shell/upload.sh

#!/bin/bash

source_dir=/home/hadoop/lib/logs/
target_dir=/data/logs/20170617/

ls $source_dir|while read fileName

do
        if [[ "$fileName" == access.log.* ]]; then
                echo $source_dir$fileName $target_dir
                hadoop fs -put $source_dir$fileName $target_dir`hostname`_$fileName
        fi
done

Run the script:

[hadoop@hadoop000 shell]$ vi upload.sh
[hadoop@hadoop000 shell]$ hadoop fs -mkdir -p /data/logs/20170617/
[hadoop@hadoop000 shell]$ hadoop fs -lsr /data/logs
lsr: DEPRECATED: Please use 'ls -R' instead.
18/07/07 17:00:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
drwxr-xr-x   - hadoop supergroup          0 2018-07-07 16:58 /data/logs/20170617
[hadoop@hadoop000 shell]$ ./upload.sh
/home/hadoop/lib/logs/access.log.1 /data/logs/20170617/
18/07/07 17:00:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.2 /data/logs/20170617/
18/07/07 17:00:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.3 /data/logs/20170617/
18/07/07 17:00:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.4 /data/logs/20170617/
18/07/07 17:00:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.5 /data/logs/20170617/
18/07/07 17:00:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.6 /data/logs/20170617/
18/07/07 17:01:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.7 /data/logs/20170617/
18/07/07 17:01:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.8 /data/logs/20170617/
18/07/07 17:01:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/hadoop/lib/logs/access.log.9 /data/logs/20170617/
18/07/07 17:01:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop000 shell]$

[hadoop@hadoop000 data]$ hadoop fs -ls /data/logs/20170617
18/07/07 17:01:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 8 items
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:00 /data/logs/20170617/hadoop000_access.log.1
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:00 /data/logs/20170617/hadoop000_access.log.2
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:00 /data/logs/20170617/hadoop000_access.log.3
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:00 /data/logs/20170617/hadoop000_access.log.4
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:01 /data/logs/20170617/hadoop000_access.log.5
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:01 /data/logs/20170617/hadoop000_access.log.6
-rw-r--r--   1 hadoop supergroup      10296 2018-07-07 17:01 /data/logs/20170617/hadoop000_access.log.7
-rw-r--r--   1 hadoop supergroup      10275 2018-07-07 17:01 /data/logs/20170617/hadoop000_access.log.8

Set up the schedule with:

crontab -e

Add the following line so the script runs on a schedule, periodically copying the log files into HDFS:

0 2 * * *  /home/hadoop/shell/upload.sh

With this entry the script runs at 02:00 every day.


