Hadoop Notes 01

1. Common HDFS and YARN commands

[hadoop@ruozedata001 hadoop]$ hdfs haadmin
Usage: DFSHAAdmin [-ns <nameserviceId>]
    [-transitionToActive <serviceId> [--forceactive]]
    [-transitionToStandby <serviceId>]
    
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-checkHealth <serviceId>]
    [-help <command>]

# Get the HA state of a NameNode (-getServiceState takes one serviceId at a time)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manual failover: fail over from nn1 to nn2, making nn2 active
hdfs haadmin -failover nn1 nn2

# Force nn2 into the standby state (--forcemanual is needed when automatic failover is enabled)
hdfs haadmin -transitionToStandby --forcemanual nn2
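
For reference, -getServiceState simply prints the queried NameNode's current role; a typical session, assuming nn1 currently holds the active role:

[hadoop@ruozedata001 hadoop]$ hdfs haadmin -getServiceState nn1
active
[hadoop@ruozedata001 hadoop]$ hdfs haadmin -getServiceState nn2
standby
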
[hadoop@ruozedata001 hadoop]$ hdfs fsck
Usage: DFSck <path> [-list-corruptfileblocks | [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]] [-maintenance]
	<path>	start checking from this path
	-move	move corrupted files to /lost+found
	-delete	delete corrupted files
	-files	print out files being checked
	-openforwrite	print out files opened for write
	-includeSnapshots	include snapshot data if the given path indicates a snapshottable directory or there are snapshottable directories under it
	-list-corruptfileblocks	print out list of missing blocks and files they belong to
	-blocks	print out block report
	-locations	print out locations for every block
	-racks	print out network topology for data-node locations

	-maintenance	print out maintenance state node details
	-blockId	print out which file this blockId belongs to, locations (nodes, racks) of this block, and other diagnostics info (under replicated, corrupted or not, etc)

# Health check of the whole filesystem
hdfs fsck /

# Delete only the corrupted files
hdfs fsck / -delete
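
A healthy fsck run ends with a summary block; abridged, and with made-up counts, it looks roughly like this:

[hadoop@ruozedata001 hadoop]$ hdfs fsck /
Status: HEALTHY
 Total size:    123456 B
 Total files:   42
 Total blocks (validated):      40 (avg. block size 3086 B)
 Minimally replicated blocks:   40 (100.0 %)
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          3
The filesystem under path '/' is HEALTHY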

2. Collected troubleshooting cases

3. Preview: which compression formats are there? After compiling Hadoop with native library support, what does running the hadoop checknative command output? Why use Snappy?

The Linux kernel image supports four compression modes: gzip, bzip2, lzma, and lzo; gzip is generally the default. To use bzip2, lzma, or lzo, the corresponding compression tools must be installed first.
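
On a Hadoop build compiled with native libraries, hadoop checknative -a reports whether each native codec library was found; a sample where everything is present (the library paths here are assumptions for this environment):

[hadoop@ruozedata001 hadoop]$ hadoop checknative -a
Native library checking:
hadoop:  true /home/hadoop/app/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

As for why Snappy: its compression ratio is lower than gzip or bzip2, but compression and decompression are much faster at low CPU cost, which suits hot intermediate data in MapReduce/Spark jobs; since a bare .snappy file is not splittable on its own, it is usually used inside container formats such as SequenceFile, ORC, or Parquet.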

4. On the standby machine ruozedata001, can we read directly with hdfs dfs -ls hdfs://ruozedata001:8020/ ? And can we write directly with hdfs dfs -put xxx.log hdfs://ruozedata002:8020/ ?

  • We cannot read directly from the standby:
[hadoop@zuozedata001 zookeeper]$ hdfs dfs -ls hdfs://ruozedata001:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
  • Writing works, as long as the request reaches the active NameNode:
# Precondition: ruozedata001 is the standby
[hadoop@zuozedata001 zookeeper]$ hdfs dfs -put ivy.xml /tmp
[hadoop@zuozedata001 zookeeper]$ 
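
For contrast, the same operations addressed to the active NameNode (assumed here to be ruozedata002) succeed from any machine, as does anything routed through the HA nameservice; a sketch with illustrative sizes and dates:

[hadoop@zuozedata001 zookeeper]$ hdfs dfs -ls hdfs://ruozedata002:8020/tmp
Found 1 items
-rw-r--r--   3 hadoop supergroup      10525 2019-06-01 10:00 hdfs://ruozedata002:8020/tmp/ivy.xml
[hadoop@zuozedata001 zookeeper]$ hdfs dfs -put xxx.log hdfs://ruozedata002:8020/tmp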


5. What is HDFS safe mode? How do we enter and leave it? While in safe mode, can files be read? Can they be written?

  • Safe mode is a special state of HDFS in which the filesystem accepts read requests only and rejects deletes, modifications, and other write requests. When the NameNode starts, HDFS first enters safe mode; as the DataNodes start up they report their available blocks to the NameNode, and once the whole system reaches the safety threshold, HDFS leaves safe mode automatically. While HDFS is in safe mode, file blocks cannot be replicated at all, so whether the minimum replication requirement is met is judged from the block state the DataNodes report at startup; no additional replication is performed at startup just to reach the minimum replica count.
hdfs dfsadmin -safemode get   # show whether safe mode is currently on
hdfs dfsadmin -safemode wait  # block until the NameNode leaves safe mode

# The following enter or leave safe mode manually at any time
hdfs dfsadmin -safemode enter # enter safe mode
hdfs dfsadmin -safemode leave # leave safe mode
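
So, for the read/write part of the question: reads still work in safe mode, writes do not. A quick illustrative session (file names are examples; the error text is the SafeModeException message returned by the NameNode):

hdfs dfsadmin -safemode enter  # Safe mode is ON
hdfs dfs -cat /tmp/ivy.xml     # read: succeeds
hdfs dfs -put xxx.log /tmp     # write: fails with "Name node is in safe mode."
hdfs dfsadmin -safemode leave  # Safe mode is OFF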

6. During an HDFS HA startup, what is the start order of all those processes? Is the DataNode process the last one to start?

  • Order (as the start-all.sh log below shows): zookeeper → nn → dn → jn → zkfc → rm → nm. The DataNodes are not last; the NodeManagers come up at the end.
[hadoop@zuozedata001 zookeeper]$ bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[hadoop@zuozedata001 zookeeper]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [ruozedata001 ruozedata002]
ruozedata001: starting namenode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-namenode-zuozedata001.out
ruozedata002: starting namenode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-namenode-zuozedata002.out
ruozedata001: starting datanode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-datanode-zuozedata001.out
ruozedata002: starting datanode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-datanode-zuozedata002.out
ruozedata003: starting datanode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-datanode-zuozedata003.out
Starting journal nodes [ruozedata001 ruozedata002 ruozedata003]
ruozedata001: starting journalnode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-journalnode-zuozedata001.out
ruozedata003: starting journalnode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-journalnode-zuozedata003.out
ruozedata002: starting journalnode, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-journalnode-zuozedata002.out
Starting ZK Failover Controllers on NN hosts [ruozedata001 ruozedata002]
ruozedata002: starting zkfc, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-zkfc-zuozedata002.out
ruozedata001: starting zkfc, logging to /home/hadoop/app/hadoop/logs/hadoop-hadoop-zkfc-zuozedata001.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop/logs/yarn-hadoop-resourcemanager-zuozedata001.out
ruozedata003: starting nodemanager, logging to /home/hadoop/app/hadoop/logs/yarn-hadoop-nodemanager-zuozedata003.out
ruozedata002: starting nodemanager, logging to /home/hadoop/app/hadoop/logs/yarn-hadoop-nodemanager-zuozedata002.out
ruozedata001: starting nodemanager, logging to /home/hadoop/app/hadoop/logs/yarn-hadoop-nodemanager-zuozedata001.out
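
After start-all.sh finishes, a quick jps on a NameNode host should list all of the daemons above (QuorumPeerMain is the ZooKeeper process; the PIDs below are illustrative):

[hadoop@zuozedata001 zookeeper]$ jps
2001 QuorumPeerMain
2102 NameNode
2203 DataNode
2304 JournalNode
2405 DFSZKFailoverController
2506 ResourceManager
2607 NodeManager
2708 Jps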