Hadoop: common interview questions

1. Common HDFS and YARN commands

hdfs
Create a directory: hdfs dfs -mkdir /input
Upload a file: hdfs dfs -put wordcount.txt /input/
List the file system: hdfs dfs -ls /
Download a file: hdfs dfs -get /input/wordcount.txt ~/aa
View a text file: hdfs dfs -text /input/wordcount.txt
Find files by name: hdfs dfs -find /input -name 'wordcou*'
View a text file: hdfs dfs -cat /input/wordcount.txt
Upload a file: hdfs dfs -copyFromLocal wordcount.txt /input/wordcount2.txt
Download a file: hdfs dfs -copyToLocal /input/wordcount2.txt wordcount2.txt
Copy a file: hdfs dfs -cp /input/wordcount.txt /input/wordcount3.txt
Show free and used space of the file system: hdfs dfs -df /
Show disk usage of each file and directory: hdfs dfs -du /
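
A minimal end-to-end sketch chaining the commands above (the local file wordcount.txt and the target paths are just placeholders):

hdfs dfs -mkdir -p /input                           # create the target directory (-p tolerates "already exists")
hdfs dfs -put wordcount.txt /input/                 # upload the local file
hdfs dfs -ls /input                                 # confirm it arrived
hdfs dfs -text /input/wordcount.txt                 # peek at the contents (-text also decodes compressed files)
hdfs dfs -get /input/wordcount.txt ./wordcount.bak  # pull a copy back to local disk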
yarn
yarn application -list     list all applications

yarn application -kill <Application ID>   kill an application; an Application ID must be given
	Example: yarn application -kill application_1526100291229_206393

yarn application -status <Application ID>  show the status of an application
	Example: yarn application -status application_1526100291229_206393
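
A small sketch of how these are typically combined; the application ID is the same example ID as above, and -appStates and yarn logs are standard YARN CLI options (yarn logs requires log aggregation to be enabled):

yarn application -list -appStates RUNNING                   # only applications that are still running
yarn application -status application_1526100291229_206393   # inspect one application before acting on it
yarn logs -applicationId application_1526100291229_206393   # fetch its aggregated logs after it finishes
yarn application -kill application_1526100291229_206393     # kill it if it is stuck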

2. Which compression codecs are there: gzip, lzo, snappy, bzip2

[hadoop@ruozedata001 sbin]$ hadoop checknative
19/08/21 00:50:09 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
19/08/21 00:50:09 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /lib64/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
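
As a sketch of how these codecs are actually used, the standard MapReduce compression properties can be set per job. The example below assumes the bundled wordcount example jar and uses Snappy for intermediate map output (fast) and bzip2 for the final output (splittable); the input/output paths are placeholders:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount \
  -D mapreduce.map.output.compress=true \
  -D mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec \
  /input /output_compressed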

3. On the standby machine ruozedata002, can you read directly (hdfs dfs -ls hdfs://ruozedata002:8020/)? No. Can you write directly (hdfs dfs -put xxx.log hdfs://ruozedata002:8020/)? No.

[hadoop@ruozedata002 hadoop]$ hdfs dfs -ls hdfs://ruozedata002:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error

[hadoop@ruozedata002 hadoop]$  hdfs dfs -put slaves hdfs://ruozedata002:8020/
put: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error

[hadoop@ruozedata001 sbin]$ hdfs dfs -ls / hdfs://ruozedata002:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
Found 1 items
drwxrwx---   - hadoop hadoop          0 2019-08-20 22:57 /tmp
[hadoop@ruozedata001 sbin]$ hdfs dfs -ls / hdfs://ruozedata001:8020/
Found 1 items
drwxrwx---   - hadoop hadoop          0 2019-08-20 22:57 /tmp
Found 1 items
drwxrwx---   - hadoop hadoop          0 2019-08-20 22:57 hdfs://ruozedata001:8020/tmp

[hadoop@ruozedata002 hadoop]$  hdfs dfs -ls / hdfs://ruozedata002:8020/
ls: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
Found 1 items
drwxrwx---   - hadoop hadoop          0 2019-08-20 22:57 /tmp
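
To check which NameNode is actually active before pointing a client at it, the standard haadmin command can be used. The NameNode IDs nn1/nn2 below are assumptions; they depend on dfs.ha.namenodes.<nameservice> in hdfs-site.xml:

hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
hdfs haadmin -getServiceState nn2
# In practice clients should address the logical nameservice URI (hdfs://<nameservice>/)
# instead of a specific host:port, so failover stays transparent.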

4. What is HDFS safe mode? How do you enter it, and how do you leave it?

Safe mode is a read-only state: on startup the NameNode stays in it while loading the fsimage/edit logs and waiting for enough DataNode block reports; it can also be entered manually for maintenance.

So in safe mode, can you read files? Can you write files?

hdfs dfsadmin -safemode leave   force the NameNode to leave safe mode
hdfs dfsadmin -safemode enter   enter safe mode
hdfs dfsadmin -safemode get     show the current safe mode state
hdfs dfsadmin -safemode wait    block until safe mode ends

Files can be read but not written.

[hadoop@ruozedata001 ~]$ hdfs dfs -cat /input/wordcount2.txt
腊神话里这样传诵:爱神出生时创造了玫瑰,因此玫瑰从那个时代起就成为了爱情的代名词。而在19世纪初,法国开始兴起花语,随即流行到英国与美国,主要是由一些作家所
[hadoop@ruozedata001 ~]$ hdfs dfs -cp /input/wordcount.txt /input/wordcount4.txt
cp: Cannot create file/input/wordcount4.txt._COPYING_. Name node is in safe mode.
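
A small sketch of how a script might cope with this, using only the dfsadmin sub-commands listed above (the cp is the same command that failed above):

hdfs dfsadmin -safemode get     # reports whether safe mode is ON or OFF
hdfs dfsadmin -safemode wait    # block here until the NameNode leaves safe mode on its own
hdfs dfs -cp /input/wordcount.txt /input/wordcount4.txt   # the write now succeeds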

5. During HDFS HA startup there are many processes; in what order do they start? Is the DataNode process the last one started?

namenodes -> datanodes -> journalnodes -> zkfc. So no, the DataNodes are not last: start-dfs.sh starts the ZKFCs last, as the output below shows.

Starting namenodes on [ruozedata001 ruozedata002]
ruozedata001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-namenode-ruozedata001.out
ruozedata002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-namenode-ruozedata002.out
ruozedata001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-ruozedata001.out
ruozedata002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-ruozedata002.out
ruozedata003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-ruozedata003.out
Starting journal nodes [ruozedata001 ruozedata002 ruozedata003]
ruozedata002: journalnode running as process 22749. Stop it first.
ruozedata001: journalnode running as process 25521. Stop it first.
ruozedata003: journalnode running as process 19361. Stop it first.
Starting ZK Failover Controllers on NN hosts [ruozedata001 ruozedata002]
ruozedata002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-zkfc-ruozedata002.out
ruozedata001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-zkfc-ruozedata001.out
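
A quick way to double-check which daemons ended up where is to run jps on each host. The host-to-daemon mapping below just follows the start-dfs.sh output above:

jps   # run on every node
# ruozedata001 / ruozedata002: expect NameNode, DataNode, JournalNode, DFSZKFailoverController
# ruozedata003: expect DataNode, JournalNode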