Hadoop standalone configuration + pseudo-distributed configuration

For the installation steps, see my earlier blog post.

This tutorial has been tested successfully on Ubuntu, Manjaro and Fedora.

Non-distributed configuration (standalone mode)

summary

# Copy all of Hadoop's XML configuration files into a newly created input directory to serve as job input
mkdir ./input && cp /opt/software/hadoop-3.3.1/etc/hadoop/*.xml ./input
# Run the grep example: take every file under input as input, pick out the words matching the regex dfs[a-z.]+, count their occurrences, and write the result to the output directory
hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep ./input ./output 'dfs[a-z.]+'
# View the results
cat ./output/*
ls ./output/

On success you will see something like the following: the job prints its runtime information, and the result shows that the regex matched the single word dfsadmin, which occurred once.

bernard@aqua:~/test$ cat ./output/*  
1       dfsadmin
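As a quick sanity check, you can reproduce the same count with plain grep over the local copies of the configuration files (assuming the ./input directory created above); it should report the same single dfsadmin match:

grep -ohE 'dfs[a-z.]+' ./input/*.xml | sort | uniq -c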

detail

bernard@aqua:~/test$ screenfetch # show some system information
                          ./+o+-       bernard@aqua
                  yyyyy- -yyyyyy+      OS: Ubuntu 21.04 hirsute
               ://+//-yyyyyyo      Kernel: x86_64 Linux 5.11.0-22-generic
           .++ .:/++++++/-.+sss/`      Uptime: 2d 21h 33m
         .:++o:  /++++++++/:--:/-      Packages: 2117
        o:+o+:++.`..```.-/oo+++++/     Shell: bash 5.1.4
       .:+o:+o/.          `+sssoo+/    Resolution: 1366x768
  .++/+:+oo+o:`             /sssooo.   DE: KDE 5.80.0 / Plasma 5.21.4
 /+++//+:`oo+o               /::--:.   WM: KWin
 \+/+o+++`o++o               ++.   GTK Theme: Breeze [GTK2/3]
  .++.o+++oo+:`             /dddhhh.   Icon Theme: breeze
       .+.o+oo:.          `oddhhhh+    Disk: 17G / 147G (12%)
        \+.++o+o``-````.:ohdhhhhh+     CPU: Intel Core i3-4030U @ 4x 1.8GHz [46.0°C]
         `:o+++ `ohhhhhhhhyo++os:      GPU: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
           .o:`.syhhhhhhh/.oo++o`      RAM: 2336MiB / 3834MiB
               /osyyyyyyo++ooo+++/    
                   `````+oo+++o\:    
                          `oo++.
bernard@aqua:~$ hadoop version # check that Hadoop is installed, and where it was installed
Hadoop 3.3.1
Source code repository https://github.com/apache/hadoop.git -r a3b9c37a397ad4188041dd80621bdeefc46885f2
Compiled by ubuntu on 2021-06-15T05:13Z
Compiled with protoc 3.7.1
From source with checksum 88a4ddb2299aca054416d6b7f81ca55
This command was run using /opt/software/hadoop-3.3.1/share/hadoop/common/hadoop-common-3.3.1.jar

bernard@aqua:~$ ls /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/
hadoop-mapreduce-client-app-3.3.1.jar
hadoop-mapreduce-client-common-3.3.1.jar
hadoop-mapreduce-client-core-3.3.1.jar
hadoop-mapreduce-client-hs-3.3.1.jar
hadoop-mapreduce-client-hs-plugins-3.3.1.jar
hadoop-mapreduce-client-jobclient-3.3.1.jar
hadoop-mapreduce-client-jobclient-3.3.1-tests.jar
hadoop-mapreduce-client-nativetask-3.3.1.jar
hadoop-mapreduce-client-shuffle-3.3.1.jar
hadoop-mapreduce-client-uploader-3.3.1.jar
hadoop-mapreduce-examples-3.3.1.jar # this is the examples jar we will be running
jdiff
lib-examples
sources
bernard@aqua:~$ hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar 
# run with no arguments, it lists the available example programs
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.

Go into the test directory, create an input directory, and copy all of Hadoop's XML configuration files into it as job input:

bernard@aqua:~/test$ mkdir ./input && cp /opt/software/hadoop-3.3.1/etc/hadoop/*.xml ./input

Run the grep example: it takes every file under input as input, picks out the words matching the regex dfs[a-z.]+, counts their occurrences, and writes the result to the output directory:

bernard@aqua:~/test$ hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep ./input ./output 'dfs[a-z.]+'
# ... verbose job output omitted ...
bernard@aqua:~/test$ cat ./output/*  
1       dfsadmin
bernard@aqua:~/test$ ls ./output/
part-r-00000  _SUCCESS
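Note that Hadoop refuses to write into an output directory that already exists; if you want to re-run the example, remove it first:

rm -r ./output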

Pseudo-distributed configuration

summary

sudo apt-get install openssh-server               # install the SSH server (Ubuntu)
ssh localhost                                     # confirm you can reach localhost, then exit
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the key for passwordless login
chmod 0600 ~/.ssh/authorized_keys                 # authorized_keys must not be group/world writable
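The apt-get line above is Ubuntu-specific. On the other distributions mentioned at the top, the SSH server comes from a differently named package; the package and service names below are my assumption of the usual ones, so check your distribution's repositories if they differ:

sudo pacman -S openssh && sudo systemctl enable --now sshd            # Manjaro
sudo dnf install openssh-server && sudo systemctl enable --now sshd   # Fedora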

Hadoop can run on a single node in pseudo-distributed fashion: the Hadoop daemons run as separate Java processes, the node acts as both NameNode and DataNode, and the files it reads are stored in HDFS.

Hadoop's configuration files live in the installation's etc/hadoop/ directory (here /opt/software/hadoop-3.3.1/etc/hadoop/). Pseudo-distributed mode requires modifying two of them: core-site.xml and hdfs-site.xml. The configuration files are XML; each setting is declared as a property with a name and a value.

core-site.xml

<configuration>
</configuration>

Replace it with

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
</configuration>

Replace it with

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/software/hadoop-3.3.1/tmp/dfs/data</value>
    </property>
</configuration>
cd /opt/software/hadoop-3.3.1
./bin/hdfs namenode -format   # format the NameNode (only needed the first time)
./sbin/start-dfs.sh           # start the NameNode, DataNode and SecondaryNameNode daemons

Open http://localhost:9870 in a browser. If you see the same page as I do, the configuration succeeded. There you can inspect the NameNode and DataNode status and browse the files stored in HDFS.

(Screenshot: the NameNode web UI at http://localhost:9870)
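Besides the web UI, you can check from a terminal that the daemons came up; jps (shipped with the JDK) lists the running Java processes and should show a NameNode, a DataNode and a SecondaryNameNode:

jps   # should list NameNode, DataNode, SecondaryNameNode (plus Jps itself)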
Running an example

./bin/hdfs dfs -mkdir -p /user/hadoop         # create the HDFS home directory for user hadoop
./bin/hdfs dfs -mkdir input                   # relative paths resolve to /user/<username>
./bin/hdfs dfs -put ./etc/hadoop/*.xml input  # upload the XML configuration files into HDFS
./bin/hdfs dfs -ls input 
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'
./bin/hdfs dfs -cat output/*                  # view the result stored in HDFS
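As in standalone mode, the job fails if the output directory already exists, except that here output lives in HDFS (relative paths such as input and output resolve to /user/<username>); remove it before re-running:

./bin/hdfs dfs -rm -r output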


   # Make the HDFS directories required to execute MapReduce jobs:

      $ bin/hdfs dfs -mkdir /user
      $ bin/hdfs dfs -mkdir /user/<username>

   # Copy the input files into the distributed filesystem:

      $ bin/hdfs dfs -mkdir input
      $ bin/hdfs dfs -put etc/hadoop/*.xml input

   # Run some of the examples provided:

      $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'

   # Examine the output files: Copy the output files from the distributed filesystem to the local filesystem and examine them:

      $ bin/hdfs dfs -get output output
      $ cat output/*

   # or

   # View the output files on the distributed filesystem:

      $ bin/hdfs dfs -cat output/*

   # When you’re done, stop the daemons with:

      $ sbin/stop-dfs.sh


./sbin/stop-dfs.sh  # stop HDFS when you are done
./sbin/start-dfs.sh # later you can simply start it again; no need to format a second time

Notes on the Hadoop configuration files

Hadoop's run mode is determined by its configuration files (they are read whenever Hadoop runs), so to switch from pseudo-distributed mode back to non-distributed (standalone) mode you need to remove the properties added to core-site.xml.

In addition, although pseudo-distributed mode only needs fs.defaultFS and dfs.replication to run (that is all the official tutorial configures), if hadoop.tmp.dir is not set the default temporary directory is /tmp/hadoop-<username>, which may be cleaned out by the system on reboot, forcing you to format the NameNode again. That is why we set it explicitly, and also specify dfs.namenode.name.dir and dfs.datanode.data.dir; otherwise the following steps may fail.
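If you want to double-check which directory is actually in effect, hdfs getconf reads the same configuration files and prints the resolved value of a property:

./bin/hdfs getconf -confKey hadoop.tmp.dir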

# If you run into permission problems:
sudo chmod -R 777 ./logs
sudo chmod -R 777 ./tmp
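A less blunt alternative (assuming Hadoop is run as a dedicated hadoop user; adjust the name to your setup) is to hand ownership of the two directories to that user instead of opening them up to everyone:

sudo chown -R hadoop:hadoop ./logs ./tmp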

detail

bernard@aqua:~/test$ cd /opt/software/hadoop-3.3.1/
bernard@aqua:/opt/software/hadoop-3.3.1$ ls ./etc/hadoop/
capacity-scheduler.xml            kms-log4j.properties
configuration.xsl                 kms-site.xml
container-executor.cfg            log4j.properties
core-site.xml                     mapred-env.cmd
hadoop-env.cmd                    mapred-env.sh
hadoop-env.sh                     mapred-queues.xml.template
hadoop-metrics2.properties        mapred-site.xml
hadoop-policy.xml                 shellprofile.d
hadoop-user-functions.sh.example  ssl-client.xml.example
hdfs-rbf-site.xml                 ssl-server.xml.example
hdfs-site.xml                     user_ec_policies.xml.template
httpfs-env.sh                     workers
httpfs-log4j.properties           yarn-env.cmd
httpfs-site.xml                   yarn-env.sh
kms-acls.xml                      yarnservice-log4j.properties
kms-env.sh                        yarn-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ sudo vi ./etc/hadoop/core-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ sudo vi ./etc/hadoop/hdfs-site.xml
bernard@aqua:/opt/software/hadoop-3.3.1$ hdfs namenode -format
2021-08-13 16:42:41,885 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = aqua/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.3.1
# ... startup details omitted ...
2021-08-13 16:42:44,232 WARN namenode.NameNode: Encountered exception during format
java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:447)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:189)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1832)
2021-08-13 16:42:44,354 INFO namenode.FSNamesystem: Stopping services started for active state
2021-08-13 16:42:44,355 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-08-13 16:42:44,355 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:447)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:591)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:613)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:189)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1276)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1724)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1832)
2021-08-13 16:42:44,358 INFO util.ExitUtil: Exiting with status 1: java.io.IOException: Cannot create directory /usr/local/hadoop/tmp/dfs/name/current
2021-08-13 16:42:44,361 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aqua/127.0.1.1
************************************************************/
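The first format attempt above fails with "Cannot create directory .../dfs/name/current". That usually means the NameNode storage directory (dfs.namenode.name.dir in hdfs-site.xml) points at the wrong location or is not writable by the user running the command. Make sure the path matches your install directory and hand it to the right owner before retrying; the path and user name below follow this tutorial's setup:

sudo mkdir -p /opt/software/hadoop-3.3.1/tmp
sudo chown -R hadoop:hadoop /opt/software/hadoop-3.3.1/tmp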

hadoop@aqua:/opt/software/hadoop-3.3.1$ ./bin/hdfs namenode -format
2021-08-15 15:31:30,260 INFO namenode.NameNode: STARTUP_MSG:
2021-08-15 15:31:30,277 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-08-15 15:31:30,417 INFO namenode.NameNode: createNameNode [-format]
2021-08-15 15:31:31,069 INFO namenode.NameNode: Formatting using clusterid: CID-7e9f1be3-da1b-4281-a0d3-19b8dd4037e2
2021-08-15 15:31:31,143 INFO namenode.FSEditLog: Edit logging is async:true
2021-08-15 15:31:31,198 INFO namenode.FSNamesystem: KeyProvider: null
2021-08-15 15:31:31,201 INFO namenode.FSNamesystem: fsLock is fair: true
2021-08-15 15:31:31,202 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-08-15 15:31:31,210 INFO namenode.FSNamesystem: fsOwner                = hadoop (auth:SIMPLE)
2021-08-15 15:31:31,210 INFO namenode.FSNamesystem: supergroup             = supergroup
2021-08-15 15:31:31,210 INFO namenode.FSNamesystem: isPermissionEnabled    = true
2021-08-15 15:31:31,210 INFO namenode.FSNamesystem: isStoragePolicyEnabled = true
2021-08-15 15:31:31,210 INFO namenode.FSNamesystem: HA Enabled: false
2021-08-15 15:31:31,281 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-08-15 15:31:31,303 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-08-15 15:31:31,303 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-08-15 15:31:31,309 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-08-15 15:31:31,309 INFO blockmanagement.BlockManager: The block deletion will start around 2021 八月 15 15:31:31
2021-08-15 15:31:31,312 INFO util.GSet: Computing capacity for map BlocksMap
2021-08-15 15:31:31,312 INFO util.GSet: VM type       = 64-bit
2021-08-15 15:31:31,314 INFO util.GSet: 2.0% max memory 853.5 MB = 17.1 MB
2021-08-15 15:31:31,314 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2021-08-15 15:31:31,325 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2021-08-15 15:31:31,325 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-08-15 15:31:31,338 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.999
2021-08-15 15:31:31,338 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-08-15 15:31:31,338 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-08-15 15:31:31,339 INFO blockmanagement.BlockManager: defaultReplication         = 1
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: maxReplication             = 512
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: minReplication             = 1
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2021-08-15 15:31:31,340 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-08-15 15:31:31,399 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2021-08-15 15:31:31,399 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2021-08-15 15:31:31,399 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2021-08-15 15:31:31,399 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2021-08-15 15:31:31,421 INFO util.GSet: Computing capacity for map INodeMap
2021-08-15 15:31:31,422 INFO util.GSet: VM type       = 64-bit
2021-08-15 15:31:31,422 INFO util.GSet: 1.0% max memory 853.5 MB = 8.5 MB
2021-08-15 15:31:31,422 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2021-08-15 15:31:31,423 INFO namenode.FSDirectory: ACLs enabled? true
2021-08-15 15:31:31,423 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-08-15 15:31:31,423 INFO namenode.FSDirectory: XAttrs enabled? true
2021-08-15 15:31:31,424 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-08-15 15:31:31,432 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-08-15 15:31:31,436 INFO snapshot.SnapshotManager: SkipList is disabled
2021-08-15 15:31:31,443 INFO util.GSet: Computing capacity for map cachedBlocks
2021-08-15 15:31:31,443 INFO util.GSet: VM type       = 64-bit
2021-08-15 15:31:31,443 INFO util.GSet: 0.25% max memory 853.5 MB = 2.1 MB
2021-08-15 15:31:31,444 INFO util.GSet: capacity      = 2^18 = 262144 entries
2021-08-15 15:31:31,456 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-08-15 15:31:31,456 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-08-15 15:31:31,456 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-08-15 15:31:31,466 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-08-15 15:31:31,466 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-08-15 15:31:31,470 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-08-15 15:31:31,470 INFO util.GSet: VM type       = 64-bit
2021-08-15 15:31:31,471 INFO util.GSet: 0.029999999329447746% max memory 853.5 MB = 262.2 KB
2021-08-15 15:31:31,471 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory root= /opt/software/hadoop-3.3.1/tmp/dfs/name; location= null ? (Y or N) Y
2021-08-15 15:31:34,269 INFO namenode.FSImage: Allocated new BlockPoolId: BP-821834158-127.0.1.1-1629012694253
2021-08-15 15:31:34,270 INFO common.Storage: Will remove files: [/opt/software/hadoop-3.3.1/tmp/dfs/name/current/VERSION, /opt/software/hadoop-3.3.1/tmp/dfs/name/current/fsimage_0000000000000000000, /opt/software/hadoop-3.3.1/tmp/dfs/name/current/fsimage_0000000000000000000.md5, /opt/software/hadoop-3.3.1/tmp/dfs/name/current/seen_txid]
2021-08-15 15:31:34,443 INFO common.Storage: Storage directory /opt/software/hadoop-3.3.1/tmp/dfs/name has been successfully formatted.
2021-08-15 15:31:34,500 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/software/hadoop-3.3.1/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-08-15 15:31:34,665 INFO namenode.FSImageFormatProtobuf: Image file /opt/software/hadoop-3.3.1/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 401 bytes saved in 0 seconds .
2021-08-15 15:31:34,718 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-08-15 15:31:34,761 INFO namenode.FSNamesystem: Stopping services started for active state
2021-08-15 15:31:34,761 INFO namenode.FSNamesystem: Stopping services started for standby state
2021-08-15 15:31:34,766 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2021-08-15 15:31:34,767 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aqua/127.0.1.1
************************************************************/
bernard@aqua:/opt/software/hadoop-3.3.1$ ./sbin/start-dfs.sh 
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [aqua]
bernard@aqua:/opt/software/hadoop-3.3.1$

Running the example

hadoop@aqua:/opt/software/hadoop-3.3.1$ ./bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@aqua:/opt/software/hadoop-3.3.1$ ./bin/hdfs dfs -mkdir input
hadoop@aqua:/opt/software/hadoop-3.3.1$ ./bin/hdfs dfs -put ./etc/hadoop/*.xml input
2021-08-15 15:50:06,834 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/capacity-scheduler.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/capacity-scheduler.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:06,913 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/core-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/core-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:06,961 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/hadoop-policy.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/hadoop-policy.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,011 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/hdfs-rbf-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/hdfs-rbf-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,056 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/hdfs-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/hdfs-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,101 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/httpfs-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/httpfs-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,146 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/kms-acls.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/kms-acls.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,188 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/kms-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/kms-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,233 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/mapred-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/mapred-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
2021-08-15 15:50:07,279 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/input/yarn-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
put: File /user/hadoop/input/yarn-site.xml._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
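# Note: every `put` above fails with "There are 0 datanode(s) running", which means the NameNode
# is up but no DataNode has registered, so HDFS cannot place a single block.
# A minimal diagnostic sketch (not part of the original session) is to check the Java processes
# and the cluster report:
#   jps                      # should list NameNode, DataNode and SecondaryNameNode
#   hdfs dfsadmin -report    # "Live datanodes (0)" confirms the problem
# A common cause after re-formatting the NameNode is a clusterID mismatch in the DataNode's
# storage directory (the exact path depends on your hadoop.tmp.dir / dfs.datanode.data.dir).
# One recovery path, assuming the HDFS data in this fresh pseudo-distributed setup is disposable:
#   /opt/software/hadoop-3.3.1/sbin/stop-dfs.sh
#   rm -rf /opt/software/hadoop-3.3.1/tmp    # only if hadoop.tmp.dir points here; adjust to your config
#   /opt/software/hadoop-3.3.1/bin/hdfs namenode -format
#   /opt/software/hadoop-3.3.1/sbin/start-dfs.sh
# Once `jps` shows a DataNode, re-run the `put` commands before submitting the job again.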
hadoop@aqua:/opt/software/hadoop-3.3.1$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'        
2021-08-15 15:51:03,421 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-08-15 15:51:03,589 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-08-15 15:51:03,589 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2021-08-15 15:51:04,077 INFO input.FileInputFormat: Total input files to process : 0
2021-08-15 15:51:04,094 INFO mapreduce.JobSubmitter: number of splits:0
2021-08-15 15:51:04,289 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1411199604_0001
2021-08-15 15:51:04,289 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-08-15 15:51:04,576 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2021-08-15 15:51:04,578 INFO mapreduce.Job: Running job: job_local1411199604_0001
2021-08-15 15:51:04,580 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2021-08-15 15:51:04,611 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-15 15:51:04,611 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-15 15:51:04,612 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2021-08-15 15:51:04,765 INFO mapred.LocalJobRunner: Waiting for map tasks
2021-08-15 15:51:04,765 INFO mapred.LocalJobRunner: map task executor complete.
2021-08-15 15:51:04,780 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2021-08-15 15:51:04,782 INFO mapred.LocalJobRunner: Starting task: attempt_local1411199604_0001_r_000000_0
2021-08-15 15:51:04,899 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-15 15:51:04,899 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-15 15:51:04,954 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-15 15:51:04,980 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@79fbeb9a
2021-08-15 15:51:04,983 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-15 15:51:05,049 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=626471744, maxSingleShuffleLimit=156617936, mergeThreshold=413471360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2021-08-15 15:51:05,054 INFO reduce.EventFetcher: attempt_local1411199604_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2021-08-15 15:51:05,077 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
2021-08-15 15:51:05,084 INFO mapred.LocalJobRunner: 
2021-08-15 15:51:05,084 INFO reduce.MergeManagerImpl: finalMerge called with 0 in-memory map-outputs and 0 on-disk map-outputs
2021-08-15 15:51:05,086 INFO reduce.MergeManagerImpl: Merging 0 files, 0 bytes from disk
2021-08-15 15:51:05,087 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2021-08-15 15:51:05,092 INFO mapred.Merger: Merging 0 sorted segments
2021-08-15 15:51:05,092 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
2021-08-15 15:51:05,093 INFO mapred.LocalJobRunner: 
2021-08-15 15:51:05,237 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2021-08-15 15:51:05,289 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/grep-temp-1563657175/_temporary/0/_temporary/attempt_local1411199604_0001_r_000000_0/part-r-00000 could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
2021-08-15 15:51:05,294 INFO mapred.LocalJobRunner: reduce task executor complete.
2021-08-15 15:51:05,320 WARN mapred.LocalJobRunner: job_local1411199604_0001
java.lang.Exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/grep-temp-1563657175/_temporary/0/_temporary/attempt_local1411199604_0001_r_000000_0/part-r-00000 could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:559)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/grep-temp-1563657175/_temporary/0/_temporary/attempt_local1411199604_0001_r_000000_0/part-r-00000 could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2329)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2942)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:915)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:600)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:568)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:552)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1035)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:963)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2966)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1573)
        at org.apache.hadoop.ipc.Client.call(Client.java:1519)
        at org.apache.hadoop.ipc.Client.call(Client.java:1416)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:530)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1898)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1700)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:707)
2021-08-15 15:51:05,588 INFO mapreduce.Job: Job job_local1411199604_0001 running in uber mode : false
2021-08-15 15:51:05,589 INFO mapreduce.Job:  map 0% reduce 0%
2021-08-15 15:51:05,591 INFO mapreduce.Job: Job job_local1411199604_0001 failed with state FAILED due to: NA
2021-08-15 15:51:05,629 INFO mapreduce.Job: Counters: 17
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=0
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=0
2021-08-15 15:51:05,676 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-15 15:51:05,712 INFO input.FileInputFormat: Total input files to process : 0
2021-08-15 15:51:05,716 INFO mapreduce.JobSubmitter: number of splits:0
2021-08-15 15:51:05,761 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local842662335_0002
2021-08-15 15:51:05,761 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-08-15 15:51:05,877 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2021-08-15 15:51:05,877 INFO mapreduce.Job: Running job: job_local842662335_0002
2021-08-15 15:51:05,878 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2021-08-15 15:51:05,879 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-15 15:51:05,879 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-15 15:51:05,879 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2021-08-15 15:51:05,939 INFO mapred.LocalJobRunner: Waiting for map tasks
2021-08-15 15:51:05,939 INFO mapred.LocalJobRunner: map task executor complete.
2021-08-15 15:51:05,941 INFO mapred.LocalJobRunner: Waiting for reduce tasks
2021-08-15 15:51:05,941 INFO mapred.LocalJobRunner: Starting task: attempt_local842662335_0002_r_000000_0
2021-08-15 15:51:05,943 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 2
2021-08-15 15:51:05,943 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-08-15 15:51:05,944 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2021-08-15 15:51:05,944 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@54f0812a
2021-08-15 15:51:05,945 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2021-08-15 15:51:05,946 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=626471744, maxSingleShuffleLimit=156617936, mergeThreshold=413471360, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2021-08-15 15:51:05,956 INFO reduce.EventFetcher: attempt_local842662335_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2021-08-15 15:51:05,959 INFO mapred.LocalJobRunner: 
2021-08-15 15:51:05,959 INFO reduce.MergeManagerImpl: finalMerge called with 0 in-memory map-outputs and 0 on-disk map-outputs
2021-08-15 15:51:05,960 INFO reduce.MergeManagerImpl: Merging 0 files, 0 bytes from disk
2021-08-15 15:51:05,960 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
2021-08-15 15:51:05,960 INFO mapred.Merger: Merging 0 sorted segments
2021-08-15 15:51:05,961 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
2021-08-15 15:51:05,961 INFO mapred.LocalJobRunner: 
2021-08-15 15:51:06,019 INFO mapred.Task: Task:attempt_local842662335_0002_r_000000_0 is done. And is in the process of committing
2021-08-15 15:51:06,023 INFO mapred.LocalJobRunner: 
2021-08-15 15:51:06,023 INFO mapred.Task: Task attempt_local842662335_0002_r_000000_0 is allowed to commit now
2021-08-15 15:51:06,085 INFO output.FileOutputCommitter: Saved output of task 'attempt_local842662335_0002_r_000000_0' to hdfs://localhost:9000/user/hadoop/output
2021-08-15 15:51:06,087 INFO mapred.LocalJobRunner: reduce > reduce
2021-08-15 15:51:06,087 INFO mapred.Task: Task 'attempt_local842662335_0002_r_000000_0' done.
2021-08-15 15:51:06,089 INFO mapred.Task: Final Counters for attempt_local842662335_0002_r_000000_0: Counters: 30
        File System Counters
                FILE: Number of bytes read=562022
                FILE: Number of bytes written=1826724
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=0
                HDFS: Number of bytes written=86
                HDFS: Number of read operations=11
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
                HDFS: Number of bytes read erasure-coded=0
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=0
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=161480704
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=0
2021-08-15 15:51:06,089 INFO mapred.LocalJobRunner: Finishing task: attempt_local842662335_0002_r_000000_0
2021-08-15 15:51:06,089 INFO mapred.LocalJobRunner: reduce task executor complete.
2021-08-15 15:51:06,878 INFO mapreduce.Job: Job job_local842662335_0002 running in uber mode : false
2021-08-15 15:51:06,878 INFO mapreduce.Job:  map 0% reduce 100%
2021-08-15 15:51:06,879 INFO mapreduce.Job: Job job_local842662335_0002 completed successfully
2021-08-15 15:51:06,888 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=562022
                FILE: Number of bytes written=1826724
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=0
                HDFS: Number of bytes written=86
                HDFS: Number of read operations=11
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
                HDFS: Number of bytes read erasure-coded=0
        Map-Reduce Framework
                Combine input records=0
                Combine output records=0
                Reduce input groups=0
                Reduce shuffle bytes=0
                Reduce input records=0
                Reduce output records=0
                Spilled Records=0
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=161480704
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Output Format Counters 
                Bytes Written=0
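Note that although job_local842662335_0002 reports "completed successfully", it is an empty run: "Total input files to process : 0" and "Bytes Written=0" show that nothing was read or written, because the earlier `put` commands never copied the XML files into HDFS (no DataNode was running). A minimal sketch of how to verify and redo the run, assuming the DataNode has since been started, is:

# verify the input files actually exist in HDFS this time
hdfs dfs -ls input
# remove the stale output directory first, otherwise MapReduce refuses to start the job
hdfs dfs -rm -r output
# re-submit the grep example
hadoop jar /opt/software/hadoop-3.3.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'
# inspect the result stored in HDFS
hdfs dfs -cat output/*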

reference

http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
http://dblab.xmu.edu.cn/blog/2441-2/#more-2441
