Tearing Down and Rebuilding the Hadoop Pseudo-Distributed Cluster

 

[root@hadoop004 tmp]# pwd
/tmp
[root@hadoop004 tmp]# ls
Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)>
hadoop-hadoop
hadoop-hadoop-datanode.pid
hadoop-hadoop-namenode.pid
hadoop-hadoop-secondarynamenode.pid
hsperfdata_hadoop
hsperfdata_root
Jetty_0_0_0_0_50070_hdfs____w2cu08
Jetty_0_0_0_0_8042_node____19tj0x
Jetty_0_0_0_0_8088_cluster____u0rgz3
Jetty_hadoop004_50090_secondary____ntsv2n
Jetty_localhost_33763_datanode____476i0r
Jetty_localhost_36695_datanode____.9thzii
Jetty_localhost_38800_datanode____.je4w6e
Jetty_localhost_39467_datanode____1e8ppy
Jetty_localhost_43900_datanode____.w65tr
systemd-private-55d8cded3db44279a0ab9d57c72d8137-ntpd.service-dMsw96
systemd-private-65b7d7273fef4e59aa10d49069bc6909-ntpd.service-iyYhN7
systemd-private-c13948d3311942c8b62ada822de8f3f3-ntpd.service-dKMaKV
systemd-private-d92de6050d6047a4b011d98c85280f87-ntpd.service-VM6of8
yarn-hadoop-nodemanager.pid
yarn-hadoop-resourcemanager.pid

The hsperfdata_hadoop directory contains one file per running JVM owned by the hadoop user, each named after its PID:

[root@hadoop004 hsperfdata_hadoop]# ls
11207  11333  11499  11649  11752

The .pid files record each daemon's process id; the NameNode's (11207) matches one of the entries above:

[root@hadoop004 tmp]# vim hadoop-hadoop-namenode.pid
11207
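These pid files are how the stop scripts find the daemon to kill: hadoop-daemon.sh reads the file back and signals that process. A minimal local illustration of the same pattern (a background sleep stands in for a daemon JVM; the file name here is made up):

```shell
# Simulate the pid-file pattern the daemon scripts use:
# write the PID at start, read it back to stop.
sleep 300 &
echo $! > /tmp/demo-daemon.pid

# This is essentially what "hadoop-daemon.sh stop namenode" does:
kill "$(cat /tmp/demo-daemon.pid)" && rm -f /tmp/demo-daemon.pid
wait 2>/dev/null || true
echo "daemon stopped"
```

This is also why deleting the .pid files while daemons are still running breaks stop-all.sh: the scripts can no longer find which processes to kill.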

 

After stopping the cluster, the .pid files are gone from /tmp:

[root@hadoop004 tmp]# ls
Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)>
hadoop-hadoop
hsperfdata_hadoop
hsperfdata_root
Jetty_0_0_0_0_50070_hdfs____w2cu08
Jetty_hadoop004_50090_secondary____ntsv2n
Jetty_localhost_33763_datanode____476i0r
Jetty_localhost_36695_datanode____.9thzii
Jetty_localhost_38800_datanode____.je4w6e
Jetty_localhost_39467_datanode____1e8ppy
Jetty_localhost_43900_datanode____.w65tr
systemd-private-55d8cded3db44279a0ab9d57c72d8137-ntpd.service-dMsw96
systemd-private-65b7d7273fef4e59aa10d49069bc6909-ntpd.service-iyYhN7
systemd-private-c13948d3311942c8b62ada822de8f3f3-ntpd.service-dKMaKV
systemd-private-d92de6050d6047a4b011d98c85280f87-ntpd.service-VM6of8

and the hsperfdata_hadoop directory is now empty:

[root@hadoop004 hsperfdata_hadoop]# ls

OK, now tear down the existing Hadoop pseudo-distributed cluster. First, confirm that no daemons are still running:

[hadoop@hadoop004 hadoop-2.6.0-cdh5.7.0]$ ps -ef|grep hadoop
root       695     1  0 18:37 ?        00:00:00 /sbin/dhclient -H hadoop004 -1 -q -lf /var/lib/dhclient/dhclient--eth0.lease -pf /var/run/dhclient-eth0.pid eth0
root     10544  9835  0 23:13 pts/1    00:00:00 su - hadoop
hadoop   10545 10544  0 23:13 pts/1    00:00:00 -bash
root     11071 11029  0 23:26 pts/1    00:00:00 su - hadoop
hadoop   11072 11071  0 23:26 pts/1    00:00:00 -bash
hadoop   14001 11072  0 23:36 pts/1    00:00:00 ps -ef
hadoop   14002 11072  0 23:36 pts/1    00:00:00 grep --color=auto hadoop

Delete the leftover /tmp files as the root user:

[root@hadoop004 tmp]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
Next, configure core-site.xml, pinning hadoop.tmp.dir to a directory outside /tmp:

[hadoop@hadoop004 hadoop]$ vim core-site.xml

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop004:9000</value>
</property>

<!-- Pin the Hadoop temp directory. hadoop.tmp.dir is the base setting the
     Hadoop filesystem depends on; many other paths derive from it. If
     hdfs-site.xml does not configure the namenode and datanode storage
     locations, they default to this path. -->
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp</value>
</property>

<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,
        org.apache.hadoop.io.compress.DefaultCodec,
        org.apache.hadoop.io.compress.BZip2Codec,
        org.apache.hadoop.io.compress.SnappyCodec
    </value>
</property>
</configuration>
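Note that hadoop.tmp.dir only moves the HDFS data out of /tmp; the daemon .pid files seen earlier are governed by a separate setting, HADOOP_PID_DIR in hadoop-env.sh (and YARN_PID_DIR in yarn-env.sh for the YARN daemons). If you want those out of /tmp as well, so tmp-cleanup jobs cannot delete them, a sketch (the pids directory path is an assumption; adjust it to your layout):

```shell
# hadoop-env.sh fragment: keep daemon pid files out of /tmp.
# The pids/ path is an example; create it first and make sure the
# hadoop user can write to it. YARN daemons use YARN_PID_DIR similarly.
export HADOOP_PID_DIR=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/pids
```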

Format the NameNode, then start all the daemons:

[hadoop@hadoop004 hadoop]$ hdfs namenode -format
[hadoop@hadoop004 hadoop-2.6.0-cdh5.7.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop004]
hadoop004: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop004.out
hadoop004: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop004.out
Starting secondary namenodes [hadoop004]
hadoop004: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop004.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop004.out
hadoop004: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop004.out
[hadoop@hadoop004 hadoop-2.6.0-cdh5.7.0]$ jps
14343 DataNode
15053 Jps
14217 NameNode
14756 NodeManager
14503 SecondaryNameNode
14653 ResourceManager
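With five daemons expected, the check is easy to script instead of eyeballing jps. A small sketch; it reuses the jps output captured above, and on a live node you would set jps_out=$(jps) instead:

```shell
# Verify every expected daemon name appears in the jps output.
jps_out='14343 DataNode
15053 Jps
14217 NameNode
14756 NodeManager
14503 SecondaryNameNode
14653 ResourceManager'   # on a live node: jps_out=$(jps)

missing=""
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  printf '%s\n' "$jps_out" | grep -qw "$d" || missing="$missing $d"
done
[ -z "$missing" ] && echo "all daemons up" || echo "missing:$missing"
```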
Run a quick WordCount smoke test. Create an input directory and upload a small file:

[hadoop@hadoop004 data]$ hdfs dfs -mkdir -p /data/wc/input
[hadoop@hadoop004 data]$ hdfs dfs -put wc.txt /data/wc/input/
[hadoop@hadoop004 data]$ hdfs dfs -text /data/wc/input/wc.txt
hello	word	hello
hello	hello	hi
son	boy	boy
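Before reading the job's counters, it helps to know what to expect: the three input lines hold 9 words, 5 of them distinct. A coreutils one-liner reproduces the word count locally (a sanity check, not the actual MapReduce job):

```shell
# Local stand-in for wordcount: split on tabs, then count occurrences.
printf 'hello\tword\thello\nhello\thello\thi\nson\tboy\tboy\n' \
  | tr '\t' '\n' | sort | uniq -c
```

hello should come out 4 times and boy twice, with word, hi, and son once each. This matches the counters the job reports below: Map output records=9, Reduce output records=5.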
[hadoop@hadoop004 data]$ hadoop jar /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar wordcount /data/wc/input/  /data/wc/output/
19/04/19 00:03:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/04/19 00:03:06 INFO input.FileInputFormat: Total input paths to process : 1
19/04/19 00:03:06 INFO mapreduce.JobSubmitter: number of splits:1
19/04/19 00:03:06 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1555603118811_0001
19/04/19 00:03:06 INFO impl.YarnClientImpl: Submitted application application_1555603118811_0001
19/04/19 00:03:07 INFO mapreduce.Job: The url to track the job: http://hadoop004:8088/proxy/application_1555603118811_0001/
19/04/19 00:03:07 INFO mapreduce.Job: Running job: job_1555603118811_0001
19/04/19 00:03:14 INFO mapreduce.Job: Job job_1555603118811_0001 running in uber mode : false
19/04/19 00:03:14 INFO mapreduce.Job:  map 0% reduce 0%
19/04/19 00:03:19 INFO mapreduce.Job:  map 100% reduce 0%
19/04/19 00:03:25 INFO mapreduce.Job:  map 100% reduce 100%
19/04/19 00:03:25 INFO mapreduce.Job: Job job_1555603118811_0001 completed successfully
19/04/19 00:03:25 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=71
		FILE: Number of bytes written=223733
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=151
		HDFS: Number of bytes written=42
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2

	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3139
		Total time spent by all reduces in occupied slots (ms)=3389
		Total time spent by all map tasks (ms)=3139
		Total time spent by all reduce tasks (ms)=3389
		Total vcore-seconds taken by all map tasks=3139
		Total vcore-seconds taken by all reduce tasks=3389
		Total megabyte-seconds taken by all map tasks=3214336
		Total megabyte-seconds taken by all reduce tasks=3470336
	Map-Reduce Framework
		Map input records=3
		Map output records=9
		Map output bytes=80
		Map output materialized bytes=67
		Input split bytes=107
		Combine input records=9
		Combine output records=5
		Reduce input groups=5
		Reduce shuffle bytes=67
		Reduce input records=5
		Reduce output records=5
		Spilled Records=10
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=59
		CPU time spent (ms)=1280
		Physical memory (bytes) snapshot=460390400
		Virtual memory (bytes) snapshot=3193577472
		Total committed heap usage (bytes)=319291392
    Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=44
	File Output Format Counters
		Bytes Written=42

[hadoop@hadoop004 data]$ hdfs dfs -ls /data/wc/output/
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2019-04-19 00:03 /data/wc/output/_SUCCESS
-rw-r--r--   1 hadoop supergroup         42 2019-04-19 00:03 /data/wc/output/part-r-00000.snappy

The part file carries a .snappy extension because MapReduce output compression with SnappyCodec is enabled on this cluster (presumably in mapred-site.xml, which is not shown here). hdfs dfs -text decompresses it transparently, whereas -cat would print the raw compressed bytes.

 
