[Pitfall Journey: Hadoop 01] Hadoop 2.10.1 pseudo-distributed setup (configuration updated from 2.4.1) on CentOS 7 with JDK 1.8

1. Prepare the Linux environment

1.0 Virtual machine network settings (define the subnet; the steps below use VMnet1 in host-only mode)

Click the VMware shortcut, right-click it and open the file location -> double-click vmnetcfg.exe -> VMnet1 host-only -> change the subnet IP to 192.168.1.0, subnet mask 255.255.255.0 -> Apply -> OK
Back in Windows -> open Network and Sharing Center -> Change adapter settings -> right-click VMnet1 -> Properties -> double-click IPv4 -> set the Windows-side IP to 192.168.1.100, subnet mask 255.255.255.0 -> OK
In VMware -> My Computer -> select the virtual machine -> right-click -> Settings -> Network Adapter -> Host-only -> OK
(Screenshots: VMware virtual network settings; define the subnet and subnet mask yourself.)

1.1 Change the hostname

vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=itcast    ###
 sudo vi /etc/hosts
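
The snippet above is the CentOS 6 style; on CentOS 7 (the distribution named in the title), the hostname is normally set with hostnamectl instead. A minimal sketch:

	# set the hostname persistently (CentOS 7)
	hostnamectl set-hostname itcast
	# verify
	hostname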


1.2 Change the IP address

	Two ways to do this:
	Option 1: via the Linux GUI (strongly recommended)
		In the Linux GUI -> right-click the two small computer icons in the top-right corner -> Edit Connections -> select the current connection, System eth0 -> Edit -> IPv4 Settings -> set Method to Manual -> Add -> enter IP 192.168.1.101, netmask 255.255.255.0, gateway 192.168.1.1 -> Apply

	Option 2: edit the configuration file (the command-line way)
		vim /etc/sysconfig/network-scripts/ifcfg-eth0
		
		DEVICE="eth0"
		BOOTPROTO="static"               ###
		HWADDR="00:0C:29:3C:BF:E7"
		IPV6INIT="yes"
		NM_CONTROLLED="yes"
		ONBOOT="yes"
		TYPE="Ethernet"
		UUID="ce22eeca-ecde-4536-8cc2-ef0dc36d4a8c"
		IPADDR="192.168.1.101"           ###
		NETMASK="255.255.255.0"          ###
		GATEWAY="192.168.1.1"            ###
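
After editing ifcfg-eth0, restart networking so the new address takes effect (the interface name, e.g. eth0 vs. ens33, depends on your VM; on CentOS 7 this assumes the legacy network service is installed). A sketch:

	# CentOS 6 style
	service network restart
	# CentOS 7 style
	systemctl restart network
	# check the result
	ip addr show eth0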

1.3 Map the hostname to the IP address

	vim /etc/hosts
		
	192.168.1.101	itcast
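
To confirm the mapping works, resolving the hostname should return the IP configured above, for example:

	getent hosts itcast
	ping -c 3 itcast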

1.4 Disable the firewall

	# check the firewall status
	service iptables status
	# stop the firewall
	service iptables stop
	# check whether the firewall starts on boot
	chkconfig iptables --list
	# disable the firewall at boot
	chkconfig iptables off
sudo service iptables stop
sudo service iptables status
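
The iptables service commands above are the CentOS 6 style; on CentOS 7 the default firewall is firewalld, so the equivalent steps would be roughly:

	# CentOS 7 / firewalld
	sudo systemctl status firewalld
	sudo systemctl stop firewalld
	sudo systemctl disable firewalld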

1.5 Reboot Linux

	reboot

2. Install the JDK

2.1 Upload: press Alt+P (opens an sftp window in SecureCRT-style SSH clients), then run: put d:\xxx\yy\ll\jdk-7u_65-i585.tar.gz

2.2 Extract the JDK

	# create the directory
	mkdir /home/hadoop/app
	# extract
	tar -zxvf jdk-7u55-linux-i586.tar.gz -C /home/hadoop/app

2.3 Add Java to the environment variables

	vim /etc/profile
	# append at the end of the file
	export JAVA_HOME=/home/hadoop/app/jdk-7u_65-i585
	export PATH=$PATH:$JAVA_HOME/bin

	# reload the configuration
	source /etc/profile
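
The paths above come from the original JDK 7 notes; with the JDK 8 build used later in this post, the same step would look roughly like this (adjust the path to wherever you extracted the JDK):

	export JAVA_HOME=/home/java/java-se-8u41-ri
	export PATH=$PATH:$JAVA_HOME/bin

	source /etc/profile
	# verify
	java -version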

3. Install Hadoop 2.4.1 (updated here to 2.10.1)

First upload the Hadoop installation package to /home/hadoop/ on the server.
Note: in Hadoop 2.x the configuration files live under $HADOOP_HOME/etc/hadoop.
Pseudo-distributed mode requires modifying five configuration files.
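
Assuming the tarball uses the standard Apache name hadoop-2.10.1.tar.gz, extracting it into the app directory looks like:

	cd /home/hadoop
	tar -zxvf hadoop-2.10.1.tar.gz -C /home/hadoop/app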

3.1 Configure Hadoop

File 1: hadoop-env.sh

	vim hadoop-env.sh
	# around line 27; the original 2.4.1 notes used JDK 7:
	export JAVA_HOME=/usr/java/jdk1.7.0_65
	# for this 2.10.1 / JDK 8 setup use:
	export JAVA_HOME=/home/java/java-se-8u41-ri

File 2: core-site.xml

	<!-- Original 2.4.1 example: URI of the default file system, i.e. the NameNode address -->
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://weekend-1206-01:9000</value>
	</property>
	<!-- Directory for files Hadoop generates at runtime -->
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/home/hadoop/hadoop-2.4.1/tmp</value>
	</property>

The configuration actually used here (fs.default.name is the deprecated alias of fs.defaultFS):

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
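
Once the Hadoop environment variables are set (section 3.2), you can sanity-check that this value is picked up from the client configuration:

	hdfs getconf -confKey fs.defaultFS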

File 3: hdfs-site.xml (defaults are documented in hdfs-default.xml)

	<!-- Original 2.4.1 example: number of HDFS block replicas -->
	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>

The configuration actually used here:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/app/hadoop-2.10.1/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/app/hadoop-2.10.1/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
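
Hadoop creates these directories when the NameNode is formatted and the DataNode first starts, but they can also be created up front to make sure the hadoop user owns them:

	mkdir -p /home/hadoop/app/hadoop-2.10.1/dfs/name
	mkdir -p /home/hadoop/app/hadoop-2.10.1/dfs/data
	mkdir -p /home/hadoop/app/tmp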

File 4: mapred-site.xml (created with mv mapred-site.xml.template mapred-site.xml)

	mv mapred-site.xml.template mapred-site.xml
	vim mapred-site.xml
	<!-- Original 2.4.1 example: run MapReduce on YARN -->
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>

The configuration actually used here:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoop/app/hadoop-2.10.1</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoop/app/hadoop-2.10.1</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hadoop/app/hadoop-2.10.1</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://master:9001</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/home/hadoop/app/hadoop-2.10.1/share/hadoop/mapreduce/*, $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>2</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>256</value>
    </property>
</configuration>

☆☆☆☆ These three parameters must be set; otherwise, when you run hadoop jar, the job hangs because of insufficient resources and never makes progress:

yarn.nodemanager.resource.memory-mb
yarn.nodemanager.resource.cpu-vcores
yarn.scheduler.minimum-allocation-mb

File 5: yarn-site.xml

	<!-- Original 2.4.1 example: address of the ResourceManager -->
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>weekend-1206-01</value>
	</property>
	<!-- How reducers fetch data -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

The configuration actually used here:

<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>

</configuration>
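
After YARN is started (section 3.4), a quick way to confirm that the NodeManager has registered with the ResourceManager, and to inspect its resources, is the yarn CLI (the node-id placeholder below is whatever yarn node -list prints):

	yarn node -list
	yarn node -status <node-id>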

3.2 Add Hadoop to the environment variables

vim /etc/profile
	# original 2.4.1 example
	export JAVA_HOME=/usr/java/jdk1.7.0_65
	export HADOOP_HOME=/itcast/hadoop-2.4.1
	export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile
cat /etc/profile

# what is actually used in this 2.10.1 setup
export JAVA_HOME=/home/java/java-se-8u41-ri
export HADOOP_HOME=/home/hadoop/app/hadoop-2.10.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
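
To confirm the variables took effect, the hadoop command should now resolve from any directory:

	source /etc/profile
	hadoop version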

3.3 Format the NameNode (this initializes the NameNode)

	hdfs namenode -format (hadoop namenode -format)
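
If formatting succeeds, the output should end with a line roughly like the following (the path depends on dfs.namenode.name.dir):

	# ... INFO common.Storage: Storage directory /home/hadoop/app/hadoop-2.10.1/dfs/name has been successfully formatted.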

3.4 Start Hadoop

	Start HDFS first
	sbin/start-dfs.sh

	Then start YARN
	sbin/start-yarn.sh
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-namenode-master.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.10.1/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.10.1/logs/yarn-hadoop-resourcemanager-master.out
localhost: starting nodemanager, logging to /home/hadoop/app/hadoop-2.10.1/logs/yarn-hadoop-nodemanager-master.out
[hadoop@master ~]$ jps
86577 SecondaryNameNode
98658 Jps
111015 QuorumPeerMain
86247 NameNode
110345 QuorumPeerMain
111097 QuorumPeerMain
86393 DataNode
86747 ResourceManager
86861 NodeManager

3.5 Verify that startup succeeded

	Verify with the jps command
	27408 NameNode
	28218 Jps
	27643 SecondaryNameNode
	28066 NodeManager
	27803 ResourceManager
	27512 DataNode

	http://192.168.1.101:50070 (HDFS web UI)
	http://192.168.1.101:8088 (YARN/MR web UI)

http://192.168.25.129:50070/


http://192.168.25.129:8088/cluster


Run the bundled pi example: hadoop jar hadoop-mapreduce-examples-2.10.1.jar pi 2 2

[hadoop@master mapreduce]$ pwd
/home/hadoop/app/hadoop-2.10.1/share/hadoop/mapreduce
[hadoop@master mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.10.1.jar pi 2 2
Number of Maps  = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
21/10/21 18:40:07 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.25.129:8032
21/10/21 18:40:08 INFO input.FileInputFormat: Total input files to process : 2
21/10/21 18:40:09 INFO mapreduce.JobSubmitter: number of splits:2
21/10/21 18:40:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634812594012_0001
21/10/21 18:40:10 INFO conf.Configuration: resource-types.xml not found
21/10/21 18:40:10 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/10/21 18:40:10 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/10/21 18:40:10 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/10/21 18:40:10 INFO impl.YarnClientImpl: Submitted application application_1634812594012_0001
21/10/21 18:40:10 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1634812594012_0001/
21/10/21 18:40:10 INFO mapreduce.Job: Running job: job_1634812594012_0001
21/10/21 18:40:18 INFO mapreduce.Job: Job job_1634812594012_0001 running in uber mode : false
21/10/21 18:40:18 INFO mapreduce.Job:  map 0% reduce 0%
21/10/21 18:40:23 INFO mapreduce.Job:  map 50% reduce 0%
21/10/21 18:40:26 INFO mapreduce.Job:  map 100% reduce 0%
21/10/21 18:40:32 INFO mapreduce.Job:  map 100% reduce 100%
21/10/21 18:40:33 INFO mapreduce.Job: Job job_1634812594012_0001 completed successfully
21/10/21 18:40:33 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=50
                FILE: Number of bytes written=629943
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=526
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=11
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters 
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=4835
                Total time spent by all reduces in occupied slots (ms)=3949
                Total time spent by all map tasks (ms)=4835
                Total time spent by all reduce tasks (ms)=3949
                Total vcore-milliseconds taken by all map tasks=4835
                Total vcore-milliseconds taken by all reduce tasks=3949
                Total megabyte-milliseconds taken by all map tasks=4951040
                Total megabyte-milliseconds taken by all reduce tasks=4043776
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=36
                Map output materialized bytes=56
                Input split bytes=290
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=56
                Reduce input records=4
                Reduce output records=0
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=239
                CPU time spent (ms)=1490
                Physical memory (bytes) snapshot=801165312
                Virtual memory (bytes) snapshot=6371180544
                Total committed heap usage (bytes)=493355008
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=236
        File Output Format Counters 
                Bytes Written=97
Job Finished in 25.23 seconds
Estimated value of Pi is 4.00000000000000000000
[hadoop@master mapreduce]$ 

4. Configure passwordless SSH login

# generate the SSH key pair for passwordless login
# go to the .ssh directory under the home directory
cd ~/.ssh

ssh-keygen -t rsa   (press Enter four times)
This command generates two files: id_rsa (private key) and id_rsa.pub (public key).
Copy the public key to the machine you want to log into without a password:
ssh-copy-id localhost


The manual alternative (append the public key to authorized_keys yourself):

ssh master
ssh-keygen -t rsa            # accept the default key file /home/hadoop/.ssh/id_rsa
cd /home/hadoop/.ssh/
ll -a
touch authorized_keys
chmod 600 authorized_keys
cat id_rsa.pub >> authorized_keys
ssh master                   # should now log in without a password

5. Add a user to sudoers

Now give the user jack sudo privileges.
1. Switch to the superuser root:
   $ su root
2. Check the permissions of /etc/sudoers; they are currently 440:
   $ ls -all /etc/sudoers
   -r--r----- 1 root root 744 Jun  8 10:29 /etc/sudoers
3. Change the permissions to 777 so the file can be edited:
   $ chmod 777 /etc/sudoers
4. Edit /etc/sudoers:
   $ vi /etc/sudoers
5. Below the line "root ALL=(ALL:ALL) ALL", add:
   jack   ALL=(ALL)   ALL
   then save and exit.
   The first ALL refers to the hosts the rule applies to (it can be replaced with a hostname); it says jack may run the commands on this host.
   The second ALL, in parentheses, is the target user, i.e. whose identity the commands run as.
   The last ALL is the set of commands allowed.
   No further detail on this here.
6. Change the permissions of /etc/sudoers back to 440:
   $ chmod 440 /etc/sudoers
7. Done; switch to the jack user and test (a safer visudo-based sketch follows).
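
For reference, the usual safer route is visudo, which edits /etc/sudoers with a syntax check and avoids changing its permissions at all; a minimal sketch:

	su root
	visudo
	# add below "root ALL=(ALL) ALL":
	#   jack    ALL=(ALL)       ALL
	# then test:
	su - jack
	sudo whoami    # should print: root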

6. Hadoop HDFS and jar commands

/home/java/java-se-8u41-ri/bin                     # where the JDK binaries live

hadoop fs -put word.txt  /wordcount/input          # upload a local file to HDFS

hadoop jar app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output

export HADOOP_ROOT_LOGGER=DEBUG,console            # turn on debug logging to the console

hdfs dfsadmin -safemode leave                      # force the NameNode out of safe mode
stop-all.sh
start-all.sh

hadoop fs -mkdir /wordcount/input                  # create a directory in HDFS

hadoop fs -rm -r /wordcount/output                 # delete a directory recursively

hadoop fs -chmod -R 777 /                          # open up permissions (not for production)

hadoop fs -df -h /wordcount                        # file system free space
hadoop fs -du -s -h hdfs://master:9000/*           # per-path usage summary

hadoop fs -rm -r /..

./hdfs dfs -chmod -R 755 /tmp

1.0 Show help
	hadoop fs -help <cmd>
1.1 Upload a file
	hadoop fs -put <local file> <HDFS path>
1.2 View file contents
	hadoop fs -cat <HDFS path>
1.3 List files
	hadoop fs -ls /
1.4 Download a file
	hadoop fs -get <HDFS path> <local file>
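
Putting the commands above together, a typical end-to-end wordcount run on this 2.10.1 install (assuming a local word.txt) would look roughly like:

	# create the input directory and upload the data
	hadoop fs -mkdir -p /wordcount/input
	hadoop fs -put word.txt /wordcount/input
	# run the bundled wordcount example
	hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1.jar wordcount /wordcount/input /wordcount/output
	# inspect the result
	hadoop fs -cat /wordcount/output/part-r-00000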