hadoop overview:
1.standalone|local mode: everything runs on one machine, on the local disk; the xml configuration files described below contain
nothing!
Uses the local file system.
No separate daemons need to be started.
2.pseudo-distributed mode
Equivalent to fully distributed, but with only one node.
SSH: //Secure Shell
//public + private key pair
//server : sshd, check with ps -Af | grep sshd
//client : ssh
//ssh-keygen : generates the public/private key pair.
//authorized_keys must have permission 644
//ssh 192.168.231.201, answer yes at the first host-key prompt
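Put together, the pseudo-mode ssh setup on a single machine looks like this (a minimal sketch, assuming the centos user):
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$>cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$>chmod 644 ~/.ssh/authorized_keys
$>ssh localhost //answer yes to the host-key prompt on first connect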
[configuration files]
1.core-site.xml //fs.defaultFS=hdfs://<host where the namenode runs>/
2.hdfs-site.xml //replication=1 (number of hdfs replicas)
3.mapred-site.xml //mapreduce framework configuration
4.yarn-site.xml //resourcemanager and nodemanager configuration
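A minimal sketch of the first two files for pseudo mode (assuming HDFS runs on localhost; the fully distributed versions appear later in these notes):
[core-site.xml]
	<?xml version="1.0"?>
	<configuration>
		<property>
			<name>fs.defaultFS</name>
			<value>hdfs://localhost/</value>
		</property>
	</configuration>
[hdfs-site.xml]
	<?xml version="1.0"?>
	<configuration>
		<property>
			<name>dfs.replication</name>
			<value>1</value>
		</property>
	</configuration>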
3.full distributed mode
Make the command prompt show the full working directory
---------------------------
1.Edit the profile file and add the PS1 environment variable
[/etc/profile]
export PS1='[\u@\h `pwd`]\$'
2.source it
$>source /etc/profile
Configure hadoop with symbolic links so the three configuration flavors can coexist.
----------------------------------------------------
1.Create three configuration directories, each with the same contents as the hadoop directory
${hadoop_home}/etc/local
${hadoop_home}/etc/pesudo
${hadoop_home}/etc/full
2.Create the symbolic link (run inside ${hadoop_home}/etc)
$>ln -s pesudo hadoop
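Switching to another flavor later is just repointing the link, e.g.:
$>ln -sfn full hadoop //-f replaces the existing link, -n treats 'hadoop' as the link itself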
3.Format hdfs
$>hadoop namenode -format
4.Edit the hadoop env file and set the JAVA_HOME environment variable explicitly
[${hadoop_home}/etc/hadoop/hadoop-env.sh]
...
export JAVA_HOME=/root/soft/jdk
...
5.Start all of hadoop's processes
$>start-all.sh //(start-dfs.sh + start-yarn.sh = start-all.sh)
6.After startup, the following processes should be present
$>jps //lists the Java processes
processes managed by hdfs:
1.33702 NameNode
2.33792 DataNode
3.33954 SecondaryNameNode
processes managed by yarn:
1.29041 ResourceManager
2.34191 NodeManager
7.Inspect the hdfs file system
$>hdfs dfs -lsr / //recursive listing (newer releases prefer -ls -R)
8.Create a directory
$>hdfs dfs -mkdir -p /user/centos/hadoop //choose your own path
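A quick round trip through the new directory (a.txt is just a hypothetical local file):
$>echo hello > a.txt
$>hdfs dfs -put a.txt /user/centos/hadoop/
$>hdfs dfs -cat /user/centos/hadoop/a.txt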
9.View hadoop's file system through the web UI
http://localhost:50070/
10.Stop all hadoop processes
$>stop-all.sh
11.centos firewall operations
[centos 6.5 and earlier: the service is iptables, not firewalld]
$>sudo service iptables stop //stop the service
$>sudo service iptables start //start the service
$>sudo service iptables status //check the status
[centos7]
$>sudo systemctl enable firewalld.service //enable start at boot
$>sudo systemctl disable firewalld.service //disable start at boot
$>sudo systemctl start firewalld.service //start the firewall
$>sudo systemctl stop firewalld.service //stop the firewall
$>sudo systemctl status firewalld.service //check the firewall status
[start at boot, centos 6]
$>sudo chkconfig iptables on //enable start at boot
$>sudo chkconfig iptables off //disable start at boot
hadoop ports
-----------------
50070 //namenode http port
50075 //datanode http port
50090 //2namenode http port
8020 //namenode rpc port
50010 //datanode rpc port
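To confirm a daemon is actually listening on its port after startup (assumes net-tools is installed):
$>netstat -tlnp | grep 50070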
hadoop's four modules
-------------------
common
hdfs //namenode + datanode + secondarynamenode
mapred
yarn //resourcemanager + nodemanager
startup scripts
-------------------
1.start-all.sh //start all processes
2.stop-all.sh //stop all processes
3.start-dfs.sh //start the hdfs processes only
4.start-yarn.sh //start the yarn processes only
[hdfs] start-dfs.sh stop-dfs.sh
NN
DN
2NN
[yarn] start-yarn.sh stop-yarn.sh
RM
NM
change the hostname
-------------------
1./etc/hostname
s201
2./etc/hosts
127.0.0.1 localhost
192.168.231.201 s201
192.168.231.202 s202
192.168.231.203 s203
192.168.231.204 s204
fully distributed
--------------------
1.Clone 3 clients (centos7)
Right-click centos-7 --> Manage -> Clone -> ... -> Full clone
2.Start the clients
3.Enable the shared folders on the guest machines.
4.Edit the hostname and ip address files
[/etc/hostname]
s202
[/etc/sysconfig/network-scripts/ifcfg-ethxxxx]
...
IPADDR=..
5.Restart the network service
$>sudo service network restart
6.Edit the /etc/resolv.conf file
nameserver 192.168.231.2
7.Repeat steps 3 ~ 6 on each clone.
Prepare ssh on the fully distributed hosts
-------------------------
1.Delete /home/centos/.ssh/* on all hosts
2.Generate a key pair on host s201
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
3.Remote-copy s201's public key file id_rsa.pub to hosts s201 ~ s204,
placing it at /home/centos/.ssh/authorized_keys (then verify the logins, as shown below)
$>scp id_rsa.pub centos@s201:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s202:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s203:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub centos@s204:/home/centos/.ssh/authorized_keys
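Every login should now succeed without a password prompt; a quick check from s201 (first connections will still ask to accept each host key):
$>for h in s201 s202 s203 s204 ; do ssh $h hostname ; done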
4.Write the fully distributed configuration (${hadoop_home}/etc/hadoop/)
1) [core-site.xml]
	<?xml version="1.0" encoding="UTF-8"?>
	<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
	<configuration>
		<property>
			<name>fs.defaultFS</name>
			<value>hdfs://s201/</value>
		</property>
	</configuration>
2) [hdfs-site.xml]
	<?xml version="1.0" encoding="UTF-8"?>
	<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
	<configuration>
		<property>
			<name>dfs.replication</name>
			<value>3</value>
		</property>
	</configuration>
3) [mapred-site.xml]
	<configuration>
		<property>
			<name>mapreduce.framework.name</name>
			<value>yarn</value>
		</property>
	</configuration>
4) [yarn-site.xml]
	<?xml version="1.0"?>
	<configuration>
		<property>
			<name>yarn.resourcemanager.hostname</name>
			<value>s201</value>
		</property>
		<property>
			<name>yarn.nodemanager.aux-services</name>
			<value>mapreduce_shuffle</value>
		</property>
	</configuration>
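One file these notes don't list: start-dfs.sh and start-yarn.sh read ${hadoop_home}/etc/hadoop/slaves to decide where to start the DataNodes and NodeManagers. Assuming s202 ~ s204 are the worker nodes:
[slaves]
s202
s203
s204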
[hadoop-env.sh]
...
export JAVA_HOME=/soft/jdk
...
5.Distribute the configuration
$>cd /soft/hadoop/etc/
$>scp -r full centos@s202:/soft/hadoop/etc/
$>scp -r full centos@s203:/soft/hadoop/etc/
$>scp -r full centos@s204:/soft/hadoop/etc/
6.Delete the symbolic links
$>cd /soft/hadoop/etc
$>rm hadoop
$>ssh s202 rm /soft/hadoop/etc/hadoop
$>ssh s203 rm /soft/hadoop/etc/hadoop
$>ssh s204 rm /soft/hadoop/etc/hadoop
7.Create the symbolic links
$>cd /soft/hadoop/etc/
$>ln -s full hadoop
$>ssh s202 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s203 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
$>ssh s204 ln -s /soft/hadoop/etc/full /soft/hadoop/etc/hadoop
8.Delete the temporary directories
$>cd /tmp
$>rm -rf hadoop-centos
$>ssh s202 rm -rf /tmp/hadoop-centos
$>ssh s203 rm -rf /tmp/hadoop-centos
$>ssh s204 rm -rf /tmp/hadoop-centos
9.Delete the hadoop logs
$>cd /soft/hadoop/logs
$>rm -rf *
$>ssh s202 rm -rf /soft/hadoop/logs/*
$>ssh s203 rm -rf /soft/hadoop/logs/*
$>ssh s204 rm -rf /soft/hadoop/logs/*
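Steps 8 and 9 (and 6 ~ 7) repeat the same command per host; as a loop sketch, assuming passwordless ssh and the same /soft layout everywhere:
$>for h in s202 s203 s204 ; do ssh $h "rm -rf /tmp/hadoop-centos /soft/hadoop/logs/*" ; done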
10.Format the file system (on the namenode, s201)
$>hadoop namenode -format
11.Start the hadoop processes
$>start-all.sh
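After start-all.sh, jps on every node shows whether the daemons landed where expected: s201 should list NameNode, SecondaryNameNode and ResourceManager, while s202 ~ s204 each list DataNode and NodeManager (this relies on the [slaves] file shown earlier). A quick check (if jps isn't found over ssh, give its full path under $JAVA_HOME/bin):
$>for h in s201 s202 s203 s204 ; do echo ==== $h ==== ; ssh $h jps ; done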
rsync
------------------
Install the rsync command on all four machines.
Remote synchronization tool.
$>sudo yum install rsync
Also set up passwordless login for the root user.
Write two helper scripts (a sketch follows below):
1.xcall.sh
2.xsync.sh
xsync.sh /home/etc/a.txt //which runs, per host, something like:
rsync -lr /home/etc/a.txt centos@s202:/home/etc
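A sketch of xsync.sh under these assumptions: the argument is an absolute path, the workers are s202 ~ s204, and passwordless ssh/rsync is already set up. xcall.sh is the same loop, running an arbitrary command on each host instead of rsync.
[xsync.sh]
#!/bin/bash
# distribute one file or directory to the same path on every other node
if [ $# -lt 1 ] ; then
  echo "usage: xsync.sh <absolute path>"
  exit 1
fi
p=$1
dir=`dirname $p`
fname=`basename $p`
for host in s202 s203 s204 ; do
  echo ===== $host =====
  rsync -lr $dir/$fname centos@$host:$dir
done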