CentOS 7 + Hadoop 3.3 single-node deployment guide

System settings

  • Firewall
systemctl stop firewalld && systemctl disable firewalld
  • SELinux
# /etc/selinux/config
SELINUX=disabled
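Editing the file only takes effect after a reboot. The sketch below (an addition to the original notes; the `sed` assumes the stock CentOS 7 file still reads `SELINUX=enforcing`) applies the change immediately and persists it:

```shell
# Switch the running system to permissive mode (no reboot required)
setenforce 0
# Persist the change across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Verify: prints "Permissive" now, "Disabled" after the next reboot
getenforce
```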
  • Static IP address
/etc/sysconfig/network-scripts/ifcfg-eth0
  • Clock and timezone
yum install chrony -y
systemctl start chronyd && systemctl enable chronyd

timedatectl status
timedatectl list-timezones | grep Shanghai
timedatectl set-timezone Asia/Shanghai
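To confirm chrony is actually synchronizing (chronyc ships with the chrony package installed above):

```shell
# '^*' in the first column marks the source currently synced against
chronyc sources
# Offset and drift relative to the reference clock
chronyc tracking
```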
  • Hostname and hosts resolution
hostnamectl set-hostname <your hostname>

/etc/hosts
  • User and sudo privileges
useradd hadoop
echo toor3412 | passwd hadoop --stdin

# vim /etc/sudoers   # prefer visudo, which validates the syntax before saving
hadoop  ALL=(ALL)       NOPASSWD:ALL
  • SSH key authentication
ssh-keygen -t rsa
ssh-copy-id hadoop@hadoop-1
  • Kernel tuning
/etc/sysctl.conf
/etc/security/limits.conf
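The notes only name the two files. The entries below are illustrative values often raised on Hadoop nodes; they are assumptions, not tuned recommendations:

```
# /etc/sysctl.conf (apply with: sysctl -p)
vm.swappiness = 0
net.core.somaxconn = 1024

# /etc/security/limits.conf (raise per-user limits for the hadoop account)
hadoop  soft  nofile  65536
hadoop  hard  nofile  65536
hadoop  soft  nproc   65536
hadoop  hard  nproc   65536
```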

Install the JDK

yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

Environment variables

# vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java
export PATH=$PATH:$JAVA_HOME/bin
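A quick check that the variables took effect in the current shell (on CentOS 7, /usr/lib/jvm/java is the alternatives-managed symlink installed by the OpenJDK packages):

```shell
source /etc/profile
echo "$JAVA_HOME"    # expect /usr/lib/jvm/java
java -version        # expect an openjdk 1.8.0 version banner
```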

Upload and unpack the installation package

tar zxvf hadoop-3.3.0.tar.gz -C /usr/local
ln -s /usr/local/hadoop-3.3.0 /usr/local/hadoop
chown -R hadoop:hadoop /usr/local/hadoop-3.3.0   # the hadoop user must be able to write here (the HDFS data dir lives under this tree)

Configuration files

Config directory: /usr/local/hadoop/etc/hadoop

  • hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/usr/local/hadoop
  • core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-1:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/data/tempdir</value>
    </property>
</configuration>
  • hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
  • mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
  • yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
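Once the four files are in place, they can be sanity-checked before starting any daemon; `hdfs getconf` is part of the Hadoop distribution:

```shell
cd /usr/local/hadoop
# Each command echoes the value Hadoop actually resolved from the XML files
bin/hdfs getconf -confKey fs.defaultFS     # hdfs://hadoop-1:8020
bin/hdfs getconf -confKey dfs.replication  # 1
```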

Format the filesystem

As the regular hadoop user:

$ bin/hdfs namenode -format

......
2020-12-13 23:25:35,773 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1752870977-192.168.44.158-1607873135751
2020-12-13 23:25:35,796 INFO common.Storage: Storage directory /usr/local/hadoop/data/tempdir/dfs/name has been successfully formatted.
2020-12-13 23:25:35,835 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/data/tempdir/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2020-12-13 23:25:36,040 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/data/tempdir/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2020-12-13 23:25:36,079 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2020-12-13 23:25:36,091 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2020-12-13 23:25:36,092 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-1/192.168.44.158
************************************************************/

Start the NameNode and DataNode

As the regular hadoop user:

$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop-1]
Starting datanodes
Starting secondary namenodes [hadoop-1]

Start YARN

As the regular hadoop user:

$ ./sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers

Start the JobHistoryServer

$ ./bin/mapred --daemon start historyserver

$ ./bin/mapred --daemon stop historyserver

Processes

$ jps
15957 DataNode
19494 Jps
19464 JobHistoryServer
16155 SecondaryNameNode
18413 NodeManager
15854 NameNode
18302 ResourceManager

Listening ports

$ netstat -tlunp | grep java
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:37144           0.0.0.0:*               LISTEN      15356/java
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      15251/java
tcp        0      0 0.0.0.0:13562           0.0.0.0:*               LISTEN      15356/java
tcp        0      0 0.0.0.0:8030            0.0.0.0:*               LISTEN      15251/java
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      15251/java
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      15251/java
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      15251/java
tcp        0      0 127.0.0.1:45734         0.0.0.0:*               LISTEN      15957/java
tcp        0      0 0.0.0.0:9864            0.0.0.0:*               LISTEN      15957/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      15356/java
tcp        0      0 0.0.0.0:9866            0.0.0.0:*               LISTEN      15957/java
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      15356/java
tcp        0      0 0.0.0.0:9867            0.0.0.0:*               LISTEN      15957/java
tcp        0      0 0.0.0.0:9868            0.0.0.0:*               LISTEN      16155/java
tcp        0      0 0.0.0.0:9870            0.0.0.0:*               LISTEN      15854/java
tcp        0      0 192.168.44.158:8020     0.0.0.0:*               LISTEN      15854/java

Create HDFS directories

$ ./bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/hadoop
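With the user directory created, a write/read round-trip is a quick way to confirm HDFS works end to end (run from /usr/local/hadoop, matching the relative paths above):

```shell
# Upload a local file, list the directory, then read the file back
./bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/
./bin/hdfs dfs -ls /user/hadoop
./bin/hdfs dfs -cat /user/hadoop/core-site.xml
```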

Web UIs

NameNode - http://localhost:9870/
ResourceManager - http://localhost:8088/
JobHistoryServer - http://localhost:19888/

Run a MapReduce test

/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar pi 3 3

The two arguments are the number of map tasks and the number of samples per map; the job prints a rough Monte Carlo estimate of pi on completion.