Big Data (Part 1): Deploying a Hadoop 3.2.1 Cluster

Environment:

OS: CentOS 8

JDK: 13

Hadoop: 3.2.1

All of the commands below can be executed as-is, in one pass:


1. System configuration
echo "192.168.1.154 hadoop1"  >> /etc/hosts
echo "192.168.1.155 hadoop2"  >> /etc/hosts
echo "192.168.1.156 hadoop3"  >> /etc/hosts
systemctl stop firewalld
systemctl disable firewalld
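A quick sanity check that the host entries landed and the firewall is really off (this sketch is not part of the original steps):

```shell
# Confirm each node alias is present in /etc/hosts,
# and that firewalld is no longer active.
for h in hadoop1 hadoop2 hadoop3; do
  grep -q " $h$" /etc/hosts && echo "$h: ok" || echo "$h: MISSING from /etc/hosts"
done
systemctl is-active firewalld   # expect "inactive"
```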

2. SSH trust (passwordless login between the nodes)
Reference: https://blog.csdn.net/jiangxuexuanshuang/article/details/103972658
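The linked article covers passwordless SSH; a minimal sketch of the usual approach follows (the host names and the root user are assumptions carried over from this post, not from the article itself):

```shell
# Run on each of the three nodes: generate a key pair (no passphrase),
# then push the public key to every node, including the local one.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in hadoop1 hadoop2 hadoop3; do
  ssh-copy-id root@"$h"
done
```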

3. Extract and configure Hadoop (the configuration is identical on all three servers)
mkdir -p /data/data/hadoop/{data,tmp,namenode,src}

cd /data/soft/
tar xzvf hadoop-3.2.1.tar.gz
mv hadoop-3.2.1 /data/app/

cd /data/app/hadoop-3.2.1
echo "export JAVA_HOME=/data/app/jdk-13.0.1" >> etc/hadoop/hadoop-env.sh

echo "export JAVA_HOME=/data/app/jdk-13.0.1" >> etc/hadoop/yarn-env.sh

echo "export JAVA_HOME=/data/app/jdk-13.0.1" >> etc/hadoop/mapred-env.sh

echo "hadoop1"  >> etc/hadoop/workers
echo "hadoop2"  >> etc/hadoop/workers
echo "hadoop3"  >> etc/hadoop/workers

cat > etc/hadoop/core-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/data/hadoop/tmp</value>
  </property>
</configuration>
EOF


cat > etc/hadoop/hdfs-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/data/hadoop/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/data/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
EOF

cat > etc/hadoop/yarn-site.xml <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop1:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop1:8088</value>
  </property>
</configuration>
EOF

cat > tempinfo <<EOF
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
EOF

sed -i '2 r tempinfo' sbin/start-dfs.sh
sed -i '2 r tempinfo' sbin/stop-dfs.sh

cat > tempinfo <<EOF
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
EOF
sed -i '2 r tempinfo' sbin/start-yarn.sh
sed -i '2 r tempinfo' sbin/stop-yarn.sh
rm -f tempinfo

echo "export HADOOP_HOME=/data/app/hadoop-3.2.1/" >> /etc/profile
echo "export PATH=\$PATH:\$HADOOP_HOME/bin" >> /etc/profile
source /etc/profile
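A quick way to confirm the environment variables took effect in the current shell (a sketch, assuming the paths exported above):

```shell
# hadoop should now resolve from PATH; print the home dir and binary path.
echo "$HADOOP_HOME"     # expect /data/app/hadoop-3.2.1/
command -v hadoop       # expect /data/app/hadoop-3.2.1/bin/hadoop
hadoop version
```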

4. Initialize the cluster (run on hadoop1, the NameNode, only):
hdfs namenode -format

5. Start the cluster (on hadoop1)
sbin/start-all.sh
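Once start-all.sh finishes, the daemons can be verified with jps; given the configs in step 3, hadoop1 should additionally show NameNode, SecondaryNameNode, and ResourceManager, while all three nodes run DataNode and NodeManager. A sketch that checks every node (assumes the SSH trust from step 2):

```shell
# List the running Java daemons on each node over SSH.
for h in hadoop1 hadoop2 hadoop3; do
  echo "== $h =="
  ssh "$h" jps
done
```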


6. Test
cd /tmp
echo "hello hadoop" > hadoop-test.txt
# upload to HDFS
hadoop fs -put hadoop-test.txt /
# list the HDFS root directory
hadoop fs -ls /
rm hadoop-test.txt
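To round out the test, the uploaded file can be read back and then removed from HDFS (a follow-up sketch, not in the original steps):

```shell
# Read the file back out of HDFS, then delete it.
hadoop fs -cat /hadoop-test.txt   # expect: hello hadoop
hadoop fs -rm /hadoop-test.txt
```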
