IoT Architecture: Hadoop

This article walks through installing and configuring Hadoop on a Linux cluster: editing the hosts file, installing the JDK, setting up user permissions, configuring passwordless SSH login, setting the environment variables for the Hadoop components, and starting and stopping the distributed services. It closes with a tour of common HDFS commands.

Modify the /etc/hosts file
192.168.107.197 node1
192.168.107.196 node2
192.168.107.195 node3
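
These name mappings must be present on every node, not just node1. A minimal sketch for pushing the file to the other two machines (assuming root SSH access to the workers):

for h in node2 node3; do
  scp /etc/hosts root@$h:/etc/hosts
done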

Create a user and add it to a group
groupadd hadoop
useradd -g hadoop hduser
passwd hduser
vim /etc/sudoers   # visudo is the safer way to edit this file
hduser ALL=(ALL) ALL
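
To confirm the grant took effect, you can list hduser's sudo privileges (run as root):

sudo -l -U hduser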

Install the JDK
rpm -ivh jdk-8u171-linux-x64.rpm

vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64
export CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile

java -version
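
If the RPM installed correctly and /etc/profile has been sourced, the output should include a line like:

java version "1.8.0_171"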

Configure passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
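
After the key has been copied to all three nodes, a quick check that key-based login works everywhere:

for h in node1 node2 node3; do
  ssh $h hostname    # should print each hostname without asking for a password
done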

Fully distributed Hadoop installation

cd /home/hduser
tar zxf hadoop-2.6.5.tar.gz
mv hadoop-2.6.5 hadoop

Hadoop environment variables
vim /etc/profile
#hadoop
export HADOOP_HOME=/home/hduser/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
source /etc/profile
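
A quick sanity check that the variable took effect:

hadoop version    # should report Hadoop 2.6.5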

Configure Hadoop:
vim /home/hduser/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64

vim /home/hduser/hadoop/etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64

vim /home/hduser/hadoop/etc/hadoop/slaves
node2
node3
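
The slaves file lists the worker hosts: the start scripts launch a DataNode (and later a NodeManager) on each host named here, while node1 itself runs the NameNode, SecondaryNameNode, and ResourceManager.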

vim /home/hduser/hadoop/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hduser/hadoop/tmp</value>
  </property>
</configuration>

vim /home/hduser/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node1:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
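
Hadoop will create the tmp, name, and data directories referenced above when the NameNode is formatted and the daemons start, but pre-creating them makes permission problems easier to spot (a sketch, run as hduser on each node):

mkdir -p /home/hduser/hadoop/tmp /home/hduser/hadoop/dfs/name /home/hduser/hadoop/dfs/data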

# mapred-site.xml is not shipped with Hadoop 2.6.5; create it from the template first
cp /home/hduser/hadoop/etc/hadoop/mapred-site.xml.template /home/hduser/hadoop/etc/hadoop/mapred-site.xml
vim /home/hduser/hadoop/etc/hadoop/mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>node1:19888</value>
  </property>
</configuration>
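
Note that the JobHistory server is not started by start-dfs.sh or start-yarn.sh; to make the two history addresses above reachable, start it separately:

sbin/mr-jobhistory-daemon.sh start historyserver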

vim /home/hduser/hadoop/etc/hadoop/yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>node1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>node1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>node1:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>node1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>node1:8088</value>
  </property>
</configuration>

scp -r /home/hduser/hadoop node2:/home/hduser
scp -r /home/hduser/hadoop node3:/home/hduser
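
The JDK and the /etc/profile additions have to exist on node2 and node3 as well; a sketch for replicating them (assuming the JDK RPM sits in the current directory and root access to the workers):

for h in node2 node3; do
  scp jdk-8u171-linux-x64.rpm root@$h:/tmp/
  ssh root@$h rpm -ivh /tmp/jdk-8u171-linux-x64.rpm
  scp /etc/profile root@$h:/etc/profile
done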

Verify the installation and configuration:
cd /home/hduser/hadoop
bin/hdfs namenode -format
sbin/start-dfs.sh

jps
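
After start-dfs.sh, jps on node1 should show NameNode and SecondaryNameNode, and on node2/node3 a DataNode; once start-yarn.sh has run, a ResourceManager appears on node1 and a NodeManager on each worker.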

sbin/start-yarn.sh

sbin/start-all.sh   # equivalent to start-dfs.sh + start-yarn.sh (deprecated in 2.x but still works)

bin/hdfs dfsadmin -report
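
The report should show two live DataNodes (node2 and node3); zero live nodes usually points at firewall rules or a hostname mismatch in /etc/hosts.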
http://192.168.107.197:50070 (NameNode web UI; the YARN ResourceManager UI is at http://192.168.107.197:8088)
sbin/stop-all.sh

mkdir file
cd file
echo "Hello World hi HADOOP" > file1.txt
echo "Hello hadoop hi CHINA" > file2.txt
cd ..
sbin/start-all.sh
bin/hadoop fs -mkdir /input2
bin/hadoop fs -put file/file* /input2
bin/hadoop fs -ls /input2
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input2/ /output2/wordcount1
bin/hadoop fs -cat /output2/wordcount1/*
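
With the two sample files above, the result should look like the following (WordCount is case-sensitive and returns keys in byte order, so the upper-case words sort first):

CHINA	1
HADOOP	1
Hello	2
World	1
hadoop	1
hi	2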

Common HDFS commands:
hdfs fsck / -files -blocks
sbin/start-balancer.sh
hadoop fs -mkdir /user
hadoop fs -mkdir -p /user/hadoop/dir1 /user/hadoop/dir2   # -p creates the missing parent /user/hadoop
hadoop fs -ls /input2/file1.txt
hadoop fs -ls /input2/
hadoop fs -cat /input2/file1.txt /input2/file2.txt
File transfer
hadoop fs -put /home/hduser/file/file1.txt /input2
hadoop fs -put /home/hduser/file/file1.txt /home/hduser/file/file2.txt /input2
hadoop fs -get /input2/file1.txt $HOME/file.txt
hadoop fs -mv /input2/file1.txt /input2/file2.txt /user/hadoop/dir1
hadoop fs -cp /input2/file1.txt /input2/file2.txt /user/hadoop/dir1
hadoop fs -cp file:///file1.txt file:///file2.txt file:///tmp
hadoop fs -rm /input2/file3.txt
hadoop fs -rmr /input2   # deprecated; hadoop fs -rm -r /input2 is the recommended form
hadoop fs -test -e /input2/file3.txt
hadoop fs -test -z /input2/file1.txt
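
-test prints nothing and answers through its exit status (0 for true): -e checks existence, -z checks for a zero-length file. A typical use:

hadoop fs -test -e /input2/file3.txt && echo "exists" || echo "missing"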
