Spark and Zeppelin in Practice, Part 1: Installing Hadoop

I. Install the JDK

Download JDK 8 from: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

After downloading, install the RPM:

rpm -ivh jdk-8u112-linux-x64.rpm

Set the JDK environment variables:

export JAVA_HOME=/usr/java/jdk1.8.0_112
export CLASSPATH=$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

II. Install Hadoop

1. DNS binding

Edit /etc/hosts (vi /etc/hosts) and add a line mapping the master node's IP to its hostname (here the master's IP is 192.168.80.100):

192.168.80.100 IM-SJ01-Server18

2. Passwordless SSH login

cd ~/.ssh
ssh-keygen -t rsa
cat id_rsa.pub >> authorized_keys
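The same setup can be scripted with the permission bits sshd actually requires; a group-writable .ssh or authorized_keys silently breaks passwordless login. A minimal sketch — it uses a scratch directory so it is safe to run anywhere; point SSH_DIR at ~/.ssh for real use:

```shell
# Scratch dir stands in for ~/.ssh so the snippet is safe to run as-is.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                                # .ssh itself must be 0700
ssh-keygen -t rsa -N "" -q -f "$SSH_DIR/id_rsa"     # no passphrase, no prompts
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"                # authorized_keys must be 0600
stat -c '%a' "$SSH_DIR" "$SSH_DIR/authorized_keys"  # prints 700 then 600
```

After pointing this at the real ~/.ssh, `ssh localhost` should log in without a password prompt.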

3. Install Hadoop

# Releases: http://hadoop.apache.org/releases.html
# wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

cd /home/ztgame/soft
tar zxvf hadoop-2.7.3.tar.gz
ln -s /home/ztgame/soft/hadoop-2.7.3 /home/ztgame/soft/hadoop

4. Configuration

1) Set the Hadoop environment variables

Edit ~/.bash_profile (or /etc/profile):

export HADOOP_HOME=/home/ztgame/soft/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

Source the file, then verify:

echo $HADOOP_HOME

2) Edit hadoop-env.sh

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Change the line export JAVA_HOME=${JAVA_HOME} to:

export JAVA_HOME=/usr/java/jdk1.8.0_112
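This edit can also be scripted with sed instead of done by hand; a sketch using a stand-in file so it is runnable as-is (for real use, point F at $HADOOP_HOME/etc/hadoop/hadoop-env.sh):

```shell
# Stand-in for the real hadoop-env.sh shipped with the tarball.
F="$(mktemp -d)/hadoop-env.sh"
printf 'export JAVA_HOME=${JAVA_HOME}\n' > "$F"
# Replace the JAVA_HOME line with the concrete JDK path.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.8.0_112|' "$F"
grep '^export JAVA_HOME=' "$F"   # prints: export JAVA_HOME=/usr/java/jdk1.8.0_112
```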
3) Edit /etc/hosts (already covered by the DNS binding step above)

4) Edit core-site.xml

The bundled *-default.xml reference files can be copied as starting points for the site files:

cd $HADOOP_HOME
cp ./share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml ./etc/hadoop/core-site.xml
cp ./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml ./etc/hadoop/hdfs-site.xml
cp ./share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml ./etc/hadoop/yarn-site.xml
cp ./share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml ./etc/hadoop/mapred-site.xml


vim $HADOOP_HOME/etc/hadoop/core-site.xml
<property>
  <!-- fs.default.name is the deprecated alias for this key -->
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.80.100:19000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/ztgame/hadoop/tmp</value>
</property>
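Rather than hand-editing, the site file can be generated so the NameNode host and port are defined in one place. A sketch (illustrative variable names; it writes to the current directory rather than $HADOOP_HOME, and uses fs.defaultFS, the current name for fs.default.name):

```shell
# Generate core-site.xml from shell variables; host/port live in one place.
NN_HOST=192.168.80.100
NN_PORT=19000
cat > core-site.xml <<EOF
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://${NN_HOST}:${NN_PORT}</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/ztgame/hadoop/tmp</value>
  </property>
</configuration>
EOF
grep "hdfs://${NN_HOST}:${NN_PORT}" core-site.xml   # prints the <value> line
```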
5) Edit hdfs-site.xml

<property>
  <name>dfs.namenode.rpc-address</name>
  <value>192.168.80.100:19001</value>
</property>

<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:10070</value>
</property>
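A typo in any of these XML files only surfaces when a daemon fails to start, so a quick well-formedness check before restarting is cheap. A sketch using Python's stdlib parser (python3 assumed available; the file is written here as a stand-in so the snippet is self-contained — in real use, parse the files under $HADOOP_HOME/etc/hadoop):

```shell
# Write a stand-in hdfs-site.xml, then verify it parses as XML.
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>192.168.80.100:19001</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:10070</value>
  </property>
</configuration>
EOF
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("hdfs-site.xml"); print("well-formed")'
```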
6) Edit mapred-site.xml

cp mapred-site.xml.template mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

7) Edit yarn-site.xml

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:18088</value>
  </property>

5. Startup

1) Format the NameNode

cd $HADOOP_HOME/bin
./hdfs namenode -format
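Re-running -format generates a new clusterID and strands any existing DataNodes, so it is worth guarding against accidental repeats. A sketch of the idea (NN_DIR is a scratch stand-in for the NameNode storage dir under hadoop.tmp.dir, e.g. /home/ztgame/hadoop/tmp/dfs/name; the format command is only echoed here, not run):

```shell
# Only format when the NameNode storage dir is empty; a second format
# would change the clusterID and orphan the DataNodes.
NN_DIR="$(mktemp -d)/dfs/name"      # stand-in for ${hadoop.tmp.dir}/dfs/name
mkdir -p "$NN_DIR"
if [ -z "$(ls -A "$NN_DIR")" ]; then
  echo "would run: hdfs namenode -format"
else
  echo "refusing to format: $NN_DIR is not empty"
fi
```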

2) Start HDFS

/home/ztgame/soft/hadoop/sbin/start-dfs.sh

Check with jps that the daemons came up:

16704 DataNode
16545 NameNode
16925 SecondaryNameNode
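The eyeball check can be scripted by grepping jps output for the expected daemons; a sketch (the jps output is stubbed here so the snippet runs without a cluster — substitute jps_out=$(jps) on a real node):

```shell
# Verify every expected HDFS daemon appears in jps output.
jps_out="16704 DataNode
16545 NameNode
16925 SecondaryNameNode"            # real use: jps_out=$(jps)
status=ok
for d in NameNode DataNode SecondaryNameNode; do
  echo "$jps_out" | grep -q "$d" || { echo "$d MISSING"; status=bad; }
done
echo "hdfs daemons: $status"        # prints: hdfs daemons: ok
```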

Note that dfs.namenode.rpc-address (19001) overrides the port in the default filesystem URI (19000), so clients reach the NameNode on 19001:

hdfs dfs -ls hdfs://192.168.80.100:19001/


3) Start YARN

$HADOOP_HOME/sbin/start-yarn.sh

[ztgame@IM-SJ01-Server18 sbin]$ jps
17427 NodeManager
19668 ResourceManager

List and inspect the registered nodes (yarn node -status requires a node ID):

yarn node -list
yarn node -status <nodeId>


4) Web UIs

NameNode:        http://192.168.80.100:10070
ResourceManager: http://192.168.80.100:18088


6. Upload test

hadoop fs -mkdir -p hdfs://192.168.80.100:19001/test/
hadoop fs -copyFromLocal ./test.txt hdfs://192.168.80.100:19001/test/
hadoop fs -ls hdfs://192.168.80.100:19001/

An equivalent -put (the URI must match your NameNode address, not localhost:9000):

hadoop fs -put /opt/program/userall20140828 hdfs://192.168.80.100:19001/tmp/tvbox/
