Ubuntu: Installing Spark

Create a hadoop user:

sudo useradd -m hadoop -s /bin/bash
sudo passwd hadoop
sudo adduser hadoop sudo
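Before switching accounts it can help to confirm the user was actually created; a minimal check:

id hadoop      # should list the hadoop user and its groups, including sudo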

Log in as the hadoop user, then install the basic tools and set up passwordless SSH:

sudo apt-get update
sudo apt-get install vim
sudo apt-get install openssh-server
ssh localhost                            # first login: accept the host key and enter the password
exit                                     # return to the original shell
cd ~/.ssh/                               # this directory is created by the first ssh login
ssh-keygen -t rsa                        # press Enter at every prompt
cat ./id_rsa.pub >> ./authorized_keys    # authorize the key for passwordless login
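To verify the key took effect, ssh localhost should now log in without a password. If it still prompts, tightening the permissions sometimes helps, since sshd ignores key files it considers too open:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost      # should open a shell with no password prompt
exit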

Install the Java environment

sudo apt-get install openjdk-8-jre openjdk-8-jdk
vim ~/.bashrc

Append the following line at the end of the file:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
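The path above assumes the amd64 OpenJDK 8 package layout; if your JDK lives elsewhere, one way to resolve the actual directory is to follow the javac symlink:

dirname $(dirname $(readlink -f $(which javac)))    # prints the JDK home, e.g. /usr/lib/jvm/java-8-openjdk-amd64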

Make the change take effect:

source ~/.bashrc
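A quick sanity check that the variable is set and Java is usable:

echo $JAVA_HOME    # should print /usr/lib/jvm/java-8-openjdk-amd64
java -version      # should report an OpenJDK 1.8 runtime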

Install Hadoop 2

From the stable directory, choose the binary tarball (the one without src in its name). Download mirrors:
http://mirror.bit.edu.cn/apache/hadoop/common/ or
http://mirrors.cnnic.cn/apache/hadoop/common/

sudo tar -zxvf ~/Downloads/hadoop-2.9.0.tar.gz -C /usr/local
cd /usr/local/
sudo mv ./hadoop-2.9.0/ ./hadoop    # strip the version from the directory name
sudo chown -R hadoop ./hadoop       # give the hadoop user ownership
cd /usr/local/hadoop
./bin/hadoop version

At this point the installation is complete; the version command above should print the Hadoop release information.
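Optionally, adding Hadoop's bin and sbin directories to PATH in ~/.bashrc saves typing the ./bin and ./sbin prefixes. This is a convenience sketch only; the steps below keep the explicit paths:

export HADOOP_HOME=/usr/local/hadoop                  # install location from the steps above
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run source ~/.bashrc afterwards, as with JAVA_HOME.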


Hadoop standalone configuration

By default Hadoop runs in non-distributed mode as a single Java process, so the grep example below works without any configuration changes:

cd /usr/local/hadoop
mkdir ./input
cp ./etc/hadoop/*.xml ./input    # use the config files as sample input
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
cat ./output/*                   # lines matching the regex dfs[a-z.]+ and their counts
rm -r ./output                   # Hadoop refuses to overwrite an existing output directory
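Running the examples jar without arguments prints the list of bundled example programs (grep and wordcount among them), which is a handy way to explore beyond this one demo:

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar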

Hadoop pseudo-distributed configuration

In pseudo-distributed mode the Hadoop daemons run as separate Java processes on a single machine, which acts as both NameNode and DataNode. Edit /usr/local/hadoop/etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hadoop.tmp.dir above is set explicitly because the default lives under /tmp and is wiped on reboot. Next, edit /usr/local/hadoop/etc/hadoop/hdfs-site.xml; dfs.replication is 1 because there is only a single DataNode:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>

Initialize (format) the NameNode, then start HDFS:

./bin/hdfs namenode -format    # first run only; the output should report "successfully formatted"
./sbin/start-dfs.sh
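To confirm the daemons came up, jps (shipped with the JDK) lists the running Java processes; in this pseudo-distributed setup the expected entries are NameNode, DataNode, and SecondaryNameNode:

jps    # expect NameNode, DataNode, SecondaryNameNode (plus Jps itself)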

Once started successfully, the HDFS web UI can be reached at http://localhost:50070.

Run a pseudo-distributed example:

./bin/hdfs dfs -mkdir -p /user/hadoop    # HDFS home directory for the hadoop user
./bin/hdfs dfs -mkdir input              # relative paths resolve under /user/hadoop
./bin/hdfs dfs -put ./etc/hadoop/*.xml input
./bin/hdfs dfs -ls input
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
./bin/hdfs dfs -cat output/*
rm -r ./output                           # clear the local output left over from the standalone run
./bin/hdfs dfs -get output ./output      # copy the HDFS results to the local filesystem
cat ./output/*
./bin/hdfs dfs -rm -r output             # remove the HDFS output so the job can be rerun
./sbin/stop-dfs.sh
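On later runs, only start-dfs.sh is needed. Do not format the NameNode again: a re-format assigns a new clusterID that no longer matches the existing DataNode storage, and the DataNode will then fail to start. A normal restart looks like this:

cd /usr/local/hadoop
./sbin/start-dfs.sh    # no repeat of namenode -format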