Pseudo-distributed Hadoop, Scala, and Spark

hadoop:

Create the hadoop user:

[root@master ~]# useradd hadoop

[root@master ~]# echo "1" |passwd --stdin hadoop
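The new account can be verified with a quick sanity check, for example:

[root@master ~]# id hadoop
[root@master ~]# su - hadoop -c "whoami"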

Install Java


Copy the JDK installation package into /opt/software.

Extract the package:

[root@master ~]# tar -zxvf /opt/software/jdk-8u152-linux-x64.tar.gz -C /usr/local/src/

[root@master ~]# ls /usr/local/src/

jdk1.8.0_152

Configure the JDK environment variables:

[root@master ~]# vi /etc/profile

Add the following two lines at the end of the file:

export JAVA_HOME=/usr/local/src/jdk1.8.0_152

export PATH=$PATH:$JAVA_HOME/bin

Run source /etc/profile to apply the changes.

Verify that Java is available.

[root@master ~]# echo $JAVA_HOME
/usr/local/src/jdk1.8.0_152

[root@master ~]# java -version

java version "1.8.0_152"

Hadoop platform environment configuration

[root@master ~]# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.47.140 master

192.168.47.141 slave1

192.168.47.142 slave2

              

[root@slave1 ~]# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.47.140 master

192.168.47.141 slave1

192.168.47.142 slave2

[root@slave2 ~]# vi /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.47.140 master

192.168.47.141 slave1

192.168.47.142 slave2
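With /etc/hosts updated on all three machines, hostname resolution can be spot-checked, for example:

[root@master ~]# ping -c 1 slave1
[root@master ~]# ping -c 1 slave2
[root@slave1 ~]# ping -c 1 master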

Configure passwordless SSH:

[root@master ~]# rpm -qa | grep openssh

openssh-server-7.4p1-11.el7.x86_64

openssh-7.4p1-11.el7.x86_64

openssh-clients-7.4p1-11.el7.x86_64

[root@master ~]# rpm -qa | grep rsync

rsync-3.1.2-11.el7_9.x86_64

Check that the packages above are installed; if any are missing, install them first.

                    

Switch to the hadoop user on master, slave1, and slave2.

Run the following command on master, slave1, and slave2:

ssh-keygen -t rsa

Check that the ".ssh" directory now exists under ~/:

[hadoop@master ~]$ ls  ~/.ssh/

id_rsa id_rsa.pub

On master, slave1, and slave2, append id_rsa.pub to the authorized keys file:

[hadoop@master ~]$ cat  ~/.ssh/id_rsa.pub >>  ~/.ssh/authorized_keys

[hadoop@master ~]$ ls  ~/.ssh/

authorized_keys  id_rsa  id_rsa.pub

On master, slave1, and slave2, fix the file permissions:

chmod 600 ~/.ssh/authorized_keys

On master, slave1, and slave2, as the root user:

[root@master ~]# vi /etc/ssh/sshd_config

PubkeyAuthentication yes       (find this line and remove the leading # so it is no longer commented out)

Restart the SSH service, then switch to the hadoop user and verify that you can SSH into the local machine. If you can log in without entering a password, key-based authentication on the local machine is working.
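On CentOS 7 this looks roughly as follows (assuming systemd manages sshd):

[root@master ~]# systemctl restart sshd
[root@master ~]# su - hadoop
[hadoop@master ~]$ ssh localhost

The first connection asks to confirm the host key; after that, no password should be requested. Repeat the same check on slave1 and slave2.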

Exchange SSH keys

[hadoop@master ~]$ scp ~/.ssh/id_rsa.pub hadoop@slave1:~/

[hadoop@master ~]$ scp ~/.ssh/id_rsa.pub hadoop@slave2:~/

[hadoop@slave1 ~]$ cat ~/id_rsa.pub >>~/.ssh/authorized_keys

[hadoop@slave1 ~]$ rm -rf ~/id_rsa.pub

[hadoop@slave2 ~]$ cat ~/id_rsa.pub >>~/.ssh/authorized_keys

[hadoop@slave2 ~]$ rm -rf ~/id_rsa.pub

Save each slave node's public key on the master

[hadoop@slave1 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/

[hadoop@master ~]$ cat ~/id_rsa.pub >>~/.ssh/authorized_keys

[hadoop@master ~]$ rm -rf ~/id_rsa.pub

After completing the steps above for slave1, do the same for slave2:

[hadoop@slave2 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/

[hadoop@master ~]$ cat ~/id_rsa.pub >>~/.ssh/authorized_keys

[hadoop@master ~]$ rm -rf ~/id_rsa.pub

Verify passwordless SSH login

[hadoop@master ~]$ cat ~/.ssh/authorized_keys

Check that the output contains the keys from master, slave1, and slave2.

[hadoop@slave1 ~]$ cat ~/.ssh/authorized_keys

Check that the output contains the keys from master and slave1.

[hadoop@slave2 ~]$ cat ~/.ssh/authorized_keys

Check that the output contains the keys from master and slave2.
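A more direct check is to log in across the nodes and run a trivial command, for example:

[hadoop@master ~]$ ssh slave1 hostname
[hadoop@master ~]$ ssh slave2 hostname
[hadoop@slave1 ~]$ ssh master hostname

Each command should print the remote hostname without prompting for a password.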

Configure four files:

Configure hdfs-site.xml

[root@master hadoop]# vim hdfs-site.xml

# Edit the following content

<configuration>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/usr/local/src/hadoop/dfs/name</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/usr/local/src/hadoop/dfs/data</value>

</property>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

</configuration>
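The daemons need to be able to write to the name and data directories configured above; a preparation sketch, with the paths taken from the values above and ownership assumed to be the hadoop user that will run the daemons:

[root@master hadoop]# mkdir -p /usr/local/src/hadoop/dfs/name /usr/local/src/hadoop/dfs/data
[root@master hadoop]# chown -R hadoop:hadoop /usr/local/src/hadoop/dfs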

Configure core-site.xml

[root@master hadoop]# vim core-site.xml

# Edit the following content

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://192.168.130.128:9000</value>  (note: use your own master node's IP address here; in this guide the master is 192.168.47.140, so adjust accordingly)

</property>

<property>

<name>io.file.buffer.size</name>

<value>131072</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>file:/usr/local/src/hadoop/tmp</value>

</property>

</configuration>
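Likewise, hadoop.tmp.dir should point at a directory the hadoop user can write to; one way to prepare it, along with the rest of the install tree, is roughly:

[root@master hadoop]# mkdir -p /usr/local/src/hadoop/tmp
[root@master hadoop]# chown -R hadoop:hadoop /usr/local/src/hadoop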

Configure mapred-site.xml

[root@master hadoop]# pwd

/usr/local/src/hadoop/etc/hadoop

[root@master hadoop]# cp mapred-site.xml.template mapred-site.xml

[root@master hadoop]# vim mapred-site.xml

# Add the following configuration

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

<property>

<name>mapreduce.jobhistory.address</name>

<value>master:10020</value>

</property>

<property>

<name>mapreduce.jobhistory.webapp.address</name>

<value>master:19888</value>

</property>

</configuration>
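The two jobhistory settings above only take effect once the JobHistory server is running; if you want the history web UI at master:19888, it can be started later (after HDFS and YARN are up) with the bundled script, for example:

[hadoop@master hadoop]$ /usr/local/src/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver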

Configure yarn-site.xml

[root@master hadoop]# vim yarn-site.xml

# Add the following configuration

<configuration>

<!-- Site specific YARN configuration properties -->

<property>

<name>yarn.resourcemanager.address</name>

<value>master:8032</value>

</property>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>master:8030</value>

</property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>master:8031</value>

</property>

<property>

<name>yarn.resourcemanager.admin.address</name>

<value>master:8033</value>

</property>

<property>

<name>yarn.resourcemanager.webapp.address</name>

<value>master:8088</value>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<property>

<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

<value>org.apache.hadoop.mapred.ShuffleHandler</value>

</property>

</configuration>
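Note that the wordcount job at the end of this section runs on YARN (mapreduce.framework.name=yarn), while the steps below only start HDFS daemons. If ResourceManager and NodeManager are not started elsewhere, they can be brought up roughly like this before submitting the job:

[hadoop@master hadoop]$ /usr/local/src/hadoop/sbin/start-yarn.sh
[hadoop@master hadoop]$ jps      (ResourceManager and NodeManager should now appear)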

Format the NameNode

[root@master ~]# su - hadoop

[hadoop@master ~]$ cd /usr/local/src/hadoop/

[hadoop@master hadoop]$ bin/hdfs namenode -format

Start the NameNode

Run the following command to start the NameNode:

[hadoop@master hadoop]$ hadoop-daemon.sh start namenode

Start the SecondaryNameNode

[hadoop@master hadoop]$ hadoop-daemon.sh start secondarynamenode

[hadoop@master hadoop]$ jps
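The jps output should list at least NameNode and SecondaryNameNode (plus Jps itself). The later HDFS write steps also need at least one DataNode; if none is running yet, it can be started the same way, for example:

[hadoop@master hadoop]$ hadoop-daemon.sh start datanode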

Check where HDFS stores its data

[hadoop@master hadoop]$ ll dfs/

[hadoop@master hadoop]$ ll ./tmp/dfs

View the cluster report

[hadoop@master sbin]$ hdfs dfsadmin -report

Web interfaces

http://master:50070

http://master:50090
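If no graphical browser is available on the VM, the same pages can be spot-checked from the shell, e.g.:

[hadoop@master hadoop]$ curl -s http://master:50070 | head
[hadoop@master hadoop]$ curl -s http://master:50090 | head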

Copy the input data file into the /input directory on HDFS:

[hadoop@master hadoop]$ hdfs dfs -put ~/input/data.txt /input   

[hadoop@master hadoop]$ hdfs dfs -ls /input

[hadoop@master hadoop]$ hdfs dfs -mkdir /output
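If ~/input/data.txt or the HDFS /input directory did not exist before the put above, they can be created first; the sample text here is only a placeholder:

[hadoop@master hadoop]$ mkdir -p ~/input
[hadoop@master hadoop]$ echo "hello hadoop hello spark" > ~/input/data.txt
[hadoop@master hadoop]$ hdfs dfs -mkdir -p /input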

List the files in HDFS:

[hadoop@master hadoop]$ hdfs dfs -ls /

In the listing above, the /input directory holds the input data and /output holds the output. MapReduce requires that the output directory not exist when a job starts, so delete /output before running the example:

[hadoop@master hadoop]$ hdfs dfs -rm -r -f /output

[hadoop@master hadoop]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input/data.txt /output
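When the job completes, the word counts can be read back from HDFS; the reducer output file is typically named part-r-00000:

[hadoop@master hadoop]$ hdfs dfs -ls /output
[hadoop@master hadoop]$ hdfs dfs -cat /output/part-r-00000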

Enter http://master:8088 in your browser's address bar to view the YARN web UI.

scala:

[root@master sbin]# ll /opt/software/
total 984
-rw-r--r--.  1 root   root   1004838 Apr 18 16:46 mysql-connector-java-5.1.46.jar
drwxrwxr-x.  6 hadoop hadoop      50 Mar  4  2016 scala
drwxr-xr-x. 15    501 l          235 Apr 25 15:48 spark

vi /etc/profile

Append the following two lines at the end of the file:

export SCALA_HOME=/opt/software/scala

export PATH=$PATH:${SCALA_HOME}/bin

source /etc/profile

scala -version
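A quick non-interactive smoke test of the interpreter (optional):

[root@master sbin]# scala -e 'println(1 + 1)'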

spark:

[root@master sbin]# ll /opt/software/
total 984
-rw-r--r--.  1 root   root   1004838 Apr 18 16:46 mysql-connector-java-5.1.46.jar
drwxrwxr-x.  6 hadoop hadoop      50 Mar  4  2016 scala
drwxr-xr-x. 15    501 l          235 Apr 25 15:48 spark

vi /etc/profile

Append the following three lines at the end of the file:

export SPARK_HOME=/opt/software/spark
export PATH=$PATH:${SPARK_HOME}/bin
export PATH=$PATH:${SPARK_HOME}/sbin

Then add the following lines to Spark's environment configuration (typically conf/spark-env.sh; see the note after this block):

export SCALA_HOME=/opt/software/scala
export JAVA_HOME=/usr/local/src/jdk1.8.0_152
export SPARK_MASTER_IP=master
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/usr/local/src/hadoop/etc/hadoop
#export SPARK_MASTER_WEBUI_PORT=8080
#export SPARK_MASTER_PORT=7070
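These are Spark daemon environment settings; if spark-env.sh does not exist yet, it is usually created from the bundled template first, roughly as follows:

[root@master ~]# cd /opt/software/spark/conf
[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# vi spark-env.sh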

[root@master sbin]# ./start-all.sh      (run from Spark's sbin directory, /opt/software/spark/sbin)
[root@master sbin]# spark-shell
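To confirm that Spark can actually run a job, the bundled examples can be used as a smoke test (this runs in local mode by default; the standalone Master web UI should also be reachable at http://master:8080):

[root@master sbin]# run-example SparkPi 10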

Check the running processes with jps:
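As a rough check (the exact list depends on which daemons were started):

[root@master sbin]# jps

The output should include the HDFS daemons started earlier (NameNode, SecondaryNameNode) plus Spark's Master and Worker processes.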
