Spark on Mesos (CentOS cluster and local Mac)

Setting up Spark on Mesos

CentOS 7

1 mesos-master: 192.168.1.5
2 mesos-slaves: 192.168.1.6, 192.168.1.7

1. Install ZooKeeper

docker run -itd -p 2181:2181 --name zk zookeeper

2. Install Mesos

Add the Aliyun yum repo (all 3 machines)

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Add the Mesosphere repo (all 3 machines)

rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm

Install Mesos (all 3 machines)

yum -y install mesos

Configure the mesos-master (192.168.1.5)

echo zk://192.168.1.5:2181/mesos > /etc/mesos/zk
echo 1 > /etc/mesos-master/quorum
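
A quorum of 1 fits this single-master setup. If you later add masters for high availability, the quorum must be a strict majority of the master count. A hedged sketch for three masters (the two extra IPs are hypothetical):

```shell
# Hypothetical HA variant with 3 masters: quorum must be a strict
# majority of the master count, so 2 of 3 here.
echo zk://192.168.1.5:2181,192.168.1.8:2181,192.168.1.9:2181/mesos > /etc/mesos/zk
echo 2 > /etc/mesos-master/quorum
```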

On the master node (192.168.1.5), start the master and disable the slave

systemctl stop mesos-slave.service
systemctl disable mesos-slave.service
systemctl start mesos-master.service

On the slave nodes (192.168.1.6, 192.168.1.7), start the slave and disable the master

echo zk://192.168.1.5:2181/mesos > /etc/mesos/zk
systemctl stop mesos-master.service
systemctl disable mesos-master.service
systemctl start mesos-slave.service

Visit 192.168.1.5:5050 to check that everything started successfully.
Tip: other settings can be changed the same way, by creating a file under /etc/mesos (or /etc/mesos-master, /etc/mesos-slave), like the zk setting above.
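
With the Mesosphere packages, each file under those directories is named after a mesos flag and its contents become the flag's value. Two hedged examples of the same convention (the values are illustrative, not from this setup):

```shell
# File name = flag name, file contents = flag value.
# Advertise an explicit hostname for the master (--hostname flag):
echo 192.168.1.5 > /etc/mesos-master/hostname
# Pin a slave's work directory (--work_dir flag, run on each slave):
echo /var/lib/mesos > /etc/mesos-slave/work_dir
```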

3. Connect Spark to Mesos

Configure Spark (my Spark install directory is /usr/local/spark/spark-2.4.0-bin-hadoop2.7)
Tip: the spark-2.4.0-bin-hadoop2.7.tgz archive is best placed in HDFS, so every slave can fetch it

echo "export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=/usr/local/spark/spark-2.4.0-bin-hadoop2.7.tgz" >> /usr/local/spark/spark-2.4.0-bin-hadoop2.7/conf/spark-env.sh
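
Equivalently, these can be set as Spark properties instead of environment variables. A hedged sketch using conf/spark-defaults.conf, assuming a hypothetical HDFS namenode at 192.168.1.5:9000 (per the HDFS tip above); note that coarse-grained mode is already the default in Spark 2.x, the line just makes it explicit:

```shell
# Append the equivalent properties to spark-defaults.conf.
# The hdfs:// path is an assumption; adjust to your namenode.
cat >> /usr/local/spark/spark-2.4.0-bin-hadoop2.7/conf/spark-defaults.conf <<'EOF'
spark.executor.uri   hdfs://192.168.1.5:9000/spark/spark-2.4.0-bin-hadoop2.7.tgz
spark.mesos.coarse   true
EOF
```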

Start spark-shell

./bin/spark-shell --master mesos://192.168.1.5:5050

Test:

val data = sc.parallelize(1 to 100).count
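
The same cluster can also run batch jobs. A hedged sketch submitting the SparkPi example bundled with the prebuilt 2.4.0 distribution (jar path per that layout):

```shell
# Submit the bundled SparkPi example to the Mesos master.
cd /usr/local/spark/spark-2.4.0-bin-hadoop2.7
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://192.168.1.5:5050 \
  examples/jars/spark-examples_2.11-2.4.0.jar 100
```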

Mac

Run one master process and two slave processes on a single Mac.

1. Install ZooKeeper and Mesos
brew install zookeeper
brew install mesos

Start ZooKeeper

zkServer start

Start Mesos

/usr/local/sbin/mesos-master --registry=in_memory --ip=0.0.0.0 --zk=zk://192.168.199.163:2181/jpoint-mesos


/usr/local/sbin/mesos-slave --master=zk://127.0.0.1:2181/jpoint-mesos --port=5052 --work_dir=/tmp/mesos2


/usr/local/sbin/mesos-slave --master=zk://127.0.0.1:2181/jpoint-mesos --port=5053 --work_dir=/tmp/mesos3
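
Before wiring up Spark, you can confirm both slaves registered. On recent Mesos versions the master serves cluster state as JSON over HTTP; a hedged check:

```shell
# Dump the master's cluster state; both slaves should appear in the
# "slaves" array of the JSON response.
curl -s http://127.0.0.1:5050/master/state | python -m json.tool
```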
2. Connect Spark to Mesos

Edit ./conf/spark-env.sh

echo "export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.dylib
export SPARK_EXECUTOR_URI=/usr/local/spark/spark-2.4.0-bin-hadoop2.7.tgz" >> /usr/local/spark/spark-2.4.0-bin-hadoop2.7/conf/spark-env.sh

Start spark-shell

./bin/spark-shell --master mesos://192.168.199.163:5050

Test:

val data = sc.parallelize(1 to 10).count

References

CentOS 7: https://www.cnblogs.com/vergilchiu/p/5706473.html
Mac: https://vanwilgenburg.wordpress.com/2015/05/10/how-to-run-a-spark-cluster-on-mesos-on-your-mac/
Spark on Mesos docs: https://spark.apache.org/docs/latest/running-on-mesos.html
