Spark 2.x Cluster Installation

Installing Spark (without Cloudera Manager)

Cloudera Manager ships Spark 1.6, so Spark 2.1.1 is installed separately here.
1. Scala environment

scp scala-2.11.11.tgz hd-26:/usr/local/    # repeat for each node in the cluster

ssh hd-26 "cd /usr/local/; tar xf scala-2.11.11.tgz; \
rm -rf scala-2.11.11.tgz; ln -s scala-2.11.11 scala; \
echo 'export SCALA_HOME=/usr/local/scala' >> /etc/profile; source /etc/profile;"
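A quick way to confirm the install succeeded (a hedged sketch: it only prints the check command; run the printed line against the node yourself):

```shell
# Build and print the remote version check; /usr/local/scala is the
# symlink created above, so PATH does not need to be updated yet.
check='ssh hd-26 "/usr/local/scala/bin/scala -version"'
echo "$check"
```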
2. Spark
cp spark-2.1.1-bin-hadoop2.6.tgz /opt/soft
cd /opt/soft
tar xf spark-2.1.1-bin-hadoop2.6.tgz
cd ..
ln -s soft/spark-2.1.1-bin-hadoop2.6/ spark

cd spark/conf
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves
sed -i '/^localhost$/d' slaves    # drop the template's default "localhost" entry
echo -e "hd-26\nhd-27\nhd-28\nhd-30" >> slaves
echo "SPARK_EXECUTOR_CORES=2" >> spark-env.sh
echo "SPARK_EXECUTOR_MEMORY=2G" >> spark-env.sh
echo "SPARK_DRIVER_MEMORY=2G" >> spark-env.sh
echo "SPARK_MASTER_HOST=hd-29" >> spark-env.sh
echo "SPARK_MASTER_PORT=7077" >> spark-env.sh
echo "SPARK_WORKER_CORES=4" >> spark-env.sh
echo "SPARK_WORKER_MEMORY=2G" >> spark-env.sh
echo "SPARK_WORKER_PORT=7078" >> spark-env.sh
echo "JAVA_HOME=/usr/local/jdk1.8.0_77" >> spark-env.sh
echo "SPARK_HOME=/opt/spark" >> spark-env.sh
echo "HADOOP_CONF_DIR=/etc/hadoop/conf" >> spark-env.sh
echo "SCALA_HOME=/usr/local/scala" >> spark-env.sh
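The long run of echo lines above is easy to mis-quote. The same settings can be appended in one heredoc; a self-contained sketch, written to a scratch copy rather than the real conf directory:

```shell
# Append all spark-env.sh settings in one heredoc instead of twelve echo
# lines; a temp dir stands in for spark/conf so the sketch is runnable.
tmp=$(mktemp -d)
cat >> "$tmp/spark-env.sh" <<'EOF'
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=2G
SPARK_DRIVER_MEMORY=2G
SPARK_MASTER_HOST=hd-29
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=2G
SPARK_WORKER_PORT=7078
JAVA_HOME=/usr/local/jdk1.8.0_77
SPARK_HOME=/opt/spark
HADOOP_CONF_DIR=/etc/hadoop/conf
SCALA_HOME=/usr/local/scala
EOF
wc -l < "$tmp/spark-env.sh"
```

The quoted 'EOF' delimiter prevents any local variable expansion, so the file receives the settings exactly as written.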

cd /opt/soft
scp -r spark-2.1.1-bin-hadoop2.6 hd-26:/opt/soft    # repeat for hd-27, hd-28 and hd-30

ssh hd-26 "cd /opt; ln -s soft/spark-2.1.1-bin-hadoop2.6/ spark"    # likewise on each worker

ssh hd-26 "echo 'SPARK_LOCAL_IP=hd-26' >> /opt/spark/conf/spark-env.sh"
ssh hd-27 "echo 'SPARK_LOCAL_IP=hd-27' >> /opt/spark/conf/spark-env.sh"
ssh hd-28 "echo 'SPARK_LOCAL_IP=hd-28' >> /opt/spark/conf/spark-env.sh"
ssh hd-30 "echo 'SPARK_LOCAL_IP=hd-30' >> /opt/spark/conf/spark-env.sh"
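The copy, symlink, and SPARK_LOCAL_IP steps repeat per worker, so they can be collapsed into one loop over the same host list as the slaves file. A dry-run sketch: with RUN=echo it only prints each command; set RUN= (empty) to execute for real:

```shell
# Dry-run distribution loop; with RUN left as echo, nothing touches the
# cluster, each intended command is just printed for review.
RUN=echo
for h in hd-26 hd-27 hd-28 hd-30; do
  $RUN scp -r /opt/soft/spark-2.1.1-bin-hadoop2.6 "$h:/opt/soft"
  $RUN ssh "$h" "cd /opt; ln -sfn soft/spark-2.1.1-bin-hadoop2.6/ spark"
  $RUN ssh "$h" "echo SPARK_LOCAL_IP=$h >> /opt/spark/conf/spark-env.sh"
done
```

`ln -sfn` (rather than plain `ln -s`) makes the symlink step idempotent if the loop is rerun.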

cd ../spark/sbin
./start-master.sh
./start-slaves.sh
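Once the master and workers are up, a quick smoke test is to submit the bundled SparkPi example to the standalone master. A hedged sketch: the jar path follows the standard Spark 2.1.1 distribution layout assumed above, and the command is only printed here; run the printed line on a node that can reach spark://hd-29:7077:

```shell
# Build and print the spark-submit command for the bundled SparkPi example.
submit="/opt/spark/bin/spark-submit --master spark://hd-29:7077 \
 --class org.apache.spark.examples.SparkPi \
 /opt/spark/examples/jars/spark-examples_2.11-2.1.1.jar 100"
echo "$submit"
```

A successful run prints a line like "Pi is roughly 3.14..." in the driver output, and the finished application appears on the master web UI (port 8080).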
3. Related reading
Apache Spark 2.x for Java Developers, Sourav Gulati (Packt, 2017, ISBN 1787126498). Covers the Spark 2.x Java API: RDD transformations and actions, clustered setup, Spark SQL, Spark Streaming with Kafka and Flume, MLlib, and GraphX, all without requiring Scala.