- Configure Hadoop's yarn-site.xml so that YARN does not kill the job's containers mid-run
The configuration must be identical on all three virtual machines.
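The properties themselves are not shown above; a common way to achieve this (an assumption here, not confirmed by the original) is to disable the NodeManager's memory checks in yarn-site.xml, since Spark containers are frequently killed for exceeding the virtual-memory estimate:

<!-- assumed settings: keep the NodeManager from killing containers
     that exceed its physical/virtual memory estimates -->
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>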
2. Configuring Spark on YARN
In spark-env.sh, set:
YARN_CONF_DIR=<Hadoop's configuration directory, i.e. hadoop/etc/hadoop>
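For example, assuming Hadoop is installed under /opt/hadoop (the path is an assumption; substitute your own install directory):

# spark-env.sh: point Spark at the Hadoop/YARN client configuration
YARN_CONF_DIR=/opt/hadoop/etc/hadoop
HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop  # Spark accepts either variable; setting both is common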
Add the following to yarn-site.xml:
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>
Configure YARN log aggregation and the JobHistory server
Configure mapred-site.xml:
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>
Configure yarn-site.xml:
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
</property>
<property>
  <name>yarn.log.server.url</name>
  <value>http://master:19888/jobhistory/logs</value>
</property>
Configure spark-defaults.conf:
spark.eventLog.enabled   true
spark.eventLog.dir       hdfs://master:9000/history
spark.eventLog.compress  true
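Spark refuses to start an application when the event-log directory is missing, so create it on HDFS first (the path matches spark.eventLog.dir above):

hdfs dfs -mkdir -p /history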
Then start the MapReduce JobHistory server:
sbin/mr-jobhistory-daemon.sh start historyserver
Try the official SparkPi example
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  ./examples/jars/spark-examples_2.11-2.1.1.jar \
  10

Here --class selects the SparkPi example class, --master yarn lets YARN handle resource scheduling, the path points to the example jar shipped with Spark, and 10 is the argument passed to the job. A trailing \ continues the command onto the next line, so pressing Enter does not execute it immediately.
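With the default client deploy mode, the computed value of pi is printed to the submitting console. Since log aggregation was enabled above, container logs can also be fetched afterwards; the application ID appears in the spark-submit output (the <application_id> placeholder below is to be filled in):

yarn logs -applicationId <application_id>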
Writing the code:
Create a Maven project; start with a WordCount.
Add the dependencies to pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.cx</groupId>
  <artifactId>SparkDemo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>cx-demo</name>
  <packaging>jar</packaging>
  <url>http://maven.apache.org</url>

  <properties>
    <encoding>UTF-8</encoding>
    <scope.type>compile</scope.type>
    <scala.binary.version>2.11</scala.binary.version>
    <scala.version>2.11.11</scala.version>
    <spark.version>2.2.1</spark.version>
    <spark.bagel.version>1.6.3</spark.bagel.version>
    <spark.avro.version>4.0.0</spark.avro.version>
    <hadoop.client.version>2.7.3</hadoop.client.version>
    <spring.version>5.0.5.RELEASE</spring.version>
    <logback.verson>1.2.3</logback.verson>
    <!-- java.version is referenced by maven-compiler-plugin below but was
         not defined in the original pom; 1.8 is assumed -->
    <java.version>1.8</java.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${scope.type}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-bagel_${scala.binary.version}</artifactId>
      <version>${spark.bagel.version}</version>
      <scope>${scope.type}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${scope.type}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-repl_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${scope.type}</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_${scala.binary.version}</artifactId>
      <version>${spark.version}</version>
      <scope>${scope.type}</scope>
    </dependency>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-compiler</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.scala-tools</groupId>
      <artifactId>maven-scala-plugin</artifactId>
      <version>2.11</version>
    </dependency>
    <dependency>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-eclipse-plugin</artifactId>
      <version>2.5.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <compilerArgument>-parameters</compilerArgument>
          <source>${java.version}</source>
          <target>${java.version}</target>
          <encoding>UTF-8</encoding>
          <showDeprecation>true</showDeprecation>
          <showWarnings>true</showWarnings>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <sourceDir>src/main/scala</sourceDir>
          <jvmArgs>
            <jvmArg>-Xms64m</jvmArg>
            <jvmArg>-Xmx1024m</jvmArg>
          </jvmArgs>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>3.1.0</version>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
Right-click the project → Maven → Reimport.
A simple WordCount:
package com.wc

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // 1. Create a SparkConf and set the application name
    val conf = new SparkConf().setAppName("WC")
    // 2. Create the SparkContext, the entry point for submitting a Spark app
    val sc = new SparkContext(conf)
    // 3. Read args(0), split lines into words, count them, write to args(1)
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))
    sc.stop()
  }
}
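Before packaging, the job can be smoke-tested inside the IDE by setting the master in code. The variant below is an optional sketch, not part of the original flow: it hard-codes a sample line instead of reading args, so it runs without a cluster or HDFS:

package com.wc

import org.apache.spark.{SparkConf, SparkContext}

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside the JVM using all available cores
    val conf = new SparkConf().setAppName("WC").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val counts = sc.parallelize(Seq("asdf asxi asdf weir asdf asd"))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
    counts.foreach(println) // e.g. (asdf,3), (asxi,1), (weir,1), (asd,1)
    sc.stop()
  }
}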
In the Maven panel on the right, open Lifecycle → package to build the jar.
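The same build can be run from a terminal in the project root, which is equivalent to the IDE's package goal:

mvn clean package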
Output:
Building jar: D:\code\bigdata\WordCount\target\SparkDemo-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-assembly-plugin:3.1.0:single (make-assembly) @ SparkDemo ---
[INFO] Building jar: D:\code\bigdata\WordCount\target\SparkDemo-1.0-SNAPSHOT-jar-with-dependencies.jar
The first is the project's own jar; the second is the fat jar that bundles all dependencies.
Running the WordCount project on YARN
Create an input directory on HDFS:
hdfs dfs -mkdir /input
Create the files to be counted, 1.txt and 2.txt, and upload them to the input directory:
touch 1.txt 2.txt
echo "asdf asxi asdf weir asdf asd" >> 1.txt
echo "asdf asxi asdf weir asdf asd" >> 2.txt
hdfs dfs -put ./*.txt /input
hdfs dfs -ls /input
Submit the fat jar (note: the yarn-cluster master string is deprecated in Spark 2.x; --master yarn --deploy-mode cluster is the current equivalent):

spark-submit \
  --class com.wc.WordCount \
  --master yarn \
  --deploy-mode cluster \
  ./SparkDemo-1.0-SNAPSHOT-jar-with-dependencies.jar /input /output
The results are written to the /output directory on HDFS. (The output directory must not exist before the run, or saveAsTextFile fails.)
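To inspect the result (part-file names follow the usual Hadoop convention):

hdfs dfs -ls /output
hdfs dfs -cat /output/part-*

Given the two identical input lines above, the counts should be (asdf,6), (asxi,2), (weir,2), (asd,2); the line order may vary.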