[1.1] First Spark Application: Word Count in Java & Scala

Reference

Wang Jialin, DT Big Data Dream Factory tutorial series


Scenario

Write a first Spark application, Word Count, once in Scala and once in Java.


Code

1. Scala version

package cool.pengych.spark
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD.rddToPairRDDFunctions
/**
 *  author : pengych
 *  date : 2016/04/29
 *  function : first Spark program, written in Eclipse
 */
object WordCount {
  def main(args: Array[String]): Unit = {
    /*
     *  1. Create the configuration object SparkConf.
     *  It holds the runtime configuration of the Spark application; e.g. setMaster sets the URL of the
     *  Master of the Spark cluster the program connects to. If it is set to "local", the program runs locally.
     */
    val conf = new SparkConf()
    conf.setAppName("my first spark app") // the application name, shown in the monitoring UI while the program runs
    //conf.setMaster("local") // uncomment to run locally, with no Spark cluster installed
    
    /*
     * 2. Create the SparkContext object.
     * SparkContext is the single entry point to all Spark functionality and the most important object in a
     * Spark application. It initializes the core components the application needs to run (DAGScheduler,
     * TaskScheduler, SchedulerBackend) and also registers the program with the Master.
     */
    val sc = new SparkContext(conf) // conf carries the concrete runtime parameters and configuration
    
    /*
     * 3. Create the RDD.
     * RDDs are created through the SparkContext from a concrete data source (HDFS, HBase, local FS, DB, S3, ...).
     * There are essentially three ways to create one: from an external data source (e.g. HDFS), from a
     * Scala collection, or from operations on another RDD (a local-collection sketch follows this listing).
     * The RDD splits the data into a series of partitions; the data assigned to one partition is the
     * processing scope of one task.
     */
    //val lines = sc.textFile("/opt/spark-1.6.0-bin-hadoop2.6/README.md", 1) // for local deployment mode
    val lines = sc.textFile("hdfs://112.74.21.122:9000/input/hdfs")
    
    /*
     * 4. Apply transformation-level operations (higher-order functions such as map and filter) to the
     *    initial RDD to do the actual computation.
     *    Note: Spark is built around RDD operations; almost every operator returns another RDD.
     */
     val words = lines.flatMap { line => line.split(" ") } // split every line into words and flatten the per-line results into a single RDD of words
     val pairs = words.map { word => (word, 1) } // count each word occurrence as 1
     val wordCounts = pairs.reduceByKey(_+_) // sum the values of identical keys (reduced both locally per partition and at the reducer level)
     wordCounts.collect.foreach(wordNumberPair => println(wordNumberPair._1 + ":" + wordNumberPair._2))
     
     /*
      * 5. Release resources.
      */
     sc.stop()
  }
}
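
Step 3 mentions that an RDD can also be created from a Scala collection. As a minimal sketch of that second creation method (the object name WordCountLocal and the sample lines are made up for illustration), the same pipeline can be verified in local mode without a cluster or HDFS:

package cool.pengych.spark
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // local mode: everything runs inside this single JVM, no cluster required
    val conf = new SparkConf().setAppName("word count local").setMaster("local")
    val sc = new SparkContext(conf)
    // parallelize turns a driver-side Scala collection into an RDD, here split into 2 partitions
    val lines = sc.parallelize(Seq("hello spark", "hello scala", "hello spark again"), 2)
    val wordCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    wordCounts.collect.foreach { case (word, count) => println(word + ":" + count) }
    sc.stop()
  }
}

Because reduceByKey first merges the values inside each partition and only then shuffles the partial sums, the function passed to it (here _ + _) must be associative; that is what the "Local and Reducer level" remark in step 4 refers to.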

2. Java version

package cool.pengych.spark.SparkApps;
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;
import scala.Tuple2;
/**
 * Word Count - Java version
 * @author pengyucheng
 */
public class WordCount 
{
	@SuppressWarnings("serial")
	public static void main(String[] args)
	{
		// create the SparkConf and, from it, the JavaSparkContext instance
		SparkConf conf = new SparkConf().setAppName("Spark WordCount of java version").setMaster("local");
		JavaSparkContext sc = new JavaSparkContext(conf);
		JavaRDD<String> lines = sc.textFile("/home/pengyucheng/java/wordcount.txt");
		
		// split the lines into a collection of words
		JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
			public Iterable<String> call(String line) throws Exception {
				return Arrays.asList(line.split(" "));
			}
		});
		
		// count each word occurrence as 1
		JavaPairRDD<String,Integer> pairs  = words.mapToPair(new PairFunction<String, String, Integer>() {
			public Tuple2<String, Integer> call(String word) throws Exception {
				return new Tuple2<String,Integer>(word,1);
			}
		});
		
		// sum up the total count of each word
		JavaPairRDD<String,Integer> wordsCount = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
			public Integer call(Integer v1, Integer v2) throws Exception {
				return v1 + v2;
			}
		});
		
		// print each (word, count) pair
		wordsCount.foreach(new VoidFunction<Tuple2<String,Integer>>() {
			public void call(Tuple2<String, Integer> pair) throws Exception {
				System.out.println(pair._1+":"+pair._2);
			}
		});
		
		sc.close();
	}
}
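
Unlike the Scala version, this Java version hard-codes setMaster("local") and reads a plain local file, so it runs directly inside Eclipse with no cluster; the only prerequisite is that /home/pengyucheng/java/wordcount.txt exists. To target the cluster instead, drop the setMaster call and supply the master URL through spark-submit.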

3. pom.xml

The Java version uses Eclipse with the Maven plugin to manage its dependencies. The pom.xml configuration is reproduced here for later reference.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>

	<groupId>cool.pengych.spark</groupId>
	<artifactId>SparkApps</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>jar</packaging>
	<name>SparkApps</name>
	<url>http://maven.apache.org</url>
	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	</properties>
	<dependencies>
		<dependency>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
			<version>3.8.1</version>
			<scope>test</scope>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-core_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-sql_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-hive_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-streaming_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-streaming-kafka_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-graphx_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.spark</groupId>
			<artifactId>spark-mllib_2.10</artifactId>
			<version>1.6.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.hive</groupId>
			<artifactId>hive-jdbc</artifactId>
			<version>1.2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.httpcomponents</groupId>
			<artifactId>httpclient</artifactId>
			<version>4.4.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.httpcomponents</groupId>
			<artifactId>httpcore</artifactId>
			<version>4.4.1</version>
		</dependency>
	</dependencies>
	<build>
		<sourceDirectory>src/main/java</sourceDirectory>
		<testSourceDirectory>src/main/test</testSourceDirectory>
		<plugins>
			<plugin>
				<artifactId>maven-assembly-plugin</artifactId>
				<configuration>
					<descriptorRefs>
						<descriptorRef>jar-with-dependencies</descriptorRef>
					</descriptorRefs>
					<archive>
						<manifest>
							<mainClass />
						</manifest>
					</archive>
				</configuration>
				<executions>
					<execution>
						<id>make-assembly</id>
						<phase>package</phase>
						<goals>
							<goal>single</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
			<plugin>
				<groupId>org.codehaus.mojo</groupId>
				<artifactId>exec-maven-plugin</artifactId>
				<version>1.2.1</version>
				<executions>
					<execution>
						<goals>
							<goal>exec</goal>
						</goals>
					</execution>
				</executions>
				<configuration>
					<executable>java</executable>
					<includeProjectDependencies>true</includeProjectDependencies>
					<includePluginDependencies>false</includePluginDependencies>
					<classpathScope>compile</classpathScope>
					<mainClass>cool.pengych.spark.SparkApps.WordCount</mainClass>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-compiler-plugin</artifactId>
				<configuration>
					<source>1.6</source>
					<target>1.6</target>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>
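
With this build section, mvn package should also produce target/SparkApps-0.0.1-SNAPSHOT-jar-with-dependencies.jar, since the assembly plugin's make-assembly execution is bound to the package phase; that fat jar is the artifact to hand to spark-submit when deploying to the cluster.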


4. WordCount execution flow
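
In outline: textFile produces the RDD of lines; flatMap and map are narrow transformations, so they are pipelined into a single stage; reduceByKey introduces a shuffle and with it a second stage; collect is the action that finally makes the DAGScheduler submit both stages as tasks.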


Summary

Everything ran fine locally, but running WordCount on the cluster raised the exception below. My guess is that the network is to blame; I have not found a fix yet, so I am recording it here and will dig further later:

16/04/28 12:15:58 WARN netty.NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(0,java.io.IOException: Failed to create directory /home/hadoop/spark-1.6.0-bin-hadoop2.6/work/app-20160428121358-0004/0)] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
    at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
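
One lead worth noting for that follow-up: the wrapped IOException ("Failed to create directory /home/hadoop/spark-1.6.0-bin-hadoop2.6/work/app-20160428121358-0004/0") shows the worker failing to create its application work directory, so checking ownership and permissions of the work directory on each worker node may pay off faster than the network angle.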
