Setting Up a Spark Environment with IntelliJ IDEA, Maven, and Java

This article shows how to build a Spark word-count application in Java using IntelliJ IDEA and Maven. It walks through the project configuration and code, then demonstrates how to package the application as a jar and run it on a cluster.


Environment: IntelliJ IDEA as the development tool, and a cluster of four nodes, hadoop01 through hadoop04 (not needed if you skip the Spark cluster test). Spark is installed on the cluster at /opt/moudles/spark-1.6.1/.

Preparation

First, put a file a.txt into HDFS on the cluster; the project will count the words in this file.
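A minimal way to upload it from a cluster node, assuming a.txt sits in the current directory and HDFS is reachable at hdfs://hadoop01:9000 as used later in the code:

hdfs dfs -put a.txt /a.txt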

Creating the Maven Project

Click File->New->Project…
Click Next; the GroupId and ArtifactId can be named however you like.
Click Next.
Click Finish. The project is created and the main window opens.

Writing the Word Count Code

In pom.xml, append the following configuration after the version tag:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-graphx_2.10</artifactId>
        <version>1.6.1</version>
    </dependency>
    <dependency>
        <groupId>commons-lang</groupId>
        <artifactId>commons-lang</artifactId>
        <version>2.3</version>
    </dependency>
</dependencies>
<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/test/java</testSourceDirectory>
    <plugins>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <!-- main class of the fat jar; matches the class submitted below -->
                        <mainClass>SparkWordCount</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.3.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>exec</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <executable>java</executable>
                <includeProjectDependencies>false</includeProjectDependencies>
                <classpathScope>compile</classpathScope>
                <mainClass>com.dt.spark.SparkApps.App</mainClass>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.6</source>
                <target>1.6</target>
            </configuration>
        </plugin>
    </plugins>
</build>
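With the assembly plugin bound to the package phase as above, you can also build a self-contained jar from the command line instead of using the IDEA artifact approach described later (the exact file name depends on your ArtifactId and version):

mvn clean package

This produces target/<artifactId>-<version>-jar-with-dependencies.jar, which can be submitted with spark-submit in the same way.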

Click Import Changes in the lower-right corner so that Maven downloads the declared dependencies.
Click File->Project Structure…->Modules and mark both the src and main folders as Sources.
Create a Java class named SparkWordCount under the java folder.
The code for this file is:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;
import scala.Tuple2;

import java.util.Arrays;

/**
 * Created by hadoop on 17-4-4.
 */
public class SparkWordCount {
    public static void main(String[] args){
        // Configure the application; the master URL is supplied by spark-submit
        SparkConf conf = new SparkConf()
                .setAppName("WordCountCluster");
        // Create the Java Spark context
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Read the input file from HDFS
        JavaRDD<String> lines = sc.textFile("hdfs://hadoop01:9000/a.txt");
        // Split each line into individual words
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>(){
            private static final long serialVersionUID = 1L;

            @Override
            public Iterable<String> call(String line) throws Exception{
                return Arrays.asList(line.split(" "));
            }
        });


        // Map each word to a (word, 1) pair
        JavaPairRDD<String,Integer> pairs = words.mapToPair(
                new PairFunction<String, String, Integer>() {

                    private static final long serialVersionUID = 1L;

                    @Override
                    public Tuple2<String, Integer> call(String word) throws Exception {
                        return new Tuple2<String, Integer>(word,1);
                    }
                }
        );

        // Sum the counts for each word
        JavaPairRDD<String,Integer> wordCounts = pairs.reduceByKey(
                new Function2<Integer, Integer, Integer>() {
                    @Override
                    public Integer call(Integer v1, Integer v2) throws Exception {
                        return v1+v2;
                    }
                }
        );


        // Print each word and its count; on a cluster this output goes to the executors' stdout
        wordCounts.foreach(new VoidFunction<Tuple2<String, Integer>>() {
            @Override
            public void call(Tuple2<String, Integer> wordCount) throws Exception {
                System.out.println(wordCount._1+" : "+ wordCount._2 );
            }
        });

        sc.close();

    }
}
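To try the job inside IDEA without a cluster, a minimal sketch is to run in local mode against a local file (the input path below is a hypothetical example):

// Local-mode variant for testing in the IDE
SparkConf conf = new SparkConf()
        .setAppName("WordCountLocal")
        .setMaster("local[*]"); // use all local cores instead of a cluster master
JavaSparkContext sc = new JavaSparkContext(conf);
// hypothetical local input path; replace with a file on your machine
JavaRDD<String> lines = sc.textFile("/tmp/a.txt");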

Building the Jar

Click File->Project Structure…->Artifacts, then click the + sign.
Select the Main Class.
Click OK.
Since the cluster already ships the Spark-related jars, remove those dependency jars from the artifact.
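If you build the jar with the assembly plugin instead, an alternative to deleting jars by hand is to mark the cluster-provided libraries with the provided scope in pom.xml, so they are left out of the fat jar, e.g.:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.6.1</version>
    <!-- provided: supplied by the cluster at runtime, excluded from the fat jar -->
    <scope>provided</scope>
</dependency>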
Click Apply, then OK. Now choose Build->Build Artifacts…->Build from the menu bar; the jar is generated in the out directory.

Uploading the Jar to the Cluster and Running It

This article uses scp to upload the jar to the cluster; on Windows you can use FileZilla or Xftp instead.
Run the following command on the cluster to execute the job. Note that spark-submit options such as --master must come before the application jar; anything placed after the jar is passed to the application as arguments:

/opt/moudles/spark-1.6.1/bin/spark-submit --class SparkWordCount --master spark://192.168.20.171:7077 sparkStudy.jar

The final output lists each word with its count, one "word : count" line per word.
