Interactive Maven project initialization on Linux
1) mkdir myspark; cd myspark
2) Initialize the project with Maven, which creates the directory layout and files:
mvn archetype:generate // interactive; downloads the required jars
Default archetype: org.apache.maven.archetypes:maven-archetype-quickstart
Enter manually: groupId: com.eric; artifactId: myspark; package: com.xxx
This generates a Java project with main/test trees and the related files:
hadoop@slave1:~/myspk/myspark/myspark$ find .
...
./src/main/java/com/eric/App.java
...
./src/test/java/com/eric/AppTest.java
./pom.xml
Or supply all the information on the command line (add -DinteractiveMode=false to skip the prompts):
mvn archetype:generate -DgroupId=com.eric -DartifactId=myspark -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
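For reference, the pom.xml generated by the quickstart archetype looks roughly like this (a sketch; exact contents depend on the archetype version):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.eric</groupId>
  <artifactId>myspark</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```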
3) Java sources go under /myspark/src/main/java/com/eric
4) Declare the required dependencies in pom.xml (the version should match the Spark assembly jar used at runtime, 1.5.2 here):
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.2</version>
</dependency>
5) mvn package to compile and package the project
6) Run. If it fails with an error connecting to the master, call conf.setMaster("local") on the SparkConf in the driver code:
java -cp myspark-1.0-SNAPSHOT.jar:/home/hadoop/work/spark-1.5.2/assembly/target/scala-2.10/spark-assembly-1.5.2-hadoop2.6.0.jar com.eric.JavaWordCount ~/test/exp.txt
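The source of com.eric.JavaWordCount is not included in these notes; the real class presumably does its counting through Spark's JavaRDD API. Stripped of Spark, the core counting logic is essentially the following (a hypothetical sketch; the class name WordCountSketch is made up for illustration):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the counting performed by com.eric.JavaWordCount;
// the real class presumably distributes this work via Spark's JavaRDD API.
public class WordCountSketch {
    // Count occurrences of each whitespace-separated word across the lines.
    public static Map<String, Integer> count(Iterable<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String w : line.trim().split("\\s+")) {
                if (!w.isEmpty()) counts.merge(w, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        count(Arrays.asList("a b a", "b"))
            .forEach((w, n) -> System.out.println(w + " " + n));
    }
}
```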
7) Run via the exec plugin; first edit pom.xml and add, under <build><plugins>:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.2.1</version>
</plugin>
Then run:
mvn exec:java -Dexec.mainClass="com.eric.JavaWordCount" -Dexec.args="/home/hadoop/test/exp.txt"
8) Compile and run from the command line with scalac
Add a .scala file (e.g. an object myscala with a main method) to the Maven-generated project and compile it:
scalac -d target/classes -classpath target/classes:/home/hadoop/work/spark-1.5.2/assembly/target/scala-2.10/spark-assembly-1.5.2-hadoop2.6.0.jar src/main/java/com/eric/myscala.scala
Run, with -cp supplying the classpath:
java -cp target/classes:/home/hadoop/work/spark-1.5.2/assembly/target/scala-2.10/spark-assembly-1.5.2-hadoop2.6.0.jar myscala ~/test/exp.txt
9) Compile and run Scala via Maven plugins
Edit pom.xml:
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.10.4</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.2</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.2.2</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.2.1</version>
    </plugin>
  </plugins>
</build>
Compile:
mvn scala:compile
Run:
mvn exec:java -Dexec.mainClass="myscala" -Dexec.args="/home/hadoop/test/log.txt"
Next, try the same project on Windows with IDEA.