The stock Spark assembly jar does not depend on Hive; to use Spark HQL you must bundle the Hive-related dependencies into the Spark assembly jar. Build steps:
Assuming Maven is already installed:
1 Add environment variables. If these JVM settings are too small, the build may fail with an OOM error, so increase them:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
2 cd into the Spark source directory and run:
mvn -Pyarn -Dhadoop.version=2.5.0-cdh5.3.0 -Dscala.version=2.10.4 -Phive -Phive-thriftserver -DskipTests clean package
(In fact, with a CDH version it seems that mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package is enough.)
Note: set hadoop.version and the Scala version to match your environment.
After a long compilation (mine took two and a half hours), the build finally succeeded. The file spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar appears under assembly/target/scala-2.10. Open it with an archive tool (e.g. rar) and check whether the Hive JDBC package is included; if it is, the build succeeded.
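The jar-inspection step can also be scripted instead of opening the jar in an archive tool by hand. A minimal sketch, assuming `unzip` is available (a jar is just a zip archive); the jar path in the usage comment is the build output from above:

```shell
# Count Hive JDBC entries in an archive listing read from stdin.
# A non-zero count means the Hive classes made it into the assembly
# and the Hive-enabled build succeeded.
hive_jdbc_entries() {
  grep -c 'org/apache/hive/jdbc' || true
}

# Usage (path from the build output above):
# unzip -l assembly/target/scala-2.10/spark-assembly-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar | hive_jdbc_entries
```

`grep -c` prints 0 when nothing matches but exits non-zero, so the `|| true` keeps the function usable under `set -e`.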
The source directory contains make-distribution.sh, which can be used to build a binary package:
./make-distribution.sh --name custom-spark --skip-java-test --tgz -Pyarn -Dhadoop.version=2.5.0-cdh5.3.0 -Dscala.version=2.10.4 -Phive -Phive-thriftserver
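Once make-distribution.sh finishes, the resulting tarball can be sanity-checked without unpacking it. A sketch; the tgz filename is an assumption (it depends on the Spark version and the --name flag), and bin/spark-sql is the Hive-enabled CLI that a -Phive build should ship:

```shell
# Check that a Spark binary distribution tarball contains the
# Hive-enabled SQL CLI (bin/spark-sql) before deploying it.
dist_has_spark_sql() {
  tar tzf "$1" | grep -q 'bin/spark-sql$'
}

# Usage (filename is hypothetical, based on --name custom-spark):
# dist_has_spark_sql spark-1.2.0-bin-custom-spark.tgz && echo "spark-sql present"
```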
If you want IntelliJ IDEA to compile your Spark project (version 1.0.0 and above), follow these steps.
1 Clone the Spark project.
2 Compile the project once with mvn, because you need the Avro source files generated in the flume-sink module.
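The Maven pre-build in step 2 matters because IDEA cannot generate those Avro sources itself. A small sketch of how to verify they exist before opening the project; the directory layout is an assumption based on the Spark 1.x source tree (the path itself is the one referenced in step 5 below):

```shell
# Check whether a Maven build has already generated the Avro sources
# that IDEA needs for the flume-sink module. Takes the module root
# (e.g. external/flume-sink in the Spark 1.x tree) as its argument.
avro_sources_ready() {
  dir="$1/target/scala-2.10/src_managed/main/compiled_avro"
  [ -d "$dir" ] && [ -n "$(find "$dir" -name '*.java' | head -n 1)" ]
}

# Usage (run from the Spark source root; module path is an assumption):
# avro_sources_ready external/flume-sink && echo "ready for IDEA"
```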
3 open spark/pom.xml with IDEA
4 Check the profiles you need in the "Maven Projects" window.
5 Modify the source path of the flume-sink module: mark "target/scala-2.10/src_managed/main/compiled_avro" as a source path.
6 If you checked the yarn profile, you need to:
remove the module "spark-yarn_2.10"
add "spark/yarn/common/src/main/scala" and "spark/yarn/stable/src/main/scala" as source paths of the module "yarn-parent_2.10"
7 Then you can run "Build -> Rebuild Project" in IDEA.
PS: you should run "Rebuild" again after running any mvn or sbt command against the Spark project.