Runtime environment:

A diagram would show this more intuitively. Spark SQL runs in two modes here, on a standalone Spark cluster and on a YARN cluster, operating on data stored in Hive. Hive itself is independent, so it can also be run directly to process the same data.
Spark SQL programs are fairly easy to write; the HiveFromSpark example that ships with Spark is easy to follow.
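To try that example without writing any code first, Spark's bundled run-example script can launch it directly (this assumes $SPARK_HOME points at a Spark 1.x build compiled with Hive support; the abbreviated class name is resolved under org.apache.spark.examples):

```shell
# Run the bundled HiveFromSpark example.
# Assumes hive-site.xml has already been copied into $SPARK_HOME/conf
# so the example talks to the real Hive metastore.
cd $SPARK_HOME
./bin/run-example sql.hive.HiveFromSpark
```

This is only a smoke test of the Hive integration; the actual job below is submitted with spark-submit.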
First, run it on the standalone Spark cluster:
Copy Hive's hive-site.xml configuration file into the ${SPARK_HOME}/conf directory.
#!/bin/bash
cd $SPARK_HOME
./bin/spark-submit \
--class com.datateam.spark.sql.HotelHive \
--master spark://192.168.44.80:8070 \
--executor-memory 2G \
--total-executor-cores 10 \
/home/q/spark/spark-1.1.1-SNAPSHOT-bin-2.2.0/jobs/spark-jobs-20141023.jar
Running the script produced the following error:
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table dw_hotel_price_log
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:958)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:924)
...
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error :
The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH.
Please check your CLASSPATH specification, and the name of the driver.
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:237)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:110)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>
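The stack trace says it plainly: the MySQL JDBC driver that the Hive metastore needs is not on the driver's classpath when the job runs under spark-submit. A common fix is to pass the connector jar explicitly via --driver-class-path (for the driver-side metastore connection) and --jars (to ship it to executors). The jar path below is an assumption; adjust it to wherever the connector lives on your machine:

```shell
#!/bin/bash
# Hypothetical location of the MySQL connector jar -- adjust to your install.
MYSQL_JAR=/home/q/spark/lib/mysql-connector-java-5.1.30-bin.jar

cd $SPARK_HOME
./bin/spark-submit \
  --class com.datateam.spark.sql.HotelHive \
  --master spark://192.168.44.80:8070 \
  --executor-memory 2G \
  --total-executor-cores 10 \
  --driver-class-path $MYSQL_JAR \
  --jars $MYSQL_JAR \
  /home/q/spark/spark-1.1.1-SNAPSHOT-bin-2.2.0/jobs/spark-jobs-20141023.jar
```

With the driver on the classpath, DataNucleus can create its connection pool against the metastore database and the "Unable to fetch table" error goes away.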