1. Copy the hive-site.xml file into Spark's conf directory.
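The copy step above can be sketched as follows (a minimal sketch; it assumes HIVE_HOME and SPARK_HOME are set to your actual install paths):

```shell
# Copy Hive's metastore config into Spark's conf directory so that
# spark-shell / spark-sql can locate the Hive metastore.
# Adjust HIVE_HOME / SPARK_HOME to match your environment.
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"
```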
2.[hadoop@hadoop002 bin]$ ./spark-shell --master local[2] --jars ~/software/mysql-connector-java-5.1.47.jar
Note: use the 5.x version of mysql-connector-java.
scala> spark.sql("show databases").show
+------------+
|databaseName|
+------------+
| default|
| test|
+------------+
scala> spark.sql("select * from test.wc").show
20/02/19 08:50:05 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
(This WARN about global_temp is a well-known, harmless message in Spark 2.x and can be ignored.)
+-----------------+
| sentence|
+-----------------+
|hello hello hello|
| spark hadoop|
| hive|
+-----------------+
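The same queries run in the shell above can also be issued from a standalone application. A minimal sketch, assuming Spark 2.x on the classpath and the same hive-site.xml under $SPARK_HOME/conf (the object name HiveQueryApp is illustrative, not from the original notes):

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: build a SparkSession with Hive support, mirroring
// what spark-shell does when hive-site.xml is on the classpath.
object HiveQueryApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("HiveQueryApp")
      .enableHiveSupport()          // picks up hive-site.xml from $SPARK_HOME/conf
      .getOrCreate()

    spark.sql("show databases").show()
    spark.sql("select * from test.wc").show()

    spark.stop()
  }
}
```

When submitting this with spark-submit, the MySQL driver jar still has to be supplied, e.g. via --jars and --driver-class-path as in the steps above.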
3. An alternative way to start it
[hadoop@hadoop002 bin]$ ./spark-sql --master local --jars ~/software/mysql-connector-java-5.1.47.jar --driver-class-path ~/software/mysql-connector-java-5.1.47.jar
--driver-class-path indicates that the driver side also needs this jar. An alternative is to put the jar under $SPARK_HOME/lib, but then every Spark application that starts will load it.
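To avoid repeating the --jars and --driver-class-path flags on every launch, the same settings can be made permanent in $SPARK_HOME/conf/spark-defaults.conf (both property names are standard Spark configuration; the path shown assumes the jar location from the commands above):

```
spark.jars                    /home/hadoop/software/mysql-connector-java-5.1.47.jar
spark.driver.extraClassPath   /home/hadoop/software/mysql-connector-java-5.1.47.jar
```

This trades per-command flags for a global default, so like the $SPARK_HOME/lib approach it affects every Spark application started from this installation.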
spark-sql (default)> desc formatted wc;
20/02/19 21:08:05 INFO metastore.HiveMetaStore: 0: get_database: test
20/02/19 21:08:05 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: test
20/02/19 21:08:05 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=wc
20/02/19 21:08:05 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=test tbl=wc
20/02/19 21:08:05 INFO metastore.HiveMetaStore: 0: get_table : db=test tbl=wc
20/02/19 21:08:05 INFO HiveMetaStore.audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=test tbl=wc