First, start the Thrift server:
./start-thriftserver.sh --master local[2] --jars /opt/module/hive-1.2.2/lib/mysql-connector-java-5.1.27-bin.jar
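Before wiring up JDBC, it helps to confirm the Thrift server is actually listening. Spark ships a beeline client for this; the command below is a sketch assuming the same host, port 10000, and the hadoop user from this setup:

```
# Connect interactively with beeline to verify the Thrift server is up
./beeline -u jdbc:hive2://192.168.31.102:10000 -n hadoop
```

If beeline connects and can run `show tables;`, the JDBC code below should work against the same endpoint.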
Next, add the pom dependency:
<dependency>
    <groupId>org.spark-project.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1.spark2</version>
</dependency>
Write the code as follows:
import java.sql.DriverManager

/**
 * Access Spark SQL data over JDBC (through the Thrift server started above).
 */
object SparkSQLJdbcDemo {
  def main(args: Array[String]): Unit = {
    // Load the Hive JDBC driver
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://192.168.31.102:10000", "hadoop", "hadoop")
    val pstmt = conn.prepareStatement("select * from emp")
    val rs = pstmt.executeQuery()
    while (rs.next()) {
      println("empno: " + rs.getInt("empno") + " , ename: " + rs.getString("ename"))
    }
    rs.close()
    pstmt.close()
    conn.close()
  }
}
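One caveat about the demo above: if `executeQuery` throws, the connection is never closed. A more defensive variant (a sketch, reusing the same assumed host, port, and credentials) releases the resources in a finally block:

```scala
import java.sql.{Connection, DriverManager, PreparedStatement, ResultSet}

object SparkSQLJdbcSafeDemo {
  def main(args: Array[String]): Unit = {
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    var conn: Connection = null
    var pstmt: PreparedStatement = null
    var rs: ResultSet = null
    try {
      conn = DriverManager.getConnection("jdbc:hive2://192.168.31.102:10000", "hadoop", "hadoop")
      pstmt = conn.prepareStatement("select * from emp")
      rs = pstmt.executeQuery()
      while (rs.next()) {
        println("empno: " + rs.getInt("empno") + " , ename: " + rs.getString("ename"))
      }
    } finally {
      // Close in reverse order of creation; skip anything a failed step left null
      if (rs != null) rs.close()
      if (pstmt != null) pstmt.close()
      if (conn != null) conn.close()
    }
  }
}
```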
The output is: