First, activate the Spark virtual environment and install the dependencies and other required packages:
conda activate pyspark
yum install zlib-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel libffi-devel gcc make gcc-c++ python-devel cyrus-sasl-devel cyrus-sasl-plain cyrus-sasl-gssapi -y
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pyhive pymysql sasl thrift thrift_sasl
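After installation, a quick sanity check confirms the Python packages import cleanly (this one-liner is only a suggested verification, not part of the original steps):
python -c "from pyhive import hive; import pymysql, sasl, thrift, thrift_sasl; print('ok')"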
Go into the sbin directory under the Spark installation folder and start the Thrift Server service, which keeps listening on its port and can receive SQL statements at any time:
start-thriftserver.sh
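If you need to pin the listening host and port explicitly, the script accepts Hive configuration properties on the command line; a sketch, assuming the host and port used in the test code below:
start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=10000 \
  --hiveconf hive.server2.thrift.bind.host=192.168.88.161 \
  --master local[2]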
Make sure the Hadoop cluster is up and that Hive's metastore service and hiveserver2 service are running:
nohup hive --service metastore 2>&1 &
nohup hive --service hiveserver2 2>&1 &
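Before moving on, it is worth verifying that the services are actually listening (10000 is the default hiveserver2/Thrift Server port and 9083 the default metastore port; adjust to your configuration):
netstat -ntlp | grep -E '10000|9083'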
Test code (make sure there is a queryable table first):
# coding:utf8
from pyhive import hive

if __name__ == '__main__':
    # Get a connection to Hive (via the Spark ThriftServer)
    conn = hive.Connection(host="192.168.88.161", port=10000, username="root", database='default')
    # Get a cursor object
    cursor = conn.cursor()
    # Execute SQL
    cursor.execute("SELECT * FROM student")
    # Fetch all rows via the fetchall API
    result = cursor.fetchall()
    print(result)
    # Release the cursor and connection
    cursor.close()
    conn.close()
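For analysis work, the same DB-API connection can feed pandas directly; a minimal sketch, assuming pandas is installed (newer pandas versions print a warning for non-SQLAlchemy connections but still work):

import pandas as pd
from pyhive import hive

conn = hive.Connection(host="192.168.88.161", port=10000, username="root", database='default')
# read_sql accepts any DB-API connection and returns a DataFrame
df = pd.read_sql("SELECT * FROM student", conn)
print(df.head())
conn.close()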
To connect to the database, the corresponding JDBC driver JAR package also needs to be imported.
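A common case is the MySQL JDBC driver; one way to make it visible to both Hive and Spark is to copy it into their library directories (the jar version and paths below are assumptions, not from the original):
# Assumed jar name/version; download mysql-connector-java yourself
cp mysql-connector-java-5.1.41.jar $HIVE_HOME/lib/
cp mysql-connector-java-5.1.41.jar $SPARK_HOME/jars/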