I am connecting PySpark to Redshift by following this URL:
I created a folder, downloaded
RedshiftJDBC42-1.2.12.1017.jar
and created a Python file, sample.py, with the following code:

from pyspark.conf import SparkConf
from pyspark.sql import SparkSession, HiveContext
aws_access_key = "xxxx"
aws_secret_key = "xxxxyyyy"
bucket = "redshiftbucketadrian"
spark = SparkSession.builder.master("yarn").appName("Connect to redshift").enableHiveSupport().getOrCreate()
sc = spark.sparkContext
sql_context = HiveContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", aws_access_key)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", aws_secret_key)
df = sql_context.read\
.format("com.databricks.spark.redshift")\
.option("url", "jdbc:redshift://xxxxx")\
.option("dbtable"
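The snippet cuts off at the dbtable option. For reference, a spark-redshift read needs at least a JDBC url, a dbtable (or query), and an S3 tempdir where the connector stages unloaded data. Below is a minimal sketch of the option set; the JDBC URL, table name, and tempdir path are placeholders I made up, not values from my setup:

```python
# Hedged sketch: every value marked "placeholder" is an assumption,
# not a real endpoint or table -- substitute your own cluster details.
aws_access_key = "xxxx"          # placeholder, as in the question
aws_secret_key = "xxxxyyyy"      # placeholder, as in the question
bucket = "redshiftbucketadrian"  # bucket name from the question

# spark-redshift requires a tempdir on S3 in addition to the JDBC URL
# and the table (or query) to read.
redshift_options = {
    "url": "jdbc:redshift://xxxxx",               # placeholder JDBC URL
    "dbtable": "my_table",                        # hypothetical table name
    "tempdir": "s3n://{}/tmp/".format(bucket),    # staging dir on S3
    "forward_spark_s3_credentials": "true",       # reuse the Hadoop S3 keys
}

# With a live cluster, the read itself would look like:
# df = (spark.read
#         .format("com.databricks.spark.redshift")
#         .options(**redshift_options)
#         .load())
```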