Task 1
Write Scala code that uses Spark to fully extract the MySQL tables ChangeRecord, BaseMachine, MachineData, and ProduceRecord into the corresponding tables changerecord, basemachine, machinedata, and producerecord in Hive's ods database.
Question 1:
Extract the full data of the ChangeRecord table from the MySQL database shtd_industry into the table changerecord in Hive's ods database, keeping field order and types unchanged, and add a static partition with partition field etldate of type String, whose value is the day before the competition day (partition value format yyyyMMdd). Run show partitions ods.changerecord in the hive cli and paste a screenshot of the result under the corresponding task number in 【Release\任务B提交结果.docx】 on the client desktop.
Code implementation:
package ModuleB.BookTask5.Task1

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

object Task1_1 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Task1_1").setMaster("local")
    val spark = SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
    spark.sparkContext.setLogLevel("OFF")
    // partitionBy goes through Spark's dynamic-partition write path, so relax the mode
    spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

    // Read the full ChangeRecord table from MySQL
    // (MySQL table names are case-sensitive on Linux, so use the name from the task)
    val mysqldf = spark.read.format("jdbc")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("url", "jdbc:mysql://master:3306/shtd_industry")
      .option("user", "root")
      .option("password", "123456")
      .option("dbtable", "ChangeRecord")
      .load()
    mysqldf.show() // sanity check on the source data

    // Partition value: the day before the competition day, formatted yyyyMMdd
    val etldate = java.time.LocalDate.now().minusDays(1)
      .format(java.time.format.DateTimeFormatter.ofPattern("yyyyMMdd"))
    val df1 = mysqldf.withColumn("etldate", lit(etldate))

    // Write into ods.changerecord, partitioned by etldate
    df1.write.format("hive")
      .mode("append")
      .partitionBy("etldate")
      .saveAsTable("ods.changerecord")

    // Verify that the partition was created
    spark.sql("show partitions ods.changerecord").show()
    spark.stop()
  }
}
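Note that the task asks for a static partition, while `withColumn` + `partitionBy` writes through Spark's dynamic-partition path. An alternative is to register the DataFrame as a temp view and issue an `INSERT ... PARTITION (etldate='...')` statement against a pre-created table. Below is a minimal, Spark-free sketch of the two pieces of logic involved: computing the previous day in yyyyMMdd format and building the static-partition SQL string (the object and helper names are illustrative, not from the original code):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

object StaticPartitionSql {
  // Previous day's date in yyyyMMdd, as required for the etldate partition value
  def etldate(today: LocalDate): String =
    today.minusDays(1).format(DateTimeFormatter.ofPattern("yyyyMMdd"))

  // Build a static-partition INSERT for an ods table; assumes the target table
  // already exists and that `view` is a temp view holding the MySQL data
  def insertSql(table: String, view: String, date: String): String =
    s"INSERT OVERWRITE TABLE ods.$table PARTITION (etldate='$date') SELECT * FROM $view"

  def main(args: Array[String]): Unit = {
    val d = etldate(LocalDate.of(2024, 3, 1))
    println(d) // 20240229 (2024 is a leap year)
    println(insertSql("changerecord", "mysql_changerecord", d))
  }
}
```

In the Spark job this would be used as `mysqldf.createOrReplaceTempView("mysql_changerecord")` followed by `spark.sql(insertSql(...))`; with a static partition value the dynamic-partition-mode setting is no longer needed.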
If there are any errors in the code above, corrections from readers are welcome.