1.8.6 Big Data - Integrating Spark with MySQL

Copy the MySQL JDBC driver jar from Hive's lib directory into Spark's jars directory:

mysql-connector-java-5.1.27-bin.jar  

Installing MySQL with yum

The VM needs internet access for the install: switch its network adapter to auto, install, then switch it back afterwards.
Install and start MySQL as the root user, or prefix the commands with sudo.

yum install mysql-server
 
which mysql
/usr/bin/mysql
 
service mysqld status
service mysqld start

After switching the network setting back, you can connect with SecureCRT again.

/usr/bin/mysqladmin -u root password '123456' 
mysql -uroot -p123456
mysql> show databases;
 
CREATE TABLE webCount (
  titleName varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  count int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
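Once the table exists, a DataFrame with a matching schema can be appended to it from Spark. A minimal sketch, assuming webCount was created in the test database and that countDF is a DataFrame with columns (titleName, count) produced earlier in the job:

```scala
import java.util.Properties

// Connection properties; host and credentials as used elsewhere in this setup
val props = new Properties()
props.setProperty("driver", "com.mysql.jdbc.Driver")
props.setProperty("user", "root")
props.setProperty("password", "123456")

// mode("append") keeps the pre-created table and its utf8 schema,
// inserting rows instead of trying to recreate the table
countDF.write
  .mode("append")
  .jdbc("jdbc:mysql://bigdata-pro01.kfk.com:3306/test", "webCount", props)
```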

Writing a DataFrame to MySQL (the target table is created automatically)

scala> val df = spark.sql("select * from kfk.test")
 
scala> import java.util.Properties
import java.util.Properties
 
scala> val pro = new Properties()
pro: java.util.Properties = {}
 
scala>  pro.setProperty("driver","com.mysql.jdbc.Driver")
res1: Object = null
 
scala> df.write.jdbc("jdbc:mysql://bigdata-pro01.kfk.com/test?user=root&password=123456","spark1",pro)
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| spark1         |
+----------------+
 
mysql> select * from spark1;
+--------+----------+
| userid | username |
+--------+----------+
| 001    | spark    |
| 002    | hive     |
| 003    | hbase    |
| 004    | hadoop   |
+--------+----------+
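Running the same df.write.jdbc(...) call a second time fails, because spark1 now exists and the default save mode is error-on-exists. A save mode controls this behavior; a sketch reusing the pro properties object from above:

```scala
// "overwrite" drops and recreates spark1 with the DataFrame's contents;
// "append" would instead add rows to the existing table
df.write
  .mode("overwrite")
  .jdbc("jdbc:mysql://bigdata-pro01.kfk.com/test?user=root&password=123456", "spark1", pro)
```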

Reading a MySQL table

scala> :paste
// Entering paste mode (ctrl-D to finish)
 
val jdbcDF = spark.read 
        .format("jdbc") 
        .option("url","jdbc:mysql://bigdata-pro01.kfk.com:3306/test") 
        .option("dbtable","spark1") 
        .option("user","root")
        .option("password","123456") 
        .load()
 
// Exiting paste mode, now interpreting. (Ctrl-D)
 
jdbcDF: org.apache.spark.sql.DataFrame = [userid: string, username: string]
 
scala> jdbcDF.show
+------+--------+
|userid|username|
+------+--------+
|   001|   spark|
|   002|    hive|
|   003|   hbase|
|   004|  hadoop|
+------+--------+
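The dbtable option accepts anything valid in a SQL FROM clause, so a subquery can push filtering down to MySQL instead of loading the whole table into Spark. A sketch against the same spark1 table:

```scala
// The subquery runs on the MySQL side; only matching rows reach Spark.
// The alias ("t") is required by MySQL for a derived table.
val hDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://bigdata-pro01.kfk.com:3306/test")
  .option("dbtable", "(select userid, username from spark1 where username like 'h%') t")
  .option("user", "root")
  .option("password", "123456")
  .load()

hDF.show()   // only rows whose username starts with 'h'
```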