Connecting Spark to MySQL, Hive, and HBase

This article uses IntelliJ IDEA as the development tool, JDK 1.8, a CentOS 7 virtual machine, Scala 2.11.12, and Spark spark-2.4.4-bin-hadoop2.6.

The MySQL version is 5.6.50 MySQL Community Server (GPL), the Hive version is hive-1.1.0-cdh5.14.2, and the HBase version is hbase-1.2.0-cdh5.14.2.

1. Connecting Spark to MySQL

Create a new Maven project named sparkConnect.

Add the following to pom.xml:

<properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.version>2.11.12</scala.version>
        <spark.version>2.4.4</spark.version>
    </properties>

    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <dependencies>
        <!--scala-->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- spark-core -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- spark-sql -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- spark-hive -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- spark-graphx -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-graphx_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- mysql-connector-java JDBC driver -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.32</version>
        </dependency>



        <!-- log4j -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <!-- junit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>


    </dependencies>


    <build>
        <plugins>
            <!-- Java compiler plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <!-- Scala compiler plugin -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <configuration>
                            <includes>
                                <include>**/*.scala</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- bundle dependencies into the jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

Create a log4j.properties file in the resources directory. (Note that only the stdout appender is attached to the rootLogger; the logfile appender below is defined but stays inactive unless you add it to a logger.)

log4j.rootLogger=WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spring.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

Create Spark2Mysql.scala:

import java.util.Properties

import org.apache.spark.sql.{DataFrame, SparkSession}

object Spark2Mysql extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()
  val url = "jdbc:mysql://single:3306/school"
  val tableName = "score"
  val props = new Properties()

  // set the user name, password, and JDBC driver
  props.setProperty("user", "root")
  props.setProperty("password", "******")
  props.setProperty("driver", "com.mysql.jdbc.Driver")
  // read the MySQL table with spark.read.jdbc
  val df1: DataFrame = spark.read.jdbc(url, tableName, props)
//  df1.show()
  // write df1 back to the same table in append mode
  // (note: append re-inserts the rows just read, so the table ends up with duplicates)
  df1.write.mode("append").jdbc(url, tableName, props)
  // register the temp view score1 and query it with spark.sql
  df1.createOrReplaceTempView("score1")
  spark.sql("select * from score1").show()
}

Running this prints the contents of the score table (queried through the score1 temp view).
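Because the append write above re-inserts the rows that were just read, running it repeatedly keeps growing the table. A safer pattern is to copy into a separate table with an explicit SaveMode. Below is a minimal sketch, assuming a hypothetical target table score_backup (created by Spark if it does not exist); the connection details are the same as above:

import java.util.Properties

import org.apache.spark.sql.{SaveMode, SparkSession}

object Spark2MysqlBackup extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()

  val url = "jdbc:mysql://single:3306/school"
  val props = new Properties()
  props.setProperty("user", "root")
  props.setProperty("password", "******")
  props.setProperty("driver", "com.mysql.jdbc.Driver")

  // read the source table, then copy it into a separate table;
  // SaveMode.Overwrite drops and recreates score_backup, SaveMode.Append would add rows instead
  val df = spark.read.jdbc(url, "score", props)
  df.write.mode(SaveMode.Overwrite).jdbc(url, "score_backup", props)

  spark.stop()
}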

2. Connecting Spark to Hive

Create a new Maven project.

Add the following to pom.xml:

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <hadoop.version>2.6.0-cdh5.14.2</hadoop.version>
    <hive.version>1.1.0-cdh5.14.2</hive.version>
    <hbase.version>1.2.0-cdh5.14.2</hbase.version>
    <scala.version>2.11.12</scala.version>
    <spark.version>2.4.4</spark.version>
  </properties>

  <repositories>
    <repository>
      <id>cloudera</id>
      <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
  </repositories>

  <dependencies>
    <!--scala-->
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>

    <!-- spark-core -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <!-- spark-sql -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <!-- spark-hive -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <!-- mysql-connector-java -->
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.31</version>
    </dependency>

    <!-- spark-graphx -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-graphx_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>


    <!-- hadoop -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>

    <!-- log4j -->
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>

    <!-- junit -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
    </dependency>
  </dependencies>


  <build>
    <plugins>
      <!-- Java compiler plugin -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.2</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
          <encoding>UTF-8</encoding>
        </configuration>
        <executions>
          <execution>
            <phase>compile</phase>
            <goals>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>

      <!-- Scala compiler plugin -->
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <version>2.15.2</version>
        <executions>
          <execution>
            <id>scala-compile-first</id>
            <goals>
              <goal>compile</goal>
            </goals>
            <configuration>
              <includes>
                <include>**/*.scala</include>
              </includes>
            </configuration>
          </execution>
        </executions>
      </plugin>

      <!-- bundle dependencies into the jar -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

Place a hive-site.xml file in the resources directory:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://single:9000/opt/software/hadoop/hive110/warehouse</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.153.130:3306/hive110?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>
</configuration>

Create Spark2Hive.scala:

import org.apache.spark.sql.SparkSession

object Spark2Hive extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .enableHiveSupport() // enable Hive support
    .getOrCreate()

  // query a table stored in Hive
  spark.sql("select * from exam.stu").show()
}

Running this prints the rows of the exam.stu table.
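Writing back into Hive goes through the same Hive-enabled session. A minimal sketch, assuming the exam database exists and using a hypothetical target table exam.stu_copy:

import org.apache.spark.sql.{SaveMode, SparkSession}

object Spark2HiveWrite extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .enableHiveSupport()
    .getOrCreate()

  val df = spark.sql("select * from exam.stu")
  // saveAsTable creates a managed Hive table from the DataFrame's schema;
  // SaveMode.Overwrite replaces exam.stu_copy if it already exists
  df.write.mode(SaveMode.Overwrite).saveAsTable("exam.stu_copy")

  spark.stop()
}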

3. Connecting Spark to HBase

Create a new Maven project.

In pom.xml (note that the two dependencies under <!-- spark-hbase --> must be declared last, otherwise you get errors; this is likely because hbase-server pulls in transitive dependencies that clash with Spark's, and Maven resolves equal-depth version conflicts by declaration order, so listing the HBase artifacts last lets Spark's versions win):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.alisa</groupId>
    <artifactId>sparkConnect</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <hadoop.version>2.6.0-cdh5.14.2</hadoop.version>
        <hive.version>1.1.0-cdh5.14.2</hive.version>
        <hbase.version>1.2.0-cdh5.14.2</hbase.version>
        <scala.version>2.11.12</scala.version>
        <spark.version>2.4.4</spark.version>
    </properties>

    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
    </repositories>

    <dependencies>
        <!--scala-->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- spark-core -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- spark-sql -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!-- spark-hive -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>


        <!-- mysql-connector-java -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.31</version>
        </dependency>

        <!-- spark-graphx -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-graphx_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>


        <!-- hadoop -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- log4j -->
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <!-- junit -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>

        <!-- spark-hbase -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>

    </dependencies>


    <build>
        <plugins>
            <!-- Java compiler plugin -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <!-- Scala compiler plugin -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                        <configuration>
                            <includes>
                                <include>**/*.scala</include>
                            </includes>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- bundle dependencies into the jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Create Spark2HBaseRead.scala:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.hadoop.hbase.util.Bytes

object Spark2HBaseRead extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()

  // set up the HBase connection configuration
  val conf: Configuration = HBaseConfiguration.create()
  conf.set("hbase.zookeeper.property.clientPort", "2181") // ZooKeeper client port
  conf.set("hbase.zookeeper.quorum", "192.168.153.130") // ZooKeeper quorum
  conf.set(TableInputFormat.INPUT_TABLE, "kb10:student")
  val sc: SparkContext = spark.sparkContext
  // read the table into an RDD of (row key, Result) pairs
  val HbaseRDD: RDD[(ImmutableBytesWritable, Result)] = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
    classOf[ImmutableBytesWritable],
    classOf[Result])
  val count: Long = HbaseRDD.count()
  println(count)
  HbaseRDD.foreach({ case (_, result) => {
    // row key
    val key = Bytes.toString(result.getRow)
    // fetch each cell by column family and column qualifier
    val name = Bytes.toString(result.getValue("basicinfo".getBytes, "name".getBytes))
    val age = Bytes.toString(result.getValue("basicinfo".getBytes, "age".getBytes))
    println("Row key:" + key + " Name:" + name + " Age:" + age)
  }}
  )
  sc.stop()

}

Running this prints the row count followed by each row's key, name, and age.
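To query the HBase rows with Spark SQL, the (ImmutableBytesWritable, Result) pairs can be mapped into a case class and converted to a DataFrame. A minimal sketch under the same table layout (row key plus basicinfo:name and basicinfo:age), using a hypothetical Student case class:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.SparkSession

case class Student(id: String, name: String, age: String)

object Spark2HBaseDF extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()
  import spark.implicits._

  val conf = HBaseConfiguration.create()
  conf.set("hbase.zookeeper.property.clientPort", "2181")
  conf.set("hbase.zookeeper.quorum", "192.168.153.130")
  conf.set(TableInputFormat.INPUT_TABLE, "kb10:student")

  // parse every Result into a Student, then turn the RDD into a DataFrame
  val df = spark.sparkContext
    .newAPIHadoopRDD(conf, classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], classOf[Result])
    .map { case (_, result) =>
      Student(
        Bytes.toString(result.getRow),
        Bytes.toString(result.getValue("basicinfo".getBytes, "name".getBytes)),
        Bytes.toString(result.getValue("basicinfo".getBytes, "age".getBytes)))
    }
    .toDF()

  df.createOrReplaceTempView("student")
  spark.sql("select * from student").show()
  spark.stop()
}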

Create Spark2HBaseWrite.scala:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapred.JobConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object Spark2HBaseWrite extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()

  // set up the HBase connection configuration
  val conf: Configuration = HBaseConfiguration.create()
  conf.set("hbase.zookeeper.property.clientPort", "2181") // ZooKeeper client port
  conf.set("hbase.zookeeper.quorum", "192.168.153.130") // ZooKeeper quorum
  val jobConf = new JobConf(conf)
  jobConf.setOutputFormat(classOf[TableOutputFormat])
  jobConf.set(TableOutputFormat.OUTPUT_TABLE, "kb10:student")
  val sc: SparkContext = spark.sparkContext
  // insert two rows into HBase
  val dataRDD: RDD[String] = sc.makeRDD(Array("1003,alisa,22", "1004,jason,34"))
  val rdd: RDD[(ImmutableBytesWritable, Put)] = dataRDD.map(_.split(",")).map({ arr => {
    // one Put object represents one row; the row key is passed to the constructor
    // every value must be converted with org.apache.hadoop.hbase.util.Bytes.toBytes
    // addColumn takes three arguments: column family, column qualifier, value
    // (Put.add with this signature is deprecated in HBase 1.x, so addColumn is used here)
    val put = new Put(Bytes.toBytes(arr(0)))
    put.addColumn(Bytes.toBytes("basicinfo"), Bytes.toBytes("name"), Bytes.toBytes(arr(1)))
    put.addColumn(Bytes.toBytes("basicinfo"), Bytes.toBytes("age"), Bytes.toBytes(arr(2)))
    // the pair must be (ImmutableBytesWritable, Put): only an
    // RDD[(ImmutableBytesWritable, Put)] can be written with saveAsHadoopDataset
    (new ImmutableBytesWritable, put)
  }
  })
  rdd.saveAsHadoopDataset(jobConf)
  sc.stop()
}

After running Spark2HBaseWrite.scala, run Spark2HBaseRead again; the two newly inserted rows (1003 and 1004) now show up in the output.
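For reference, the same write can also be expressed with the newer mapreduce API and saveAsNewAPIHadoopDataset. A minimal sketch under the same assumptions (ZooKeeper at 192.168.153.130:2181, target table kb10:student, column family basicinfo); the row 1005 is made up for illustration:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.SparkSession

object Spark2HBaseWriteNewAPI extends App {
  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName(this.getClass.getName)
    .getOrCreate()

  val conf = HBaseConfiguration.create()
  conf.set("hbase.zookeeper.property.clientPort", "2181")
  conf.set("hbase.zookeeper.quorum", "192.168.153.130")
  conf.set(TableOutputFormat.OUTPUT_TABLE, "kb10:student")

  // a Job instance carries the output-format class in its configuration
  val job = Job.getInstance(conf)
  job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

  val rdd = spark.sparkContext
    .makeRDD(Array("1005,mike,28"))
    .map(_.split(","))
    .map { arr =>
      val put = new Put(Bytes.toBytes(arr(0)))
      put.addColumn(Bytes.toBytes("basicinfo"), Bytes.toBytes("name"), Bytes.toBytes(arr(1)))
      put.addColumn(Bytes.toBytes("basicinfo"), Bytes.toBytes("age"), Bytes.toBytes(arr(2)))
      (new ImmutableBytesWritable, put)
    }

  rdd.saveAsNewAPIHadoopDataset(job.getConfiguration)
  spark.stop()
}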
