Spring Boot multi-datasource configuration: integrating Hive and MySQL

The project I have been working on originally used MySQL only. Requirements changed, and after some discussion we decided to bring Hive in as well, because one batch of data needs to land in Hive.

After an evening of trial and error plus some research online, the integration is finally working.

The end result: MySQL and Hive can each be used independently without affecting the other, and MyBatis keeps working as before, which is presumably what anyone wants from this kind of integration.

There is a fair amount of code, so let's go through it step by step.

Step 1: add the following dependencies to your pom.xml (skip any you already have). The hive-jdbc version must match the version of your Hive server.

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>
        <!-- connection pool -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>1.0.29</version>
        </dependency>
        <!-- Spring bean management -->
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
        </dependency>

        <!-- hadoop dependencies -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.6.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.6.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>

        <!-- hive dependency -->
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>1.2.1</version>
            <exclusions>
                <exclusion>
                    <groupId>org.eclipse.jetty.aggregate</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
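Depending on your Spring Boot version, hive-jdbc 1.2.1 can also pull in an old servlet-api and the log4j SLF4J binding, which clash with the embedded Tomcat and Spring Boot's logging. If startup fails with servlet or logging conflicts, extra exclusions inside the hive-jdbc dependency are worth trying. The exact list below is an assumption; verify against your own tree with mvn dependency:tree:

```xml
<exclusion>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
</exclusion>
```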

Step 2: the yml configuration. The structure below is what the properties classes in step 3 bind to; the point is to keep the mysql and hive settings separate. None of your other configuration needs to change.

spring:
  application:
    name: zhongtai
  datasource:
    mysqlMain: # primary MySQL datasource
      type: com.alibaba.druid.pool.DruidDataSource
      password: 123456
      driver-class-name: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://连接地址:3306/库名?useUnicode=true&characterEncoding=utf8&useSSL=false&allowMultiQueries=true
      username: root
    hive: # Hive datasource
      url: jdbc:hive2://连接地址:10000/库名
      type: com.alibaba.druid.pool.DruidDataSource
      username: root
      password: ******
      driver-class-name: org.apache.hive.jdbc.HiveDriver
    common-config: # shared pool settings, applied to every datasource
      initialSize: 1
      minIdle: 1
      maxIdle: 5
      maxActive: 50
      maxWait: 10000
      timeBetweenEvictionRunsMillis: 10000
      minEvictableIdleTimeMillis: 300000
      validationQuery: select 'x'
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      poolPreparedStatements: true
      maxOpenPreparedStatements: 20
      filters: stat

Step 3: create the classes that manage this configuration. Where you put them doesn't matter, but keeping them together in one package is tidier.

1. HiveDruidConfig.java — Hive datasource configuration class

package cn.**.**.system.config.druid.hive;
import javax.sql.DataSource;

import cn.cnic.zhongtai.system.config.DataSourceCommonProperties;
import cn.cnic.zhongtai.system.config.DataSourceProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.druid.pool.DruidDataSource;

import java.sql.SQLException;

/**
 * Hive datasource configuration class.
 * Created by wdd on 2019/11/5
 **/
@Configuration
@EnableConfigurationProperties({DataSourceProperties.class, DataSourceCommonProperties.class}) // register the properties classes as beans so @ConfigurationProperties takes effect
public class HiveDruidConfig {

    private static Logger logger = LoggerFactory.getLogger(HiveDruidConfig.class);

    @Autowired
    private DataSourceProperties dataSourceProperties;

    @Autowired
    private DataSourceCommonProperties dataSourceCommonProperties;

    @Bean("hiveDruidDataSource") // create the bean instance
    @Qualifier("hiveDruidDataSource") // qualifier for injection
    public DataSource dataSource(){
        DruidDataSource datasource = new DruidDataSource();

        // datasource-specific properties
        datasource.setUrl(dataSourceProperties.getHive().get("url"));
        datasource.setUsername(dataSourceProperties.getHive().get("username"));
        datasource.setPassword(dataSourceProperties.getHive().get("password"));
        datasource.setDriverClassName(dataSourceProperties.getHive().get("driver-class-name"));

        // shared pool properties
        datasource.setInitialSize(dataSourceCommonProperties.getInitialSize());
        datasource.setMinIdle(dataSourceCommonProperties.getMinIdle());
        datasource.setMaxActive(dataSourceCommonProperties.getMaxActive());
        datasource.setMaxWait(dataSourceCommonProperties.getMaxWait());
        datasource.setTimeBetweenEvictionRunsMillis(dataSourceCommonProperties.getTimeBetweenEvictionRunsMillis());
        datasource.setMinEvictableIdleTimeMillis(dataSourceCommonProperties.getMinEvictableIdleTimeMillis());
        datasource.setValidationQuery(dataSourceCommonProperties.getValidationQuery());
        datasource.setTestWhileIdle(dataSourceCommonProperties.isTestWhileIdle());
        datasource.setTestOnBorrow(dataSourceCommonProperties.isTestOnBorrow());
        datasource.setTestOnReturn(dataSourceCommonProperties.isTestOnReturn());
        datasource.setPoolPreparedStatements(dataSourceCommonProperties.isPoolPreparedStatements());
        try {
            datasource.setFilters(dataSourceCommonProperties.getFilters());
        } catch (SQLException e) {
            logger.error("Druid configuration initialization filter error.", e);
        }
        return datasource;
    }

}

2. MysqlMainDruidConfig.java — MySQL primary datasource configuration class; MySQL stays the default datasource

package cn.**.**.system.config.druid.mysql;

import java.sql.SQLException;

import javax.sql.DataSource;

import cn.cnic.zhongtai.system.config.DataSourceCommonProperties;
import cn.cnic.zhongtai.system.config.DataSourceProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import com.alibaba.druid.pool.DruidDataSource;

/**
 * MySQL primary datasource configuration; MySQL connections use the default (primary) datasource.
 * @author wdd
 * @Date 2019-11-04
 */
@Configuration
@EnableConfigurationProperties({DataSourceProperties.class, DataSourceCommonProperties.class}) // register the properties classes as beans so @ConfigurationProperties takes effect
public class MysqlMainDruidConfig {

    private static Logger logger = LoggerFactory.getLogger(MysqlMainDruidConfig.class);

    @Autowired
    private DataSourceProperties dataSourceProperties;

    @Autowired
    private DataSourceCommonProperties dataSourceCommonProperties;

    @Primary // marks this as the primary datasource; only one bean may be @Primary, and MyBatis uses it by default
    @Bean("mysqlDruidDataSource") // create the bean instance
    @Qualifier("mysqlDruidDataSource") // qualifier for injection
    public DataSource dataSource(){
        DruidDataSource datasource = new DruidDataSource();

        // datasource-specific properties
        datasource.setUrl(dataSourceProperties.getMysqlMain().get("url"));
        datasource.setUsername(dataSourceProperties.getMysqlMain().get("username"));
        datasource.setPassword(dataSourceProperties.getMysqlMain().get("password"));
        datasource.setDriverClassName(dataSourceProperties.getMysqlMain().get("driver-class-name"));

        // shared pool properties
        datasource.setInitialSize(dataSourceCommonProperties.getInitialSize());
        datasource.setMinIdle(dataSourceCommonProperties.getMinIdle());
        datasource.setMaxActive(dataSourceCommonProperties.getMaxActive());
        datasource.setMaxWait(dataSourceCommonProperties.getMaxWait());
        datasource.setTimeBetweenEvictionRunsMillis(dataSourceCommonProperties.getTimeBetweenEvictionRunsMillis());
        datasource.setMinEvictableIdleTimeMillis(dataSourceCommonProperties.getMinEvictableIdleTimeMillis());
        datasource.setValidationQuery(dataSourceCommonProperties.getValidationQuery());
        datasource.setTestWhileIdle(dataSourceCommonProperties.isTestWhileIdle());
        datasource.setTestOnBorrow(dataSourceCommonProperties.isTestOnBorrow());
        datasource.setTestOnReturn(dataSourceCommonProperties.isTestOnReturn());
        datasource.setPoolPreparedStatements(dataSourceCommonProperties.isPoolPreparedStatements());
        try {
            datasource.setFilters(dataSourceCommonProperties.getFilters());
        } catch (SQLException e) {
            logger.error("Druid configuration initialization filter error.", e);
        }
        return datasource;
    }

}

3. DataSourceCommonProperties.java — shared connection-pool properties, applied to every datasource

package cn.**.**.system.config;

import org.springframework.boot.context.properties.ConfigurationProperties;

/**
 * Shared connection-pool properties, applied to every datasource.
 * @author wdd
 * @Date 2019-11-04
 */
@ConfigurationProperties(prefix = DataSourceCommonProperties.DS, ignoreUnknownFields = false)
public class DataSourceCommonProperties {
    final static String DS = "spring.datasource.common-config";

    private int initialSize = 10;
    private int minIdle;
    private int maxIdle;
    private int maxActive;
    private int maxWait;
    private int timeBetweenEvictionRunsMillis;
    private int minEvictableIdleTimeMillis;
    private String validationQuery;
    private boolean testWhileIdle;
    private boolean testOnBorrow;
    private boolean testOnReturn;
    private boolean poolPreparedStatements;
    private int maxOpenPreparedStatements;
    private String filters;

    private String mapperLocations;
    private String typeAliasPackage;

    // getters and setters for every field go here; generate them in your IDE
}
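Spring's relaxed binding maps keys such as initial-size onto plain JavaBean accessors, so the class only needs standard getters and setters (Lombok's @Data is an alternative if you already use it). A minimal sketch of the pattern for two of the fields — the class name here is illustrative, and the remaining fields follow the same shape:

```java
// Illustrative stand-in for DataSourceCommonProperties showing the accessor pattern.
public class AccessorSketch {
    private int initialSize = 10;
    private String filters;

    public int getInitialSize() { return initialSize; }
    public void setInitialSize(int initialSize) { this.initialSize = initialSize; }

    public String getFilters() { return filters; }
    public void setFilters(String filters) { this.filters = filters; }
}
```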

4. DataSourceProperties.java — binds the spring.datasource section of the configuration file

package cn.**.**.system.config;
import java.util.Map;

import org.springframework.boot.context.properties.ConfigurationProperties;

/**
 * Binds the datasource sections of the configuration file.
 * @author wdd
 * @Date 2019-11-04
 */
@ConfigurationProperties(prefix = DataSourceProperties.DS, ignoreUnknownFields = false)
public class DataSourceProperties {
    final static String DS = "spring.datasource";

    private Map<String,String> mysqlMain;

    private Map<String,String> hive;

    private Map<String,String> commonConfig;


    // getters and setters; generate them in your IDE
}

5. HiveJdbcBaseDaoImpl.java — exposes a JdbcTemplate backed by the Hive datasource

package cn.**.**.system.config.jdbcConfig;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

/**
 * Exposes a JdbcTemplate backed by the Hive datasource.
 * @author wdd
 * @Date 2019-11-04
 */
@Repository
public class HiveJdbcBaseDaoImpl {

    private JdbcTemplate jdbcTemplate;

    public JdbcTemplate getJdbcTemplate() {
        return jdbcTemplate;
    }

    @Autowired
    public void setJdbcTemplate(@Qualifier("hiveDruidDataSource") DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

}

6. MysqlMainJdbcBaseDaoImpl.java — exposes a JdbcTemplate backed by the MySQL datasource

package cn.**.**.system.config.jdbcConfig;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

/**
 * Exposes a JdbcTemplate backed by the MySQL datasource.
 * @author wdd
 * 2019-11-04
 */
@Repository
public class MysqlMainJdbcBaseDaoImpl {

    private JdbcTemplate jdbcTemplate;

    public JdbcTemplate getJdbcTemplate() {
        return jdbcTemplate;
    }

    @Autowired
    public void setJdbcTemplate(@Qualifier("mysqlDruidDataSource") DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

}

With that, everything is in place. Let's verify it with a quick test class.

Usage is simple: inject the DAO and go.

import cn.cnic.zhongtai.ZhongtaiApplication;
import cn.cnic.zhongtai.system.config.jdbcConfig.HiveJdbcBaseDaoImpl;
import cn.cnic.zhongtai.system.config.jdbcConfig.MysqlMainJdbcBaseDaoImpl;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;


/**
 * Hive and MySQL smoke test
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes={ZhongtaiApplication.class})
public class HiveAndMysqlTest {

   
    @Autowired
    private HiveJdbcBaseDaoImpl hiveJdbcBaseDaoImpl;

    @Autowired
    private MysqlMainJdbcBaseDaoImpl mysqlMainJdbcBaseDaoImpl;

    @Test
    public void testHiveAndMysql() {
        String sql = "SELECT name FROM test.people LIMIT 1";
        String info = hiveJdbcBaseDaoImpl.getJdbcTemplate().queryForObject(sql, String.class);
        System.out.println("Data read from Hive: " + info);

        // count the rows of the person table in MySQL
        String sql1 = "SELECT count(*) FROM person";
        int count = mysqlMainJdbcBaseDaoImpl.getJdbcTemplate().queryForObject(sql1, Integer.class);
        System.out.println("Row count read from MySQL: " + count);
    }
 


}
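Since the original motivation was landing a batch of data in Hive, here is a hedged sketch of how writes could go through the same DAO. The table name people and its single string column are assumptions for illustration, and multi-row INSERT ... VALUES needs Hive 0.14 or later. The helper only builds the SQL string; you would then pass it to hiveJdbcBaseDaoImpl.getJdbcTemplate().execute(...):

```java
import java.util.List;

public class HiveBatchInsertSketch {
    // Builds one multi-row INSERT statement for Hive.
    // Assumed table layout: a single string column (illustrative only).
    public static String buildInsert(String table, List<String> values) {
        StringBuilder sb = new StringBuilder("INSERT INTO TABLE ").append(table).append(" VALUES ");
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) sb.append(", ");
            // escape single quotes so each literal stays valid
            sb.append("('").append(values.get(i).replace("'", "\\'")).append("')");
        }
        return sb.toString();
    }
}
```

For example: hiveJdbcBaseDaoImpl.getJdbcTemplate().execute(HiveBatchInsertSketch.buildInsert("people", names)); — one statement per batch keeps the number of HiveServer2 round trips down, since Hive handles few large inserts far better than many small ones.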

Test result:

And that's it: a Spring Boot multi-datasource setup covering both Hive and MySQL. Hopefully this is detailed enough.

If something doesn't work, double-check each configuration step. Questions are welcome in the comments, and if this helped you, consider leaving a like before you go.
