A Spring Boot project that connects to two databases at the same time — for example one PostgreSQL database and one Oracle database. (The database products do not matter: two Oracle instances, or two different products, work the same way; you only change the corresponding driver-class-name, jdbc-url, and so on. Note: whichever databases you connect to, you must add the matching JDBC driver dependencies.)
This walkthrough uses a master/slave pair of MySQL databases as the example.
1. Add the dependencies to pom.xml (the important one here is the Druid connection pool). The other dependencies are the same as in any project: add whatever you need.
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.0</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
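The configuration classes below also import `org.mybatis.spring` classes, so the MyBatis Spring Boot starter must be on the classpath as well. A typical entry (the version shown is an assumption — pick one compatible with your Spring Boot version):

```xml
<dependency>
    <groupId>org.mybatis.spring.boot</groupId>
    <artifactId>mybatis-spring-boot-starter</artifactId>
    <version>2.1.4</version>
</dependency>
```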
2. Edit the application.properties configuration file. (I am running both databases locally, so the host is the same and only the schema differs; in real deployments the master and slave would be two MySQL servers on two cloud hosts, with reads and writes split between them.)
# Server port
server.port=8090
# Master database - point this at your first MySQL server's address
spring.datasource.masterdb.type=com.alibaba.druid.pool.DruidDataSource
# Note: DruidDataSource exposes a `url` property rather than `jdbc-url`; if the URL fails to bind, rename this key to spring.datasource.masterdb.url
spring.datasource.masterdb.jdbc-url=jdbc:mysql://localhost:3306/shiro?characterEncoding=utf8&useSSL=true&serverTimezone=Asia/Shanghai
spring.datasource.masterdb.username=root
spring.datasource.masterdb.password=123456
spring.datasource.masterdb.driver-class-name=com.mysql.cj.jdbc.Driver
# Slave database - point this at your second MySQL server's address
spring.datasource.devdb.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.devdb.jdbc-url=jdbc:mysql://localhost:3306/vuedemo?characterEncoding=utf8&useSSL=true&serverTimezone=Asia/Shanghai
spring.datasource.devdb.username=root
spring.datasource.devdb.password=123456
spring.datasource.devdb.driver-class-name=com.mysql.cj.jdbc.Driver
####### Configure anything else as needed #######
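Druid's pool parameters can be tuned per data source through the same prefixes, since the properties are bound onto each DruidDataSource bean. A sketch of commonly used settings for the master (the values are assumptions — size them for your workload, and repeat for the devdb prefix):

```properties
# Connections created at startup
spring.datasource.masterdb.initial-size=5
# Upper bound on active connections
spring.datasource.masterdb.max-active=20
# Idle connections kept ready
spring.datasource.masterdb.min-idle=5
# Max wait for a connection, in milliseconds
spring.datasource.masterdb.max-wait=60000
```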
3. Create two configuration classes. (Note: follow this pattern exactly.)
3.1 DruidMasterConfig.java configures the master database; the project connects to it by default on startup.
package com.lq.flyrabbitmail.config;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import javax.sql.DataSource;
/**
 * @Author LQ
 * @Date 2021/1/5 20:40
 * @Version V1.0
 * Connection pool configuration - master database
 **/
@Configuration
@MapperScan(basePackages = "com.lq.flyrabbitmail.mapper.master", sqlSessionTemplateRef = "MasterSqlSessionTemplate")
public class DruidMasterConfig {

    // Master data source; @Primary makes it the default wherever no qualifier is given
    @Bean(name = "MasterDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.masterdb")
    @Primary
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "MasterSqlSessionFactory")
    @Primary
    public SqlSessionFactory masterSqlSessionFactory(@Qualifier("MasterDataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSource);
        // Loads every XML under resources/mapper; narrow this to classpath:mapper/master/*.xml if the two databases' mapper XMLs should stay separate
        bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mapper/*.xml"));
        return bean.getObject();
    }

    @Bean(name = "MasterTransactionManager")
    @Primary
    public DataSourceTransactionManager masterTransactionManager(@Qualifier("MasterDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "MasterSqlSessionTemplate")
    @Primary
    public SqlSessionTemplate masterSqlSessionTemplate(@Qualifier("MasterSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
3.2 DruidDevdbConfig.java configures the second database as the slave.
package com.lq.flyrabbitmail.config;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import javax.sql.DataSource;
/**
 * @Author LQ
 * @Date 2021/1/5 20:40
 * @Version V1.0
 * Connection pool configuration - slave database
 **/
@Configuration
// sqlSessionTemplateRef must point at the slave template; if it referenced the master template, mappers in the slaver package would run against the master database
@MapperScan(basePackages = "com.lq.flyrabbitmail.mapper.slaver", sqlSessionTemplateRef = "SalverSqlSessionTemplate")
public class DruidDevdbConfig {

    // Slave data source; no @Primary here, so it is only injected via its qualifier
    @Bean(name = "SalverDataSource")
    @ConfigurationProperties(prefix = "spring.datasource.devdb")
    public DataSource salverDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "SalverSqlSessionFactory")
    public SqlSessionFactory salverSqlSessionFactory(@Qualifier("SalverDataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSource);
        // Loads every XML under resources/mapper; narrow this to classpath:mapper/slaver/*.xml if the two databases' mapper XMLs should stay separate
        bean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources("classpath:mapper/*.xml"));
        return bean.getObject();
    }

    @Bean(name = "SalverTransactionManager")
    public DataSourceTransactionManager salverTransactionManager(@Qualifier("SalverDataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "SalverSqlSessionTemplate")
    public SqlSessionTemplate salverSqlSessionTemplate(@Qualifier("SalverSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
4. Under the mapper package, create two sub-packages, master and slaver, holding the DAO interfaces for each database (see the project screenshot above for the exact locations).
/* UserMapper is shown here; RoleMapper follows the same pattern */
@Mapper
public interface UserMapper {
    // Lives in the master package, so it runs against the master database
    @Select("SELECT id,DATEDIFF(endtime,starttime) FROM task WHERE STATUS = '进行'")
    List<String> selct();
}
Note: operations on the master database need no explicit transaction manager, since its beans are marked @Primary. Operations on the slave database must be annotated with @Transactional(value = "SalverTransactionManager") so that the slave's transaction manager is used.
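A minimal sketch of a slave-side service method carrying that annotation (the SlaveService and SlaveMapper names and the update() method are hypothetical):

@Service
public class SlaveService {

    @Autowired
    private SlaveMapper slaveMapper; // hypothetical mapper interface in the slaver package

    // Binds this method's transaction to the slave's transaction manager;
    // without the value attribute, the @Primary (master) manager would be chosen
    @Transactional(value = "SalverTransactionManager")
    public void updateOnSlave() {
        slaveMapper.update(); // hypothetical write statement
    }
}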
5. Create a mapper folder under resources and put each DAO's corresponding XML file in it (see the project screenshot; this folder must exist, otherwise startup will fail with an error).
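For interfaces whose SQL is not written with annotations, these XML files supply the statements. A minimal skeleton (the namespace and the query are hypothetical examples — the namespace must match a mapper interface's fully qualified name, and each statement id must match one of its methods):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.lq.flyrabbitmail.mapper.master.RoleMapper">
    <!-- hypothetical query; id must match a method on RoleMapper -->
    <select id="findAll" resultType="java.lang.String">
        SELECT name FROM role
    </select>
</mapper>
```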
6. Everything is ready to use. Here I test with JUnit:
@SpringBootTest
class FlyrabbitmailApplicationTests {

    @Autowired
    private UserMapper userMapper;

    @Test
    void contextLoads() {
        List<String> selct = userMapper.selct();
        System.out.println(selct);
    }
}
With a master/slave pair of databases you can split reads from writes, increasing database throughput and concurrency.