I. Today we will integrate Redis into a framework built with Spring MVC at a previous company. The Spring version is 4.0.9, Redis runs on Linux (CentOS 6.5), and MySQL is used as the database.
We chose Redis for several reasons:
1. Excellent read/write performance
I tested this with redis-benchmark, the performance testing tool that ships with Redis. Since this is a learning machine with modest hardware, the numbers are only moderate.
Test command: redis-benchmark -h 192.168.100.131 -p 6379 -c 100 -n 100000
This sends 100,000 requests over 100 concurrent connections to the Redis server at host 192.168.100.131, port 6379.
====== SET ======
100000 requests completed in 5.15 seconds
19432.57 requests per second
====== GET ======
100000 requests completed in 5.51 seconds
18162.01 requests per second
That works out to roughly 20,000 SET or GET operations per second.
2. Data persistence, with both AOF and RDB persistence modes.
3. Master-slave replication: the master automatically synchronizes data to its slaves, enabling read/write splitting.
4. Rich data structures: string, list, set, zset (sorted set), and hash; a short Jedis sketch of these types follows this list.
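To make these five types concrete, here is a minimal Jedis sketch that writes one value of each type. It is only an illustration, not part of the project code: it assumes the test server above (192.168.100.131:6379) and uses made-up key names.
import redis.clients.jedis.Jedis;
public class DataTypeDemo {
    public static void main(String[] args) {
        // Sketch only: direct Jedis connection to the test server used in this article.
        Jedis jedis = new Jedis("192.168.100.131", 6379);
        try {
            jedis.set("demo:string", "hello");          // string
            jedis.rpush("demo:list", "a", "b", "c");    // list
            jedis.sadd("demo:set", "x", "y", "z");      // set
            jedis.zadd("demo:zset", 1.0, "first");      // zset (sorted set)
            jedis.hset("demo:hash", "field", "value");  // hash
            System.out.println(jedis.lrange("demo:list", 0, -1));
        } finally {
            jedis.close();
        }
    }
}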
This project is a works-voting project, a sub-project of a larger internet project. Voting on works has one notable characteristic: the works table is queried constantly while its data hardly ever changes, so there is no need to query the database every time a user opens the page; reading directly from Redis is enough. Below I use this project to walk through integrating Spring and Redis.
1. Add the required dependencies to the project's pom.xml
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-pool2</artifactId>
<version>2.4.2</version>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.8.0</version>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
<version>1.6.6.RELEASE</version>
</dependency>
2. To make the Redis configuration easier to manage, extract the settings into a separate redis.properties file. Note that the XML in step 3 also references ${redis.password}, so define it here as well (leave it empty if your Redis instance has no password).
redis.hostName=192.168.100.131
redis.port=6379
redis.timeout=15000
redis.usePool=true
redis.password=
redis.maxIdle=6
redis.minEvictableIdleTimeMillis=300000
redis.numTestsPerEvictionRun=3
redis.timeBetweenEvictionRunsMillis=60000
3. Add a redis-context.xml file with the following configuration
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd" default-autowire="byName">
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
<property name="maxIdle" value="${redis.maxIdle}"></property>
<property name="minEvictableIdleTimeMillis" value="${redis.minEvictableIdleTimeMillis}"></property>
<property name="numTestsPerEvictionRun" value="${redis.numTestsPerEvictionRun}"></property>
<property name="timeBetweenEvictionRunsMillis" value="${redis.timeBetweenEvictionRunsMillis}"></property>
</bean>
<bean id="jedisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" destroy-method="destroy">
<property name="poolConfig" ref="jedisPoolConfig"></property>
<property name="hostName" value="${redis.hostName}"></property>
<property name="port" value="${redis.port}"></property>
<property name="timeout" value="${redis.timeout}"></property>
<property name="usePool" value="${redis.usePool}"></property>
<property name="password" value="${redis.password}"></property>
</bean>
<bean id="jedisTemplate" class="org.springframework.data.redis.core.RedisTemplate">
<property name="connectionFactory" ref="jedisConnectionFactory"></property>
<property name="keySerializer">
<bean class="org.springframework.data.redis.serializer.StringRedisSerializer"/>
</property>
<property name="valueSerializer">
<bean class="org.springframework.data.redis.serializer.JdkSerializationRedisSerializer"/>
</property>
</bean>
</beans>
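One detail worth noting about the serializer choice: JdkSerializationRedisSerializer uses standard Java serialization for values, so every entity that gets cached must implement java.io.Serializable. Below is a minimal sketch of what the VoteProject entity could look like; the real class presumably has more fields, and only the id and content properties used in this article are shown.
import java.io.Serializable;
// Sketch only: the actual VoteProject entity is not shown in the article.
public class VoteProject implements Serializable {
    private static final long serialVersionUID = 1L;
    private Integer id;
    private String content;
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
}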
4. In the main Spring configuration file, add the code that loads redis.properties and imports redis-context.xml
<bean
class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<list>
<value>classpath:/config/redis.properties</value>
</list>
</property>
</bean>
<import resource="classpath*:modules/redis-context.xml" />
With that, the environment is fully configured. Next we add some code to test it.
5. Add a listener whose job is to load the (largely static) data from the database into Redis when the project starts, so that subsequent reads come straight from Redis without hitting the database.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Service;
@Service
public class StartAddCacheListener implements ApplicationListener<ContextRefreshedEvent> {
// logger
private final Logger log = Logger.getLogger(StartAddCacheListener.class);
@Autowired
private RedisCacheUtil<Object> redisCache;
@Autowired
private VoteService voteService;
@Override
public void onApplicationEvent(ContextRefreshedEvent event) {
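// Only load the cache once, when the root application context refreshes; in a Spring MVC
// application the DispatcherServlet's child context publishes its own ContextRefreshedEvent too.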
if (event.getApplicationContext().getDisplayName().equals("Root WebApplicationContext")) {
System.out.println("-------------缓存数据-------------");
List<VoteProject> voteProjectList = voteService.getAllProject();
Map<Integer, VoteProject> voteProjectMap = new HashMap<Integer, VoteProject>();
for (VoteProject voteProject : voteProjectList) {
voteProjectMap.put(voteProject.getId(), voteProject);
}
redisCache.setCacheIntegerMap("voteProjectMap", voteProjectMap);
}
}
}
6. Register the listener in the Spring configuration file (this explicit bean definition is only needed if the class is not already picked up by component scanning via its @Service annotation)
<bean id="startAddCacheListener" class="com.eshine.vote.redis.listener.StartAddCacheListener"></bean>
7. Create a cache utility class
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.redis.core.BoundSetOperations;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.ListOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;
import org.springframework.stereotype.Service;
@Service
public class RedisCacheUtil<T> {
@Autowired
@Qualifier("jedisTemplate")
public RedisTemplate redisTemplate;
/**
* Cache a simple object (Integer, String, entity class, etc.).
*
* @param key cache key
* @param value value to cache
* @return the value operations handle
*/
public <T> ValueOperations<String, T> setCacheObject(String key, T value) {
ValueOperations<String, T> operation = redisTemplate.opsForValue();
operation.set(key, value);
return operation;
}
/**
* Get a cached simple object.
*
* @param key cache key
* @return the cached value for the key
*/
public <T> T getCacheObject(String key) {
ValueOperations<String, T> operation = redisTemplate.opsForValue();
return operation.get(key);
}
/**
* Cache a List by pushing each element onto a Redis list.
*
* @param key cache key
* @param dataList list to cache
* @return the list operations handle
*/
public <T> ListOperations<String, T> setCacheList(String key, List<T> dataList) {
ListOperations listOperation = redisTemplate.opsForList();
if (null != dataList) {
int size = dataList.size();
for (int i = 0; i < size; i++) {
listOperation.rightPush(key, dataList.get(i));
}
}
return listOperation;
}
/**
* Get a cached list.
*
* @param key cache key
* @return the cached list for the key
*/
public <T> List<T> getCacheList(String key) {
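// Note: leftPop removes each element from the Redis list as it is read,
// so this is a destructive read and the cached list is emptied afterwards.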
List<T> dataList = new ArrayList<T>();
ListOperations<String, T> listOperation = redisTemplate.opsForList();
Long size = listOperation.size(key);
for (int i = 0; i < size; i++) {
dataList.add((T) listOperation.leftPop(key));
}
return dataList;
}
/**
* Cache a Set as a Redis set.
*
* @param key cache key
* @param dataSet set to cache
* @return the bound set operations handle
*/
public <T> BoundSetOperations<String, T> setCacheSet(String key, Set<T> dataSet) {
BoundSetOperations<String, T> setOperation = redisTemplate.boundSetOps(key);
Iterator<T> it = dataSet.iterator();
while (it.hasNext()) {
setOperation.add(it.next());
}
return setOperation;
}
/**
* Get a cached set.
*
* @param key cache key
* @return the cached set for the key
*/
public Set<T> getCacheSet(String key) {
Set<T> dataSet = new HashSet<T>();
BoundSetOperations<String, T> operation = redisTemplate.boundSetOps(key);
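// Note: pop removes each member from the Redis set, so reading the cache also empties it.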
Long size = operation.size();
for (int i = 0; i < size; i++) {
dataSet.add(operation.pop());
}
return dataSet;
}
/**
* Cache a Map with String keys as a Redis hash.
*
* @param key cache key
* @param dataMap map to cache
* @return the hash operations handle
*/
public <T> HashOperations<String, String, T> setCacheMap(String key, Map<String, T> dataMap) {
HashOperations hashOperations = redisTemplate.opsForHash();
if (null != dataMap) {
for (Map.Entry<String, T> entry : dataMap.entrySet()) {
hashOperations.put(key, entry.getKey(), entry.getValue());
}
}
return hashOperations;
}
/**
* Get a cached Map with String keys.
*
* @param key cache key
* @return the cached map for the key
*/
public <T> Map<String, T> getCacheMap(String key) {
Map<String, T> map = redisTemplate.opsForHash().entries(key);
return map;
}
/**
* Cache a Map with Integer keys as a Redis hash.
*
* @param key cache key
* @param dataMap map to cache
* @return the hash operations handle
*/
public <T> HashOperations<String, Integer, T> setCacheIntegerMap(String key, Map<Integer, T> dataMap) {
HashOperations hashOperations = redisTemplate.opsForHash();
if (null != dataMap) {
for (Map.Entry<Integer, T> entry : dataMap.entrySet()) {
hashOperations.put(key, entry.getKey(), entry.getValue());
}
}
return hashOperations;
}
/**
* Get a cached Map with Integer keys.
*
* @param key cache key
* @return the cached map for the key
*/
public <T> Map<Integer, T> getCacheIntegerMap(String key) {
Map<Integer, T> map = redisTemplate.opsForHash().entries(key);
return map;
}
}
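For completeness, here is a sketch of how another service could read from this cache with a database fallback. It is only an illustration: getProjectById is a hypothetical VoteService method that is not part of the article's code, and the class assumes VoteProject, VoteService, and RedisCacheUtil are visible (same package, as in the listener above).
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
@Service
public class VoteProjectQueryService {
    @Autowired
    private RedisCacheUtil<Object> redisCache;
    @Autowired
    private VoteService voteService;
    public VoteProject getProject(Integer id) {
        // Read the hash that the startup listener cached under "voteProjectMap".
        Map<Integer, VoteProject> cached = redisCache.getCacheIntegerMap("voteProjectMap");
        if (cached != null && cached.containsKey(id)) {
            return cached.get(id);
        }
        // Cache miss: fall back to the database (getProjectById is a hypothetical method name).
        return voteService.getProjectById(id);
    }
}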
8. Test class
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;
@Controller
@RequestMapping("/RedisTest")
public class RedisTest {
@Autowired
private RedisCacheUtil<Object> redisCache;
@RequestMapping("/testGetCache")
public ModelAndView testGetCache(HttpServletRequest req, HttpServletResponse rsp) {
ModelAndView mav = new ModelAndView("html/website/index");
Map<Integer, VoteProject> voteProjectMap = redisCache.getCacheIntegerMap("voteProjectMap");
for (int key : voteProjectMap.keySet()) {
System.out.println("key = " + key + ",value=" + voteProjectMap.get(key).getContent());
}
return mav;
}
}
With that, all of the code and configuration is in place. My database currently holds 17 rows, as shown in the screenshot below.
When I start the project, the log shows that 17 rows are read and cached.
9. As mentioned above, there is a test class; trigger it and check the data printed to the console.
My database here holds only a dozen or so rows. If the table grew to millions or tens of millions of rows, querying all of them from the database on every restart would clearly be unreasonable, so this integration is only a starting point. Later I plan to load a large amount of data into the table and work on persistence, and, if time and opportunity allow, try Redis master-slave replication and read/write splitting in this project.