The notes below use Redis 2 as the example and are a collection of hands-on experience.
They are divided into four parts:
Installing and using Redis on Windows
Installing Redis on Linux in detail
Reading the Redis configuration file
Operating Redis with Jedis
1. Installing and using Redis on Windows
The Windows build downloaded here is redis-2.0.2, extracted to the D: drive:
D:\redis-2.0.2
![](https://img-my.csdn.net/uploads/201204/28/1335584354_6006.jpg)
redis-server.exe: the server program
redis-check-dump.exe: checks the local database (dump) file
redis-check-aof.exe: checks the append-only (update) log
Start the Redis service (pass a conf file to use a specific configuration; without one the defaults apply):
D:\redis-2.0.2>redis-server.exe redis.conf
With the server now running, open another window and start a client:
D:\redis-2.0.2>redis-cli.exe -h 202.117.16.133 -p 6379
Then you can start playing:
--------------------------------------------------------------------------------------------------------------------------------
Redis provides clients for many languages, including Java, C++, and Python.
The Java client recommended on the Redis site is Jedis. After downloading Jedis and importing it into a Java project, errors appeared at first because the org.apache.commons dependency was missing;
once that package was downloaded and imported as well, Jedis worked without errors.
Source: http://hi.baidu.com/strongpxq/blog/item/d8574803f25679114afb51a5.html
2. Installing Redis on Linux in detail
1: Download Redis
Download from http://code.google.com/p/redis/downloads/list
2: Install Redis
After downloading, extract it with tar zxvf redis-2.0.4.tar.gz to any directory, e.g. /usr/local/redis-2.0.4
Then enter the redis directory and build:
cd /usr/local/redis-2.0.4
make
Set the memory allocation (overcommit) policy (optional; tune it to your server):
/proc/sys/vm/overcommit_memory
Possible values: 0, 1, 2.
0: the kernel checks whether enough free memory is available for the request; if so, the allocation succeeds, otherwise it fails and an error is returned to the application.
1: the kernel allows allocating all physical memory, regardless of the current memory state.
2: the kernel allows allocating more memory than the total of physical memory and swap space.
Note that when Redis dumps its data it forks a child process, and in theory the child occupies as much memory as the parent: if the parent uses 8 GB, another 8 GB must in principle be allocatable for the child. If memory cannot cover this, the Redis server often goes down, or IO load spikes and throughput drops. So the better allocation policy here is 1 (let the kernel allocate all physical memory regardless of the current state).
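To actually set policy 1, a minimal sketch (assuming a RHEL/CentOS-era host like the one in the transcripts below, and root privileges) is to persist the value in /etc/sysctl.conf:

```
# /etc/sysctl.conf -- persist the overcommit policy across reboots
vm.overcommit_memory = 1
```

It can then be applied immediately with sysctl -p (or, for the running kernel only, sysctl vm.overcommit_memory=1).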
Open the Redis port by editing the firewall configuration file:
vi /etc/sysconfig/iptables
Add the port rule:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT
Reload the rules:
service iptables restart
3: Start the Redis service
[root@Architect redis-2.0.4]# pwd
/usr/local/redis-2.0.4
[root@Architect redis-2.0.4]# redis-server redis.conf
Check the process list to confirm Redis has started:
[root@Architect redis-2.0.4]# ps -ef | grep redis
root 401 29222 0 18:06 pts/3 00:00:00 grep redis
root 29258 1 0 16:23 ? 00:00:00 redis-server redis.conf
If starting the Redis service fails here, it is usually because redis.conf has a problem; check it, or overwrite it with a known-good configuration file, to save yourself trouble. It is also recommended to edit redis.conf so that the Redis process runs as a background daemon:
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
4: Test Redis
[root@Architect redis-2.0.4]# redis-cli
redis> set name songbin
OK
redis> get name
"songbin"
5: Stop the Redis service
redis-cli shutdown
When the Redis service shuts down, the cached data is automatically dumped to disk, at the location set by the dbfilename dump.rdb entry in redis.conf.
To force a save of the data to disk, use:
redis-cli save, or redis-cli -p 6380 save (to specify a port)
Adapted from: http://www.oschina.net/question/12_18065
3. Reading the Redis configuration file
Reposted from http://www.cnblogs.com/daizhj/articles/1956681.html, with notes added on some of the configuration options.
I read through the configuration items from top to bottom to get a rough picture; some advanced options are not needed yet, so they are parked here to revisit later.
Configuration parameter notes:
1. Redis does not run as a daemon by default; change that with this option, using yes to enable the daemon:
daemonize no
2. When Redis runs as a daemon, it writes its pid to /var/run/redis.pid by default; use pidfile to change the location:
pidfile /var/run/redis.pid
3. The port Redis listens on, 6379 by default. The author explained in a blog post why 6379 was chosen: it is what MERZ spells on a phone keypad, MERZ being taken from the name of the Italian entertainer Alessia Merz:
port 6379
4. The host address to bind to:
bind 127.0.0.1
5. How long a client may stay idle before the connection is closed, in seconds; 0 disables the timeout:
timeout 300
6. The log level; Redis supports four levels: debug, verbose, notice, and warning, with verbose as the default:
loglevel verbose
7. Where to log; the default is standard output. Note that if Redis runs as a daemon while logging is set to standard output, the logs are sent to /dev/null:
logfile stdout
8. The number of databases; the default database is 0, and a connection can switch databases with SELECT <dbid>:
databases 16
9. How many update operations within how many seconds trigger a sync of the data to the data file; multiple conditions can be combined:
save <seconds> <changes>
The default Redis configuration file provides three conditions:
save 900 1
save 300 10
save 60 10000
meaning 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds.
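The trigger logic of those three default lines can be sketched as a small shell function (should_save is a hypothetical helper, written on the assumption that a dump fires as soon as any one (seconds, changes) pair is satisfied):

```shell
# Sketch of the snapshot trigger implied by the three default save lines:
# a dump fires once ANY rule's (elapsed seconds, change count) pair is met.
should_save() {
  elapsed=$1
  changes=$2
  if [ "$elapsed" -ge 900 ] && [ "$changes" -ge 1 ];     then echo yes; return; fi
  if [ "$elapsed" -ge 300 ] && [ "$changes" -ge 10 ];    then echo yes; return; fi
  if [ "$elapsed" -ge 60 ]  && [ "$changes" -ge 10000 ]; then echo yes; return; fi
  echo no
}

should_save 1000 1    # yes: the 900-second rule applies
should_save 100 50    # no: too few changes for any rule
```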
10. Whether to compress the data when storing it to the local database; the default is yes. Redis uses LZF compression. Disable it to save CPU time, at the cost of a much larger database file:
rdbcompression yes
11. The local database file name; the default is dump.rdb:
dbfilename dump.rdb
12. The directory where the local database is stored:
dir ./
13. When this machine is a slave, set the master's IP address and port; at startup Redis automatically syncs data from the master:
slaveof <masterip> <masterport>
14. When the master is password-protected, the password the slave uses to connect to it:
masterauth <master-password>
15. The Redis connection password; when set, clients must authenticate with AUTH <password> when connecting. Disabled by default:
requirepass foobared
16. The maximum number of simultaneous client connections. By default there is no limit: Redis can open as many client connections as the process has available file descriptors, and maxclients 0 likewise means no limit. When the limit is reached, Redis closes new connections and returns a max number of clients reached error:
maxclients 128
17. The maximum memory limit for Redis. Redis loads its data into memory at startup; once the limit is reached, Redis first tries to evict keys that have expired or are about to expire, and if that still is not enough, writes fail while reads continue to work. Redis's new VM mechanism keeps keys in memory and puts values in the swap area:
maxmemory <bytes>
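In this 2.0-era configuration, maxmemory is given as a plain byte count (human-readable suffixes such as 100mb came in later releases), so the value has to be computed by hand; a quick sketch:

```shell
# 256 MB expressed in bytes for the maxmemory directive
mb=256
echo $((mb * 1024 * 1024))   # prints 268435456
```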
18. Whether to log every update operation. By default Redis writes data to disk asynchronously, so without this option some data may be lost for a period after a power failure: Redis itself only syncs the data file according to the save conditions above, so some data exists only in memory for a while. The default is no:
appendonly no
19. The update log (AOF) file name; the default is appendonly.aof:
appendfilename appendonly.aof
20. The update log sync condition, with three possible values:
no: let the operating system sync the data cache to disk (fast)
always: explicitly call fsync() after every update to write the data to disk (slow, safe)
everysec: sync once per second (a compromise, and the default)
appendfsync everysec
21. Whether to enable the virtual memory mechanism; the default is no. Briefly: the VM mechanism stores data in pages, and Redis swaps cold pages (those accessed least) out to disk, while frequently accessed pages are automatically swapped back from disk into memory (Redis's VM mechanism is analyzed in detail in a later article):
vm-enabled no
22. The virtual memory (swap) file path; the default is /tmp/redis.swap. It must not be shared between Redis instances:
vm-swap-file /tmp/redis.swap
23. Data larger than vm-max-memory is stored in virtual memory. No matter how small vm-max-memory is set, all index data stays in memory (for Redis the index data is the keys); in other words, with vm-max-memory set to 0, all values actually live on disk. The default is 0:
vm-max-memory 0
24. The Redis swap file is divided into many pages; one object can span multiple pages, but a page cannot be shared by multiple objects. vm-page-size should be set according to the size of the stored data: the author suggests 32 or 64 bytes per page when storing many small objects, a larger page size for very large objects, and the default if unsure:
vm-page-size 32
25. The number of pages in the swap file. Since the page table (a bitmap marking pages as free or in use) is kept in memory, every 8 pages on disk consume 1 byte of memory:
vm-pages 134217728
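The memory cost of that page table and the resulting swap-file size can be checked with a little arithmetic (treating the defaults above, vm-pages 134217728 and vm-page-size 32, as given):

```shell
pages=134217728
page_size=32
echo $((pages / 8))           # page-table RAM in bytes: 16777216 (16 MB)
echo $((pages * page_size))   # swap-file size in bytes: 4294967296 (4 GB)
```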
26. The number of threads used to access the swap file; best not to exceed the machine's core count. If set to 0, all operations on the swap file are serialized, which can cause fairly long delays. The default is 4:
vm-max-threads 4
27. Whether to glue smaller replies to clients together into a single packet when responding; enabled by default:
glueoutputbuf yes
28. Use a special compact hash encoding (zipmap) as long as the number of entries and the size of the largest element stay below these thresholds:
hash-max-zipmap-entries 64
hash-max-zipmap-value 512
29. Whether to enable incremental rehashing; enabled by default (covered in detail later when Redis's hashing is introduced):
activerehashing yes
30. Include other configuration files; multiple Redis instances on one host can share a common configuration file while each instance keeps its own specific file:
include /path/to/local.conf
------------------------------------------------------------------------------------------------------------------------------
The full configuration file:
# Redis configuration file example
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# When run as a daemon, Redis write a pid file in /var/run/redis.pid by default.
# You can specify a custom pid file location here.
pidfile /var/run/redis.pid
# Accept connections on the specified port, default is 6379
port 6379
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for connections.
#
# bind 127.0.0.1
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 300
# Set server verbosity to 'debug'
# it can be one of:
# debug (a lot of information, useful for development/testing)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel debug
# Specify the log file name. Also 'stdout' can be used to force
# the demon to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile stdout
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING #################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
save 900 1
save 300 10
save 60 10000
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# The filename where to dump the DB
dbfilename dump.rdb
# For default save/load DB in/from the working directory
# Note that you must specify a directory not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# requirepass foobared
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default there
# is no limit, and it's up to the number of file descriptors the Redis process
# is able to open. The special value '0' means no limts.
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 128
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
# maxmemory <bytes>
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. If you can live
# with the idea that the latest records will be lost if something like a crash
# happens this is the preferred way to run Redis. If instead you care a lot
# about your data and don't want to that a single record can get lost you should
# enable the append only mode: when this mode is enabled Redis will append
# every write operation received in the file appendonly.log. This file will
# be read on startup in order to rebuild the full dataset in memory.
#
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
#
# The name of the append only file is "appendonly.log"
#
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
appendonly no
# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only if one second passed since the last fsync. Compromise.
#
# The default is "always" that's the safer of the options. It's up to you to
# understand if you can relax this to "everysec" that will fsync every second
# or to "no" that will let the operating system flush the output buffer when
# it want, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting).
appendfsync always
# appendfsync everysec
# appendfsync no
############################### ADVANCED CONFIG ###############################
# Glue small output buffers together in order to send small replies in a
# single TCP packet. Uses a bit more CPU but most of the times it is a win
# in terms of number of queries per second. Use 'yes' if unsure.
glueoutputbuf yes
# Use object sharing. Can save a lot of memory if you have many common
# string in your dataset, but performs lookups against the shared objects
# pool so it uses more CPU and can be a bit slower. Usually it's a good
# idea.
#
# When object sharing is enabled (shareobjects yes) you can use
# shareobjectspoolsize to control the size of the pool used in order to try
# object sharing. A bigger pool size will lead to better sharing capabilities.
# In general you want this value to be at least the double of the number of
# very common strings you have in your dataset.
#
# WARNING: object sharing is experimental, don't enable this feature
# in production before of Redis 1.0-stable. Still please try this feature in
# your development environment so that we can test it better.
# shareobjects no
# shareobjectspoolsize 1024
Source: http://www.cnblogs.com/brokencode/archive/2011/11/14/2248858.html
4. Operating Redis with Jedis
package org.jzkangta.jedis;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;
import redis.clients.jedis.Jedis;
public class JedisDemo {
public void test1(){
Jedis redis = new Jedis("192.168.10.64", 6379);//connect to Redis
redis.auth("redis");//authenticate with the password
/* ----------------------------------------------------------------------------------------------------------- */
/** KEY operations
//KEYS
Set keys = redis.keys("*");//list all keys; to look up a specific key use e.g. redis.keys("foo")
Iterator t1=keys.iterator();
while(t1.hasNext()){
Object obj1=t1.next();
System.out.println(obj1);
}
//DEL removes one or more given keys; keys that do not exist are ignored.
redis.del("name1");
//TTL returns the remaining time to live of the given key, in seconds
redis.ttl("foo");
//PERSIST key removes the time to live from the given key.
redis.persist("foo");
//EXISTS checks whether the given key exists.
redis.exists("foo");
//MOVE key db moves a key from the current database (0 by default) to the given database db. MOVE has no effect if the source and destination databases hold a key with the same name, or if the key does not exist in the current database.
redis.move("foo", 1);//move the key foo to database 1
//RENAME key newkey renames key to newkey. It returns an error when key and newkey are the same or when key does not exist; when newkey already exists, RENAME overwrites the old value.
redis.rename("foo", "foonew");
//TYPE key returns the type of the value stored at key.
System.out.println(redis.type("foo"));//none (key does not exist), string, list, set, zset (sorted set), hash
//EXPIRE key seconds sets a time to live on the given key; when the key expires it is deleted automatically.
redis.expire("foo", 5);//expires in 5 seconds
//EXPIREAT works like EXPIRE and also sets a key's time to live, except that it takes a UNIX timestamp as its time argument.
//Basic SORT usage: the simplest form is SORT key.
redis.lpush("sort", "1");
redis.lpush("sort", "4");
redis.lpush("sort", "6");
redis.lpush("sort", "3");
redis.lpush("sort", "0");
List list = redis.sort("sort");//ascending by default
for(int i=0;i<list.size();i++){
System.out.println(list.get(i));
}
*/
/* ----------------------------------------------------------------------------------------------------------- */
/** STRING operations
//SET key value associates the string value with key.
redis.set("name", "wangjun1");
redis.set("id", "123456");
redis.set("address", "guangzhou");
//SETEX key seconds value associates value with key and sets the key's time to live to seconds.
redis.setex("foo", 5, "haha");
//MSET key value [key value ...] sets several key-value pairs at once.
redis.mset("haha","111","xixi","222");
//redis.flushAll(); clears all keys
System.out.println(redis.dbSize());//dbSize is the number of keys
//APPEND key value: if key already exists and holds a string, APPEND appends value to the key's current value.
redis.append("foo", "00");
//GET key returns the string value associated with key
redis.get("foo");
//MGET key [key ...] returns the values of all the given keys
List list = redis.mget("haha","xixi");
for(int i=0;i<list.size();i++){
System.out.println(list.get(i));
}
//DECR key decrements the number stored at key by one.
//DECRBY key decrement decrements the value stored at key by decrement.
//INCR key increments the number stored at key by one.
//INCRBY key increment increments the value stored at key by increment.
*/
/* ----------------------------------------------------------------------------------------------------------- */
/** Hash operations
//HSET key field value sets field in the hash stored at key to value.
redis.hset("website", "google", "www.google.cn");
redis.hset("website", "baidu", "www.baidu.com");
redis.hset("website", "sina", "www.sina.com");
//HMSET key field value [field value ...] sets several field-value pairs in the hash stored at key at once.
Map map = new HashMap();
map.put("cardid", "123456");
map.put("username", "jzkangta");
redis.hmset("hash", map);
//HGET key field returns the value of field in the hash stored at key.
System.out.println(redis.hget("hash", "username"));
//HMGET key field [field ...] returns the values of one or more given fields in the hash stored at key.
List list = redis.hmget("website","google","baidu","sina");
for(int i=0;i<list.size();i++){
System.out.println(list.get(i));
}
//HGETALL key returns all the fields and values of the hash stored at key.
Map<String,String> map2 = redis.hgetAll("hash");
for(Map.Entry entry: map2.entrySet()) {
System.out.print(entry.getKey() + ":" + entry.getValue() + "\t");
}
//HDEL key field [field ...] deletes one or more given fields from the hash stored at key.
//HLEN key returns the number of fields in the hash stored at key.
//HEXISTS key field checks whether field exists in the hash stored at key.
//HINCRBY key field increment increments the value of field in the hash stored at key by increment.
//HKEYS key returns all the fields of the hash stored at key.
//HVALS key returns all the values of the hash stored at key.
*/
/* ----------------------------------------------------------------------------------------------------------- */
/** LIST operations
//LPUSH key value [value ...] inserts value at the head of the list stored at key.
redis.lpush("list", "abc");
redis.lpush("list", "xzc");
redis.lpush("list", "erf");
redis.lpush("list", "bnh");
//LRANGE key start stop returns the elements of the list stored at key within the range given by the offsets start and stop. The indexes start and stop are zero-based, so 0 is the first element of the list, 1 the second, and so on. Negative indexes also work: -1 is the last element, -2 the second to last, and so on.
List list = redis.lrange("list", 0, -1);
for(int i=0;i<list.size();i++){
System.out.println(list.get(i));
}
//LLEN key returns the length of the list stored at key.
//LREM key count value removes elements equal to value from the list, with count controlling how many are removed.
*/
/* ----------------------------------------------------------------------------------------------------------- */
/** SET operations
//SADD key member [member ...] adds the member elements to the set stored at key.
redis.sadd("testSet", "s1");
redis.sadd("testSet", "s2");
redis.sadd("testSet", "s3");
redis.sadd("testSet", "s4");
redis.sadd("testSet", "s5");
//SREM key member removes the member element from the set.
redis.srem("testSet", "s5");
//SMEMBERS key returns all the members of the set stored at key.
Set set = redis.smembers("testSet");
Iterator t1=set.iterator();
while(t1.hasNext()){
Object obj1=t1.next();
System.out.println(obj1);
}
//SISMEMBER key member checks whether member is a member of the set stored at key: true if so, false otherwise
System.out.println(redis.sismember("testSet", "s4"));
//SCARD key returns the cardinality of the set stored at key (the number of elements in it).
//SMOVE source destination member moves member from the set source to the set destination.
//SINTER key [key ...] returns the members of the set resulting from the intersection of all the given sets.
//SINTERSTORE destination key [key ...] is equal to SINTER, but instead of simply returning the result set it stores it in destination
//SUNION key [key ...] returns the members of the set resulting from the union of all the given sets.
//SUNIONSTORE destination key [key ...] is equal to SUNION, but instead of simply returning the result set it stores it in destination.
//SDIFF key [key ...] returns the members of the set resulting from the difference of all the given sets.
//SDIFFSTORE destination key [key ...] is equal to SDIFF, but instead of simply returning the result set it stores it in destination.
*/
}
/**
* @param args
*/
public static void main(String[] args) {
JedisDemo t1 = new JedisDemo();
t1.test1();
}
}
Source: http://jzkangta.iteye.com/blog/1137428