Part 1: Redis Basics

Overview

Redis is a key-value database in which a value can be one of several data structures: string, hash, list, set, zset (sorted set), and more, covering a wide range of use cases. It also provides additional features such as key expiration, publish/subscribe, transactions, and pipelining.

Pipelining: Redis's pipeline feature lets a client send multiple command requests to the server at once (batching the submissions) and receive the results of all of them in a single reply. This greatly reduces the number of network round trips needed to execute many commands.
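The round-trip saving can be made concrete with a toy model. The `FakeServer` class below is purely illustrative (it is not the Redis protocol); it only counts how many request/reply exchanges happen when five commands are sent one at a time versus in one pipelined batch.

```python
class FakeServer:
    """Toy key-value server that counts network round trips."""
    def __init__(self):
        self.data = {}
        self.round_trips = 0

    def execute(self, commands):
        """One call = one round trip, however many commands it carries."""
        self.round_trips += 1
        results = []
        for op, key, *args in commands:
            if op == "SET":
                self.data[key] = args[0]
                results.append("OK")
            elif op == "GET":
                results.append(self.data.get(key))
        return results

server = FakeServer()

# Without pipelining: one round trip per command.
for i in range(5):
    server.execute([("SET", f"k{i}", i)])
naive_trips = server.round_trips

# With pipelining: all five commands travel in a single round trip.
server.round_trips = 0
replies = server.execute([("SET", f"k{i}", i) for i in range(5)])

print(naive_trips, server.round_trips)   # 5 vs 1
```

With a real client library the same idea applies: the client buffers commands locally and flushes them to the server in one write.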

Main Features

  1. Fast: data lives in memory; the official figure is around 100,000 reads/writes per second, though actual throughput also depends on the hardware

    1. Keeping data in memory is the main reason for the speed
    2. It is implemented in C, close to the operating system (which is itself written in C), so the code path is short
    3. It uses a single-threaded architecture, avoiding the contention problems multithreading can cause; commands submitted by concurrent clients are queued and executed in order on the server, so each individual command is effectively thread-safe
  2. It is a key-value data-structure server: whatever the workload, data is stored as key-value pairs, and the value types evolved from the String type into multiple structures to fit different scenarios

  3. Rich functionality: see the features listed above

  4. Simple and stable: single-threaded

  5. Persistence: data in memory would be lost on power failure or machine crash, so Redis can persist it to disk

  6. Master-replica replication: maintains multiple Redis copies of the same data

  7. High availability and distribution: the Sentinel mechanism provides high availability, detecting Redis node failures and performing automatic failover

  8. Many client languages: Java, PHP, Python, C, C++, Node.js, and more

Common Commands

Global commands
  • ./redis-server /opt/redis/redis.conf: start the Redis server with the configuration file at the given (user-chosen) path

  • ./redis-cli -h {IP} -p {port} -a {password}: interact with the server using the bundled Redis client

  • ./redis-cli -h 192.168.42.128 -p 6379 -a 12345678 set name Chinese: set a value in one-shot command mode

  • ./redis-cli -h 192.168.42.128 -p 6379 -a 12345678 get name: get the value of a key in one-shot command mode

  • ./redis-cli -h 192.168.42.128 -p 6379 -a 12345678 shutdown: shut down the server cleanly, disconnecting clients and writing a persistence file; killing the process directly instead can discard buffered data and lose AOF log entries and data

  • shutdown nosave|save: choose whether to perform one persistence save before shutting down

  • Redis versioning: an odd second version number marks a development release, an even one a stable release

  • keys: list all current keys; never run this in production — it blocks the server, and with a large dataset it can make the server briefly unavailable, which is very dangerous

  • dbsize: return the total number of keys

  • exists key: check whether a key exists; returns 1 if it does, 0 if not

  • expire name seconds: set a key's time to live

  • ttl name: show a key's remaining time to live

  • type key: show a key's type

  • flushall: delete all keys
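The expire/ttl semantics above can be sketched with a small in-memory model. This is an illustrative toy, not how Redis is implemented internally, but it reproduces the observable behavior: ttl returns -2 for a missing key, -1 for a key without an expiry, and the remaining seconds otherwise.

```python
import time

class ExpiringStore:
    """Toy model of EXPIRE/TTL: each key may carry an absolute expiry time."""
    def __init__(self):
        self.data = {}
        self.expires = {}            # key -> absolute unix timestamp

    def set(self, key, value):
        self.data[key] = value
        self.expires.pop(key, None)  # SET clears any previous expiry

    def expire(self, key, seconds):
        if key not in self.data:
            return 0
        self.expires[key] = time.time() + seconds
        return 1

    def ttl(self, key):
        if key not in self.data:
            return -2                # -2: key does not exist
        if key not in self.expires:
            return -1                # -1: key exists with no expiry
        remaining = self.expires[key] - time.time()
        if remaining <= 0:           # lazily delete an expired key
            del self.data[key]
            del self.expires[key]
            return -2
        return remaining

store = ExpiringStore()
store.set("name", "test")
print(store.ttl("name"))             # -1: no expiry set yet
store.expire("name", 100)
print(round(store.ttl("name")))      # close to 100
```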

Single-Threaded Architecture

Example: three clients issue commands at the same time
Client 1: set name test
Client 2: incr num
Client 3: incr num
Processing: send command -> execute command -> return result
Execution: a single thread executes everything; all commands enter a queue and run in order. I/O is handled with I/O multiplexing — the select/poll/epoll/kqueue family of functions — which lets one thread service many connections
Why single-threaded is fast: pure in-memory access, non-blocking I/O (multiplexing), and no resource cost from thread switching and contention
Drawback: one slow command blocks all the others
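The serialized-execution idea can be demonstrated with one consumer thread draining a command queue. The sketch below is a simplification (Redis uses I/O multiplexing, not Python threads), but it shows why concurrent INCRs need no locks on the data itself: only the single worker ever touches it.

```python
import threading, queue

commands = queue.Queue()
data = {"num": 0}
DONE = object()   # sentinel to stop the worker

def worker():
    """The single 'server thread': executes queued commands in order."""
    while True:
        cmd = commands.get()
        if cmd is DONE:
            break
        op, key = cmd
        if op == "INCR":
            data[key] = data[key] + 1   # safe: only this thread mutates data

t = threading.Thread(target=worker)
t.start()

# Two "clients" race to submit 1000 INCRs each.
def client():
    for _ in range(1000):
        commands.put(("INCR", "num"))

clients = [threading.Thread(target=client) for _ in range(2)]
for c in clients:
    c.start()
for c in clients:
    c.join()
commands.put(DONE)
t.join()

print(data["num"])   # 2000: no lost updates despite concurrent submitters
```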

String Type
  • set length 10 ex 10: expires after 10 seconds; px 10000 expires after 10000 milliseconds

  • set age 23 ex 10: expires after 10 seconds; px 10000 expires after 10000 milliseconds

  • setnx name test: sets the key only if name does not exist, returning 1 on success, 0 if the key already exists

  • set age 25 xx: sets the key only if age already exists, returning OK on success

  • get age: returns the value if the key exists, nil otherwise

  • mset country china city beijing: set multiple key-value pairs at once

  • mget country city address: batch get; returns china, beijing, and nil for address

  • Note: without mget you would need n separate get calls

  • incr age: increments an integer value by 1; returns an error for non-integers; if age does not exist, it starts from 0 and returns 1

  • decr age: decrements the integer age by 1

  • incrby age 2: adds 2 to the integer age

  • decrby age 2: subtracts 2 from the integer age

  • incrbyfloat score 1.1: adds 1.1 to the float score

  • set name hello; append name world: append makes the value helloworld

  • set hello "世界"; strlen hello: string length in bytes; returns 6, since each of these Chinese characters takes 3 bytes in UTF-8

  • set name helloworld ; getrange name 2 4: substring by inclusive index range; returns llo
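To make the nx/xx, append, getrange, and incr behaviors above concrete, here is a toy re-implementation over a Python dict. The function names and the inclusive-end convention mirror the commands; negative indices are deliberately not handled in this sketch.

```python
db = {}

def set_(key, value, nx=False, xx=False):
    """SET with optional NX (only if absent) / XX (only if present)."""
    if nx and key in db:
        return 0
    if xx and key not in db:
        return 0
    db[key] = value
    return 1

def append(key, s):
    """APPEND: concatenate onto the existing value, return new length."""
    db[key] = db.get(key, "") + s
    return len(db[key])

def getrange(key, start, end):
    """GETRANGE: the end index is inclusive (negative indices omitted here)."""
    return db.get(key, "")[start:end + 1]

def incr(key):
    """INCR: a missing key counts as 0, so the first INCR returns 1."""
    db[key] = str(int(db.get(key, "0")) + 1)
    return int(db[key])
```

For example, `set_("greet", "hello"); append("greet", "world")` yields length 10, and `getrange("greet", 2, 4)` yields "llo", matching the CLI examples.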

Hash

A hash is a map of string fields to values; it is especially well suited for storing objects

  • hset user:1 name xzm: set a field; returns 1 if the field is new, 0 if an existing field was overwritten
  • hget user:1 name: get a field; returns xzm
  • hdel user:1 age: delete fields; returns the number of fields removed
  • hset user:1 name xzm; hset user:1 age 23; hlen user:1: count fields; returns 2, since user:1 has two fields
  • hmset user:2 name xzm age 23 sex boy: batch set; returns OK
  • hmget user:2 name age sex: batch get; returns three lines: xzm 23 boy
  • hexists user:2 name: check whether a field exists; returns 1 if it does, 0 if not
  • hkeys user:2: list all fields; returns the three fields name age sex
  • hvals user:2: list all values of user:2; returns xzm 23 boy
  • hgetall user:2: list all fields and values of user:2; returns name age sex with xzm 23 boy
  • hincrby user:2 age 1: increment age by 1
  • hincrbyfloat user:2 age 2: increment by a float amount, here 2
Storing user data with the hash type

hmset user:1 name xzm age 23 sex boy
Pros: simple and intuitive; used sensibly, it can reduce memory consumption
Cons: you have to manage the conversion between the ziplist and hashtable encodings, and the hashtable encoding uses noticeably more memory
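The tradeoff between a hash and a single serialized String can be sketched as follows. The field names are the illustrative ones from the examples above, and JSON stands in for whatever serialization a String-based design would use: with a String you must read, modify, and rewrite the whole blob to change one field, while a hash lets you update a single field in place (HSET user:1 age 24).

```python
import json

# Option A: one String key holding serialized JSON.
# Changing one field means deserializing and rewriting the entire value.
users_str = {}
users_str["user:1"] = json.dumps({"name": "xzm", "age": 23})
obj = json.loads(users_str["user:1"])
obj["age"] = 24
users_str["user:1"] = json.dumps(obj)

# Option B: a Hash. One field is updated in place; the rest is untouched.
users_hash = {"user:1": {"name": "xzm", "age": 23}}
users_hash["user:1"]["age"] = 24
```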

Lists

  • rpush xzm c b a: push onto the right; inserts c, b, a in order, returns the new length 3

  • lrange xzm 0 -1: list all elements left to right; returns c b a. Indexes count 0, 1, 2, ... from the left and -1, -2, ... from the right

  • lpush key c b a: push c, b, a onto the left, one at a time

  • linsert xzm before b teacher: insert teacher before b (use after for the other side); lrange xzm 0 -1 then shows c teacher b a

  • lrange key start end: range query; indexes run from 0 to N-1 left to right

  • lindex xzm -1: returns the rightmost element a; -2 returns b

  • llen xzm: returns the current list length

  • lpop xzm: removes and returns the leftmost element c

  • rpop xzm: removes and returns the rightmost element a
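The push directions and index conventions above map neatly onto a double-ended queue. This sketch models a single list key with `collections.deque`; note how lpush pushes each value onto the left in turn, and how -1 means the last element in lrange.

```python
from collections import deque

items = deque()   # stands in for one list key

def rpush(*values):
    items.extend(values)             # rpush c b a -> c, b, a left to right
    return len(items)

def lpush(*values):
    for v in values:                 # each value is pushed onto the left in turn
        items.appendleft(v)
    return len(items)

def lrange(start, end):
    lst = list(items)
    if end < 0:
        end = len(lst) + end         # -1 means the last element
    return lst[start:end + 1]        # end index is inclusive

def lindex(i):
    return list(items)[i]

print(rpush("c", "b", "a"))          # 3
print(lrange(0, -1))                 # ['c', 'b', 'a']
print(lindex(-1))                    # 'a'
```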

Sets

Use cases: user tags, social features, finding people with shared interests, recommendations
A set holds multiple elements; unlike a list, it allows no duplicate elements and has no ordering. A single set can hold up to 2^32 - 1 elements. Besides adding, removing, and membership queries, sets support intersection, union, and difference;

  • exists user:1: check whether the key user:1 exists
  • sadd user:1 a b c: add 3 elements to the set; returns 3
  • sadd user:1 a b: adding already-present elements again has no effect; returns 0
  • smembers user:1: return all elements, in no particular order
  • srem user:1 a: remove element a; returns 1
  • scard user:1: count the elements; returns 2 after the removal above
  • sinter user:1 user:2: intersect multiple sets, as in the following session:
192.168.42.128:6379> sadd user:1 a b c d e
(integer) 5
192.168.42.128:6379> 
192.168.42.128:6379> 
192.168.42.128:6379> sadd user:2 a b c 
(integer) 3
192.168.42.128:6379> 
192.168.42.128:6379> sinter user:1 user:2
1) "c"
2) "a"
3) "b"
192.168.42.128:6379> sadd user:3 c
(integer) 1
192.168.42.128:6379> 
192.168.42.128:6379> 
192.168.42.128:6379> sinter user:1 user:2 user:3
1) "c"
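The SINTER session above maps directly onto Python sets, since Redis sets are likewise unordered collections of unique members:

```python
user1 = {"a", "b", "c", "d", "e"}
user2 = {"a", "b", "c"}
user3 = {"c"}

print(user1 & user2)           # intersection of user:1 and user:2
print(user1 & user2 & user3)   # intersection of all three: {'c'}
```
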
Sorted Sets (ZSET)
  • zadd key score member [score member...]

  • zadd user:zan 200 MountAndWater: sets MountAndWater's like count to 200; returns the number of newly added members, 1

  • zadd user:zan 200 MountAndWater 120 mike 100 lee: returns 3

  • zadd test:1 nx 100 MountAndWater: nx requires that the member does not yet exist; used mainly for additions

  • zadd test:1 xx incr 200 MountAndWater: xx requires that the member already exists; used mainly for updates; the score is now 300

  • zadd test:1 xx ch incr -299 MountAndWater: returns the new score 1 (300 - 299 = 1)

  • zrange test:1 0 -1 withscores: show members together with their scores (like counts)

  • zcard test:1: count the members; returns 1

Ranking example

  • zadd user:3 200 MountAndWater 120 mike 100 lee: insert the data

  • zrange user:3 0 -1 withscores: show members with scores

  • zrank user:3 MountAndWater: rank in ascending score order; returns 2, i.e. third place counting from 0

  • zrevrank user:3 MountAndWater: returns 0; descending order, so the most-liked member ranks first
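The zrank/zrevrank results above can be sketched by sorting a member-to-score mapping. This naive version re-sorts on every call (real Redis keeps a skiplist so ranks are cheap), but it reproduces the 0-based rank semantics with the same example data:

```python
scores = {"MountAndWater": 200, "mike": 120, "lee": 100}

def zrank(member):
    """0-based position in ascending score order (like ZRANK)."""
    ordered = sorted(scores, key=scores.get)
    return ordered.index(member)

def zrevrank(member):
    """0-based position in descending score order (like ZREVRANK)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(member)

print(zrank("MountAndWater"), zrevrank("MountAndWater"))   # 2 0
```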

Structure     Duplicates allowed  Ordered  Ordering mechanism  Use cases
List          yes                 yes      by index            timelines, message queues
Set           no                  no       -                   tags, social features
Sorted set    no                  yes      by score            leaderboards, like counts
Redis Database Management
Database management commands
select 0: switch to a database
flushdb: clear the current database
flushall: clear the data in every database of the Redis instance
dbsize: return the number of keys in the current database

16 databases are supported by default; each can be thought of as a namespace
Differences from relational databases

  • Redis databases cannot be given custom names
  • Databases cannot be granted permissions individually
  • The databases are not fully isolated from one another: flushall clears every database in the instance
  • select dbid switches to a different database namespace; dbid ranges from 0 to 15 by default
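The namespace model can be sketched as a fixed array of 16 dictionaries, where select merely switches an index. This also shows why the databases are not truly isolated: flushall reaches across all of them.

```python
databases = [dict() for _ in range(16)]   # 16 numbered namespaces
current = 0

def select(dbid):
    """SELECT: switch the current namespace; dbid must be 0..15."""
    global current
    assert 0 <= dbid <= 15
    current = dbid

def set_(key, value):
    databases[current][key] = value

def flushdb():
    """FLUSHDB clears only the current database..."""
    databases[current].clear()

def flushall():
    """...while FLUSHALL clears every database in the instance."""
    for db in databases:
        db.clear()

select(0); set_("k", "zero")
select(1); set_("k", "one")         # same key name, different namespace
before = (databases[0]["k"], databases[1]["k"])
flushall()
empty = all(len(db) == 0 for db in databases)
print(before, empty)                # ('zero', 'one') True
```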

Redis持久化机制

redis是一个支持持久化的内存数据库,也就是说redis需要经常将内存中的数据同步到磁盘来保证持久化,持久化可以避免因进程退出而造成数据丢失

RDB持久化

RDB持久化把当前进程数据生成快照(.rdb)文件保存到硬盘的过程,有手动触发和自动触发
手动触发有save和bgsave两命令
save命令:阻塞当前Redis,直到RDB持久化过程完成为止,若内存实例比较大会造成长时间阻塞,线上环境不建议用它
bgsave命令:redis进程执行fork操作创建子进程,由子线程完成持久化,阻塞时间很短(微秒级),是save的优化,在执行redis-cli shutdown关闭redis服务时,如果没有开启AOF持久化,自动执行bgsave;

命令:config set dir /usr/local  //设置rdb文件保存路径
备份:bgsave  //将dump.rdb保存到usr/local下
恢复:将dump.rdb放到redis安装目录与redis.conf同级目录,重启redis即可
优点:1,压缩后的二进制文,适用于备份、全量复制,用于灾难恢复
     2,加载RDB恢复数据远快于AOF方式
缺点:1,无法做到实时持久化,每次都要创建子进程,频繁操作成本过高
     2,保存后的二进制文件,存在老版本不兼容新版本rdb文件的问题
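The snapshot idea can be sketched with serialization: dump the whole dataset to a binary blob at one point in time, then restore by loading it back. Here `pickle` stands in for Redis's .rdb format (an assumption for illustration only); note that writes made after the dump are absent from the restored state.

```python
import pickle

data = {"name": "xzm", "age": 23}

snapshot = pickle.dumps(data)        # "bgsave": a full binary dump of the data

data["age"] = 99                     # later writes are NOT in the snapshot
del data["name"]

restored = pickle.loads(snapshot)    # "restart with dump.rdb present"
print(restored)                      # {'name': 'xzm', 'age': 23}
```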

RDB Persistence: the bgsave Execution Flow

(figure: bgsave execution flow diagram)

AOF Persistence

Because RDB is unsuitable for real-time persistence, Redis offers AOF persistence to fill that gap
Enable it in redis.conf with: appendonly yes (the default is no)
Default file name: appendfilename "appendonly.aof"

How it works:
1. Every write command (set, hset, ...) is appended to the aof_buf buffer
2. The AOF buffer is synced to disk
3. As the AOF file grows, it is periodically rewritten to compact it
4. When the Redis server restarts, it loads the AOF file to restore the data
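Steps 3 and 4 — replay on restart and compaction on rewrite — can be sketched with a plain list of commands as the log. Rewriting replays the log into the final state and then emits the minimal command list that reproduces it, which is why a rewritten AOF is smaller than the original:

```python
def replay(log):
    """Step 4: rebuild the dataset by replaying write commands in order."""
    state = {}
    for op, key, *args in log:
        if op == "SET":
            state[key] = args[0]
        elif op == "DEL":
            state.pop(key, None)
    return state

def rewrite(log):
    """Step 3: compact the log to one SET per surviving key."""
    state = replay(log)
    return [("SET", k, v) for k, v in state.items()]

log = [("SET", "a", 1), ("SET", "a", 2), ("SET", "b", 3), ("DEL", "b")]
compact = rewrite(log)
print(compact)   # [('SET', 'a', 2)] -- same final state, fewer commands
```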


Command append, file sync, file rewrite (BGREWRITEAOF), and load on restart

AOF configuration explained:

appendonly yes           //enable AOF persistence
appendfsync always       //fsync after every write command: the slowest option, but fully durable; not recommended

appendfsync everysec     //fsync once per second: a compromise between performance and durability; recommended

no-appendfsync-on-rewrite yes    //while an RDB snapshot is being written, whether to suspend fsyncing the AOF
auto-aof-rewrite-percentage 100  //rewrite when the AOF has grown 100% beyond its size after the last rewrite
auto-aof-rewrite-min-size 64mb   //only rewrite once the AOF is at least 64 MB

How to restore from an AOF?

  1. Set appendonly yes;
  2. Put appendonly.aof into the directory given by the dir parameter;
  3. Start Redis; it loads the appendonly.aof file automatically
In what order does Redis load AOF and RDB on restart?
  • When both an AOF and an RDB file exist, the AOF is loaded first
  • If AOF is disabled, the RDB file is loaded
  • If the AOF/RDB loads successfully, Redis starts normally
  • If the AOF/RDB is corrupt, startup fails and an error is printed


The redis.conf Configuration File

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
# 
# ./redis-server /path/to/redis.conf
# (the Redis server startup command)

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
# ... (sections omitted) ...
# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"
# (the default AOF file name)


# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# fsync() is provided by the operating system; calling it flushes page-cache
# data to disk. It can be invoked explicitly, or left to the OS's own flushing policy
#
# Redis supports three different modes (the three policies below):
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# (the default is to flush every second: everysec)
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always    (fsync on every write)
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# ... (sections omitted) ...
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
#
# This is currently turned off by default in order to avoid the surprise
# of a format change, but will at some point be used as the default.
aof-use-rdb-preamble no

################################ LUA SCRIPTING  ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER  ###############################
#                                Redis cluster configuration
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# ... (sections omitted) ...
aof-rewrite-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1

########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 25

# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75
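The LFU probabilistic counter described in the config comments above follows three steps: draw a random R in [0, 1), compute P = 1/(old_value * lfu_log_factor + 1), and increment only if R < P. The sketch below implements exactly that rule, with the 8-bit cap at 255 and the initial value of 5 for new keys; the exact count after many hits varies with the random draws, but it stays far below the hit count, matching the table.

```python
import random

def lfu_incr(counter, lfu_log_factor=10, rng=random.random):
    """One probabilistic LFU increment: P = 1/(counter*factor + 1)."""
    if counter >= 255:
        return 255                  # the counter is 8 bits, capped at 255
    p = 1.0 / (counter * lfu_log_factor + 1)
    return counter + 1 if rng() < p else counter

random.seed(42)
counter = 5                         # new keys start at 5 (see NOTE 2 above)
for _ in range(1000):               # simulate 1000 hits on one key
    counter = lfu_incr(counter)
print(counter)                      # grows logarithmically, far below 1000
```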

References:
http://antirez.com/post/redis-persistence-demystified.html
http://redisdoc.com/
