redis

Redis notes (following the 尚硅谷 course by 周阳)

一、Redis installation

1. Redis websites

Official site: redis.io
Chinese mirror: redis.cn

2. Download and install (Linux)

a. Download

redis-xxx-xxx.tar.gz

b. Install

  1. Upload the redis-xxx-xxx.tar.gz archive to the Linux machine, e.g. under /usr/local/wangsy/

  2. Extract it: tar -zxvf redis-xxx-xxx.tar.gz

  3. Install the gcc toolchain

    Check whether gcc is present: gcc -v

    If not, install it: yum install gcc-c++

  4. Build and install Redis

    ① make ② make install

  5. Create a myredis directory under the root directory: mkdir /myredis

  6. Copy the redis.conf config file into myredis: [root@db01 redis-3.0.1]# cp redis.conf /myredis

  7. Edit the copy (every later change is made to this file): vim /myredis/redis.conf

  8. Change into the directory that holds the executables: cd /usr/local/bin/

  9. Start the Redis server with your own redis.conf (command help: redis-server --help): ./redis-server /myredis/redis.conf

  10. Check that it started: ps -ef | grep redis

  11. Start the client: ./redis-cli

  12. 127.0.0.1:6379> ping — the server replies PONG
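The steps above can be condensed into one terminal session. This is a sketch: redis-xxx-xxx stands in for the actual version, and /usr/local/wangsy is the example upload directory from these notes.

```
$ cd /usr/local/wangsy
$ tar -zxvf redis-xxx-xxx.tar.gz
$ cd redis-xxx-xxx
$ make && make install
$ mkdir /myredis
$ cp redis.conf /myredis
$ cd /usr/local/bin
$ ./redis-server /myredis/redis.conf
$ ./redis-cli
127.0.0.1:6379> ping
PONG
```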

二、Miscellaneous notes after startup

1. Command notes

2. Redis databases

1. Note: Redis ships with 16 databases (numbered 0-15); database 0 is the default.	(On Linux, the Tab key completes commands.)

2. Commands:
	① Switch to a given database:		select <dbid>
	② Count of keys in the current db:	dbsize
	③ Empty the current db:		 flushdb
    ④ Empty every db:		 flushall
    ⑤ All keys in the current db:  keys *		(use with care: a real instance can hold a huge number of keys and this can stall it)
    ⑥ Filter keys by pattern:			keys xx*
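A short session illustrating these commands (key names are made up for the example; note how the prompt shows the selected database):

```
127.0.0.1:6379> select 2
OK
127.0.0.1:6379[2]> dbsize
(integer) 0
127.0.0.1:6379[2]> select 0
OK
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> keys k*
1) "k1"
127.0.0.1:6379> flushdb
OK
```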

三、The five common data types

Command reference: http://doc.redisfans.com/

3.0 Keys

1. List all keys: keys *

2. Delete one or more keys: del <key> [<key> ...]

Delete one key:	del k1
Delete several keys:	del k1 k2 k3...

3. Check whether a key exists: exists <key>

Does k1 exist:	exists k1	(1 = exists, 0 = does not)

4. Give a key a time to live: expire <key> <seconds>

Make k1 expire after 30 seconds:	expire k1 30

5. Remaining time to live in seconds: ttl <key>

ttl k1		(-2 means the key does not exist or has already expired; -1 means it never expires)

6. Move a key from the current db into another: move <key> <db>

7. Show a key's type: type <key>
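The key commands above in one session (the TTL shown will of course count down over time):

```
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> exists k1
(integer) 1
127.0.0.1:6379> expire k1 30
(integer) 1
127.0.0.1:6379> ttl k1
(integer) 30
127.0.0.1:6379> type k1
string
127.0.0.1:6379> move k1 1
(integer) 1
127.0.0.1:6379> exists k1
(integer) 0
```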

3.1 String

One key, one value

1. set/get/del/append/strlen : set / get / delete / append to the value / length of the value

2. incr/decr/incrby/decrby : add 1 / subtract 1 / add N / subtract N (the value must be numeric)

incr k1/decr k1/incrby k1 2/decrby k1 2

3. getrange/setrange : read a range of the value / overwrite part of the value

getrange <key> start end :  inclusive of both start and end; -1 is the last character, -2 the second to last, and so on.
setrange <key> start xxx :  write xxx starting at offset start

4. setex/setnx

setex: setex k1 10 v1	creates k1=v1 that expires after 10 seconds
setnx: setnx k1 v1		creates k1 only if it does not already exist

5. mset/mget/msetnx : batch set / batch get / batch set-if-absent (atomic)

mset:	mset k1 v1 k2 v2 k3 v3
mget:	mget k1 k2 k3
msetnx:	msetnx k4 v4 k5 v5	(atomic: if either k4 or k5 already exists, the whole command fails)

6. getset : return the old value, then set the new one
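A session exercising the string commands (values chosen for the example):

```
127.0.0.1:6379> set k1 100
OK
127.0.0.1:6379> incr k1
(integer) 101
127.0.0.1:6379> incrby k1 10
(integer) 111
127.0.0.1:6379> append k1 "abc"
(integer) 6
127.0.0.1:6379> getrange k1 0 2
"111"
127.0.0.1:6379> setnx k1 v2
(integer) 0
127.0.0.1:6379> getset k1 v2
"111abc"
```

Note how setnx returns 0 because k1 already exists, while getset hands back the old value "111abc" before overwriting it.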

3.2 List

One key, many values

1. lpush/rpush/lrange : push values one by one from the left / push from the right / read a range

lpush:	lpush list01 a b c d e		insertion direction: --->e d c b a
rpush:  rpush list02 a b c d e       insertion direction: a b c d e<---
lrange:	lrange list01 0 -1

2. lpop/rpop : pop one value from the left / from the right

3. lindex : read the value at an index

4. llen : length of the list

5. ltrim <start> <stop> : trim the list to the range and store the result back in the key

6. rpoplpush <source> <destination>

list01: 1 2 3	
list02: 4 5 6 
rpoplpush  list02  list01 		list01 becomes 6 1 2 3 (only one element is popped per call)

7. lset <key> <index> <value> : overwrite the value at an index

8. linsert <key> before|after <pivot> <value>

Insert a value before or after a given value in the list. If the pivot value occurs more than once, the first occurrence is used.

Summary: a list is a linked list of strings, and elements can be added at either end. Once every element has been removed, the key itself disappears. Operations at the head or tail are extremely fast; operations on middle elements are slow.
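The rpoplpush/ltrim example above, played out in a session:

```
127.0.0.1:6379> rpush list01 1 2 3
(integer) 3
127.0.0.1:6379> rpush list02 4 5 6
(integer) 3
127.0.0.1:6379> rpoplpush list02 list01
"6"
127.0.0.1:6379> lrange list01 0 -1
1) "6"
2) "1"
3) "2"
4) "3"
127.0.0.1:6379> ltrim list01 0 1
OK
127.0.0.1:6379> lrange list01 0 -1
1) "6"
2) "1"
```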

3.3 Hash (similar to a Map)

Still a key/value store, but the value is itself a set of field/value pairs

1. hset <key> <field> <value> : set one field

2. hmset <key> <field> <value> [<field> <value> ...] : set several fields

3. hget <key> <field> : read one field's value

4. hmget <key> <field> [<field> ...] : read several fields' values

5. hgetall <key> : read every field and value

6. hdel <key> <field> : delete a field

7. hlen <key> : number of fields

8. hkeys <key> : all field names

9. hvals <key> : all field values
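The hash commands in a session (user01 and its fields are made-up example data):

```
127.0.0.1:6379> hmset user01 name zhangsan age 20
OK
127.0.0.1:6379> hget user01 name
"zhangsan"
127.0.0.1:6379> hkeys user01
1) "name"
2) "age"
127.0.0.1:6379> hlen user01
(integer) 2
127.0.0.1:6379> hdel user01 age
(integer) 1
127.0.0.1:6379> hgetall user01
1) "name"
2) "zhangsan"
```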

3.4 Set

Like a List, but with no duplicate elements

1. sadd <key> <member> [<member> ...] : add members

2. srem <key> <member> : remove a member

3. smembers <key> : list every member

4. scard <key> : number of members

5. srandmember <key> <count> : return <count> random members

6. spop <key> : pop one random member
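A session showing deduplication: c is added twice but stored once (the order smembers returns is not guaranteed):

```
127.0.0.1:6379> sadd set01 a b c c
(integer) 3
127.0.0.1:6379> smembers set01
1) "a"
2) "b"
3) "c"
127.0.0.1:6379> scard set01
(integer) 3
127.0.0.1:6379> srem set01 a
(integer) 1
127.0.0.1:6379> scard set01
(integer) 2
```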

3.5 Zset (sorted set)

A set with an ordering: each member carries a numeric score, and members are kept sorted by that score.
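The notes stop short here, so a minimal sketch of the core commands, zadd and zrange (the rank key and member names are made up): members come back sorted by ascending score.

```
127.0.0.1:6379> zadd rank 90 tom 80 jack 95 rose
(integer) 3
127.0.0.1:6379> zrange rank 0 -1
1) "jack"
2) "tom"
3) "rose"
127.0.0.1:6379> zrange rank 0 -1 withscores
1) "jack"
2) "80"
3) "tom"
4) "90"
5) "rose"
6) "95"
```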

四、The configuration file in detail (redis.conf)

4.1 The redis.conf file

redis.conf: D:\API\笔记\后端框架\redis\redis.conf

4.2 Section-by-section notes
4.2.1 Units
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
<!-- 1k vs 1kb, 1m vs 1mb, 1g vs 1gb are not the same -->
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
<!-- case insensitive -->
# units are case insensitive so 1GB 1Gb 1gB are all the same.
4.2.2 Includes
################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
<!-- Other files can be pulled in with include; this redis.conf can act as the master file that includes the rest -->
# include /path/to/local.conf
# include /path/to/other.conf
4.2.3 General
################################ GENERAL  #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
<!-- Run as a daemon (comparable to the mysql service on Windows); the default is no, changed to yes here -->
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
<!-- Once the service is up, the Redis pid is written to /var/run/redis.pid -->
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
<!-- Redis port number -->
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
<!-- Address binding -->
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

<!-- Close the connection after the client has been idle this many seconds -->
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

<!-- In seconds; 0 disables keepalive probing. 60 is a sensible setting, and this is usually set when running several Redis instances (or a cluster) -->

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

<!-- Redis log levels: debug, verbose, notice, warning -->
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

<!-- Log file; with "" the log goes to standard output -->
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

<!-- Whether to also log to the system logger; off by default -->
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

<!-- Syslog identity: entries are tagged redis -->
# Specify the syslog identity.
# syslog-ident redis

<!-- Syslog output facility -->
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

<!-- Redis defaults to 16 databases, numbered 0-15 -->
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using Select dbid where
# dbid is a number between 0 and 'databases'-1
databases 16
4.2.4 Snapshotting
################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
<!-- dump.rdb is written when any one of the three rules below fires: within N seconds, at least M keys changed. save "" turns RDB off -->
save 900 1
save 300 10
save 60 10000
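For example, a tighter snapshot schedule would look like this in /myredis/redis.conf (the numbers here are illustrative, not a recommendation), and a lone `save ""` would disable RDB entirely:

```
save 600 1
save 120 100
save 30 10000
# or, to disable RDB snapshots:
# save ""
```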

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
<!-- If the background save fails, stop accepting foreground writes -->
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
<!-- Compress snapshots, LZF by default; the performance impact is negligible -->
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
<!-- After compression a checksum is appended to the snapshot; it costs some CPU, but little enough that leaving it on is fine -->
rdbchecksum yes

# The filename where to dump the DB
<!-- Snapshot file name -->
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
<!-- Directory the snapshot file is written to -->
dir ./
4.2.5 Replication
################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
4.2.6 Security
Note: a Redis password is usually not set; Redis sits behind Linux-level security.
	config get *		: list every setting that config get can read
	config get dir		: the directory Redis was started from; related files sometimes end up there
Setting a password:
	config get requirepass		: show the current password
	config set requirepass <password> : set a password
	auth <password>				  : authenticate from the client
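A session showing the flow (the password abc123 is a throwaway example; note that once requirepass is set, even the connection that set it must authenticate):

```
127.0.0.1:6379> config set requirepass "abc123"
OK
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth abc123
OK
127.0.0.1:6379> ping
PONG
```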
################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
4.2.7 Limits
################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
<!-- maxclients: maximum number of simultaneous client connections -->
# maxclients 10000

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
<!-- Memory cap; maxmemory <bytes> sets the largest amount of memory Redis may use -->
# maxmemory <bytes>

<!--
	Eviction policy: what happens when maxmemory is reached (the default is noeviction, i.e. never evict). Six policies:
        ①volatile-lru    : least recently used; evict by LRU, but only among keys with an expiry set.
        ②allkeys-lru     : least recently used; evict by LRU among all keys.
        ③volatile-random : evict a random key among those with an expiry set.
        ④allkeys-random  : evict a random key.
        ⑤volatile-ttl    : evict the key with the smallest TTL, i.e. the one closest to expiring.
        ⑥noeviction      : never evict; instead, write operations return an error.
-->
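Putting the last few directives together, a cache-style configuration might look like this (the 100mb cap is an illustrative value, not a recommendation):

```
maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5
```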
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

<!-- Sample size; 5 is good enough. LRU and minimal-TTL eviction are approximations, not exact algorithms -->
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
4.2.8 Append only mode
############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
<!-- Enable AOF persistence -->
appendonly yes

# The name of the append only file (default: "appendonly.aof")
<!-- AOF file name -->
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

<!-- How writes are fsynced to the AOF file:
	always   : synchronous persistence; every change is written to disk immediately. Poor performance, best data integrity.
	everysec : the factory default; asynchronous, fsync once per second, so a crash can lose up to one second of data.
	no       : never fsync; let the operating system flush whenever it wants. Fastest.
-->
# appendfsync always
# appendfsync no
appendfsync everysec

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
<!-- Whether fsync may be skipped while a rewrite is running (i.e. while a bloated appendonly.aof is being compacted); keep the default no for data safety -->
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
<!-- Rewrite trigger: the file has grown to double its size since the last rewrite AND is larger than 64 MB -->
auto-aof-rewrite-percentage 100    <!-- 100% growth since the last rewrite -->
auto-aof-rewrite-min-size 64mb	  <!-- and the file exceeds 64 MB -->

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
4.3 Common redis.conf settings
The commonly used redis.conf configuration items are explained below:
-------------------------------------------------------------------------------------------------------------------
1. Redis does not run as a daemon by default; set this to yes to enable daemon mode
   daemonize no
2. When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; pidfile changes the path
   pidfile /var/run/redis.pid
3. The port Redis listens on; default 6379
   port 6379
4. The host address to bind to
   bind 127.0.0.1
5. Close a connection after the client has been idle for this many seconds; 0 disables the timeout
   timeout 300
6. Logging level; Redis supports four levels: debug, verbose, notice, warning; default verbose
   loglevel verbose
7. Log output, standard output by default. Note that if Redis runs as a daemon while the log output is still configured as standard output, logs are sent to /dev/null
   logfile stdout
8. Number of databases; the default database is 0; switch with SELECT <dbid>
   databases 16
9. Sync data to the data file when the given number of update operations happens within the given time window; multiple conditions can be combined.
   The default configuration file provides three conditions:
   save 900 1
   save 300 10
   save 60 10000
   They mean: 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds.
10. Whether to compress data when dumping to the local database; default yes (Redis uses LZF compression). Disabling it saves CPU time but makes the dump file much larger
   rdbcompression yes
11. Local database file name; default dump.rdb
   dbfilename dump.rdb
12. Directory where the local database file is stored
   dir ./
13. When this instance is a slave, the master's IP address and port; on startup the slave automatically syncs data from the master
   slaveof (masterip) (masterport)
14. Password the slave uses to connect when the master is password-protected
   masterauth (master-password)
15. Connection password; if set, clients must authenticate with AUTH <password> when connecting. Disabled by default
   requirepass foobared
16. Maximum number of simultaneous client connections. Unlimited by default (bounded by the maximum number of file descriptors the Redis process can open); maxclients 0 means no limit. Once the limit is reached, Redis closes new connections and returns a "max number of clients reached" error
   maxclients 128
17. Maximum memory limit. Redis loads data into memory on startup; when the limit is reached it first tries to evict keys that are expired or about to expire, and if that is still not enough, writes fail while reads keep working. With the (old) VM mechanism, keys stay in memory and values go to the swap area
   maxmemory (bytes)
18. Whether to log every update operation. By default Redis writes data to disk asynchronously according to the save conditions above, so some data exists only in memory for a while and can be lost on power failure. Default no
   appendonly no
19. Append-only log file name; default appendonly.aof
   appendfilename appendonly.aof
20. When to sync the append-only log; three options:
  no: let the operating system flush the cache to disk (fast)
  always: call fsync() after every update operation (slow, safe)
  everysec: sync once per second (compromise, default)
  appendfsync everysec
21. Whether to enable the virtual memory mechanism; default no. Briefly: VM stores data in pages, and Redis swaps rarely accessed pages (cold data) out to disk, while frequently accessed pages are swapped back into memory
   vm-enabled no
22. Virtual memory swap file path; default /tmp/redis.swap; must not be shared by multiple Redis instances
   vm-swap-file /tmp/redis.swap
23. Data above vm-max-memory goes into virtual memory. However small vm-max-memory is set, all index data (the keys) stays in memory; with vm-max-memory 0, all values actually live on disk. Default 0
   vm-max-memory 0
24. The swap file is split into many pages; one object may span several pages, but a page cannot be shared by multiple objects. vm-page-size should match the stored data: for many small objects, a page size of 32 or 64 bytes is recommended; for very large objects use a larger page; when unsure, use the default
   vm-page-size 32
25. Number of pages in the swap file. The page table (a bitmap marking pages free or in use) is kept in memory; every 8 pages on disk consume 1 byte of memory
   vm-pages 134217728
26. Number of threads accessing the swap file; best kept at or below the number of machine cores. With 0, all swap-file operations are serialized, which may cause long delays. Default 4
   vm-max-threads 4
27. Whether to combine smaller packets into one packet when replying to clients; enabled by default
  glueoutputbuf yes
28. Use a special hash encoding when the number of entries and the largest element stay below the given thresholds
  hash-max-zipmap-entries 64
  hash-max-zipmap-value 512
29. Whether to enable incremental rehashing; enabled by default
  activerehashing yes
30. Include other configuration files, so several Redis instances on one host can share one common configuration file while each instance keeps its own specific settings
  include /path/to/local.conf

5. Persistence

Redis supports 2 persistence mechanisms: RDB and AOF.
5.1 Persistence - RDB

Corresponding section in redis.conf: Snapshotting

5.1.1 What it is
	At specified time intervals, a point-in-time snapshot of the in-memory dataset is written to disk. This is the Snapshot; on recovery, the snapshot file is read straight back into memory.

	Redis forks a separate child process to do the persistence. The child first writes the data to a temporary file; only when persistence has finished does the temporary file replace the previous dump file.

	The main process performs no disk IO during the whole procedure, which guarantees very high performance. For large-scale data recovery where strict completeness of the recovered data is not critical, RDB is more efficient than AOF. RDB's drawback: data written after the last snapshot may be lost.

		fork creates a copy of the current process. The new process's data (variables, environment variables, program counter, etc.) is identical to the original's, but it is a brand-new process, a child of the original.
5.1.2 The file RDB saves

dump.rdb (the name can be changed in the configuration file)

On startup, Redis automatically loads this dump.rdb file.

5.1.3 Trigger conditions
1. In the configuration file: a snapshot is taken when any one of the following conditions is met, save <seconds> <number of keys changed>

save 900 1
save 300 10
save 60 10000

2. Manual: running flushall in the client immediately generates a snapshot file, but the file is empty, so it is useless.
3. Manual: running save in the client immediately generates a snapshot file. save does nothing but save: it blocks everything else.
4. Manual: running bgsave makes Redis take the snapshot asynchronously in the background while still answering client requests. lastsave returns the time of the last successful snapshot.
---------------------------------------------------------------------------------------------------------------------------------------
The generated snapshot file is usually backed up to another machine: cp dump.rdb dump_copy.rdb
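The save rules above act as an OR: a snapshot triggers as soon as any one (time window, change count) pair is satisfied. A minimal sketch of that check (illustrative Python; the names are assumptions, not Redis source code):

```python
# Illustrative sketch of the RDB "save <seconds> <changes>" trigger logic.

SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]  # save 900 1 / save 300 10 / save 60 10000

def should_snapshot(seconds_since_last_save: int, changes_since_last_save: int) -> bool:
    """A snapshot triggers when ANY rule is satisfied."""
    return any(seconds_since_last_save >= seconds and changes_since_last_save >= changes
               for seconds, changes in SAVE_RULES)

print(should_snapshot(30, 5))      # too soon, too few changes -> False
print(should_snapshot(61, 10000))  # matches "save 60 10000"   -> True
print(should_snapshot(901, 1))     # matches "save 900 1"      -> True
```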
5.1.4 How to restore
1. Get the redis startup directory: config get dir
2. Put the backed-up snapshot file into that directory
5.1.5 Pros and cons
Pros: well suited to large-scale data recovery when completeness and consistency requirements are loose.
Cons: changes made after the last snapshot may be lost; during fork, the in-memory data is cloned, so memory use can roughly double.
5.2 Persistence - AOF
5.2.1 Concept
	Every write operation is recorded as a log entry. The file is only ever appended to, never rewritten in place. The file is appendonly.aof (the name can be changed in the configuration file).
AOF is disabled by default in the configuration file.
	See the configuration file for details.
5.2.2 Configuration notes

1. Coexistence of dump.rdb and appendonly.aof

They can coexist, but by default appendonly.aof is the file that gets loaded.

2. Repairing a damaged appendonly.aof file

If appendonly.aof is corrupted, Redis fails to start when loading it. It can be repaired with redis-check-aof --fix
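The append-and-replay idea behind AOF can be sketched as a toy model (pure Python; the real appendonly.aof stores commands in the RESP protocol, this is only the concept):

```python
# Toy model of AOF: append every write command to a log, replay the log to recover.

aof_log = []   # stands in for appendonly.aof
store = {}

def set_key(key, value):
    store[key] = value
    aof_log.append(("SET", key, value))  # the write command is appended, never rewritten

def replay(log):
    """Recovery: replay the log from the start to rebuild the dataset."""
    recovered = {}
    for cmd, key, value in log:
        if cmd == "SET":
            recovered[key] = value
    return recovered

set_key("k1", "v1")
set_key("k1", "v2")     # both entries stay in the log (append-only)
print(replay(aof_log))  # {'k1': 'v2'}
```

Note how overwriting a key leaves two log entries but replaying still yields the final state; this ever-growing log is exactly what the rewrite mechanism below compacts.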
5.2.3 Rewrite mechanism (rewrite)
Concept:
	Since AOF only appends, the file keeps growing. To avoid this, Redis provides a rewrite mechanism: when the aof file exceeds the threshold set in the configuration file, Redis compacts the aof contents, keeping only the minimal set of commands needed to rebuild the data. It can also be triggered manually with the bgrewriteaof command.

Trigger condition:
	Redis records the aof size at the last rewrite; with the default configuration, a rewrite triggers when the aof file has doubled since the last rewrite and the file is larger than 64mb.
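That default trigger can be sketched as a small check (illustrative Python; the two thresholds correspond to the config directives auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb):

```python
# Sketch of the default AOF rewrite trigger: rewrite when the file has
# doubled since the last rewrite AND exceeds 64 MB.

MIN_SIZE = 64 * 1024 * 1024  # auto-aof-rewrite-min-size 64mb
GROWTH = 100                 # auto-aof-rewrite-percentage 100 (i.e. +100%, doubled)

def should_rewrite(current_size: int, size_after_last_rewrite: int) -> bool:
    grown = current_size >= size_after_last_rewrite * (1 + GROWTH / 100)
    return grown and current_size > MIN_SIZE

print(should_rewrite(130 * 1024 * 1024, 64 * 1024 * 1024))  # doubled and > 64mb -> True
print(should_rewrite(10 * 1024 * 1024, 4 * 1024 * 1024))    # doubled but <= 64mb -> False
```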
5.2.4 Pros and cons
Pros: flexible sync policies: record every operation, sync once per second, or never sync.
Cons: the aof file keeps growing, and recovery from it is slower.

6. Redis Transactions

6.1 Concept
	Execute several commands in one go; essentially a queued set of commands. Redis only partially supports transactions and does not support rollback.
	Common commands:
		MULTI : mark the start of a transaction block.
		EXEC : execute all commands in the transaction block.
		DISCARD : cancel the transaction, abandoning all commands in the block.
		WATCH key [key ...] : watch one or more keys; if a watched key is modified by another command before the transaction executes, the transaction is aborted.
6.2 Five scenarios

1. Normal execution

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 1
QUEUED
127.0.0.1:6379> set k2 2
QUEUED
127.0.0.1:6379> incr k1 
QUEUED
127.0.0.1:6379> incrby k2 2
QUEUED
127.0.0.1:6379> exec
1) OK
2) OK
3) (integer) 2
4) (integer) 4
127.0.0.1:6379> 

2. Discarding a transaction

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 1
QUEUED
127.0.0.1:6379> get k1 
QUEUED
127.0.0.1:6379> discard

3. One wrong command makes every command fail

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 1
QUEUED
127.0.0.1:6379> set k2 2
QUEUED
127.0.0.1:6379> set k3 // wrong command
(error) ERR wrong number of arguments for 'set' command
127.0.0.1:6379> get k1 
QUEUED
127.0.0.1:6379> exec
(error) EXECABORT Transaction discarded because of previous errors.

4. The correct commands run, the wrong one throws

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 1
QUEUED
127.0.0.1:6379> set email wang@163.com
QUEUED
127.0.0.1:6379> incr k1
QUEUED
127.0.0.1:6379> incr email // incr cannot add 1 to a string. The difference from scenario 3: there the error was raised when the command was queued, here it is raised at execution time. This is why Redis only partially supports transactions
QUEUED
127.0.0.1:6379> exec
1) OK
2) OK
3) (integer) 2
4) (error) ERR value is not an integer or out of range
127.0.0.1:6379> keys *
1) "k1"
2) "email"
127.0.0.1:6379> 
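The contrast between scenarios 3 and 4 can be modeled with a toy MULTI/EXEC simulation (illustrative Python, not a Redis client; class and method names are assumptions): a queue-time error discards the whole transaction, a runtime error fails only its own command.

```python
# Toy model of MULTI/EXEC: a queue-time (syntax) error discards the whole
# transaction, while an exec-time (runtime) error fails only that command.

class ToyTransaction:
    def __init__(self, store):
        self.store, self.queue, self.aborted = store, [], False

    def queue_cmd(self, cmd, *args):
        # Scenario 3: argument count is checked when the command is queued.
        if cmd == "SET" and len(args) != 2:
            self.aborted = True
            return "(error) ERR wrong number of arguments"
        self.queue.append((cmd, args))
        return "QUEUED"

    def exec(self):
        if self.aborted:
            return "(error) EXECABORT Transaction discarded because of previous errors."
        results = []
        for cmd, args in self.queue:
            if cmd == "SET":
                self.store[args[0]] = args[1]
                results.append("OK")
            elif cmd == "INCR":
                try:  # Scenario 4: a runtime error fails only this command
                    self.store[args[0]] = int(self.store.get(args[0], 0)) + 1
                    results.append(self.store[args[0]])
                except ValueError:
                    results.append("(error) ERR value is not an integer or out of range")
        return results

t = ToyTransaction({})
t.queue_cmd("SET", "k1", "1")
t.queue_cmd("SET", "email", "wang@163.com")
t.queue_cmd("INCR", "k1")
t.queue_cmd("INCR", "email")
print(t.exec())  # ['OK', 'OK', 2, '(error) ERR value is not an integer or out of range']
```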

5. watch monitoring

Pessimistic locks, optimistic locks (with version numbers), table locks, row locks

// 1. Set up credit-card balance and debt
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set balance 100
QUEUED
127.0.0.1:6379> set dept 0
QUEUED
127.0.0.1:6379> decrby balance 20
QUEUED
127.0.0.1:6379> incrby dept 20
QUEUED
127.0.0.1:6379> exec
1) OK
2) OK
3) (integer) 80
4) (integer) 20

// 2. watch balance; the key is not modified before exec
127.0.0.1:6379> watch balance
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby balance 20
QUEUED
127.0.0.1:6379> incrby dept 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 60
2) (integer) 40

// 3. watch balance; the key is modified before exec
127.0.0.1:6379> watch balance
OK
127.0.0.1:6379> set balance 200
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby balance 20
QUEUED
127.0.0.1:6379> incrby dept 20
QUEUED
127.0.0.1:6379> exec  // the transaction is aborted
(nil)
-------------------------------------------------------------------------------------------------------------------
Summary: the watch command works like an optimistic lock: at commit time (exec), if the value of a watched key has been changed by another client, the whole transaction queue is discarded without being executed.
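The check-and-set retry pattern behind watch can be sketched in plain Python (a simulation of the idea only; against a real server the same loop would use WATCH/MULTI/EXEC and retry on a nil reply):

```python
# Simulation of the optimistic-lock idea behind WATCH (pure Python, no Redis):
# remember the value seen at WATCH time and commit only if it is unchanged.

store = {"balance": 100, "dept": 0}   # same keys as the sessions above

def transfer(amount, interfere=None):
    """Move `amount` from balance to dept; retry if balance changed mid-flight."""
    while True:
        watched = store["balance"]        # WATCH balance: remember what we saw
        if interfere:                     # another client modifies the key here
            interfere()
            interfere = None
        if store["balance"] != watched:   # EXEC would return (nil): retry
            continue
        store["balance"] -= amount        # the queued commands run atomically
        store["dept"] += amount
        return store["balance"], store["dept"]

print(transfer(20))  # (80, 20)
# a concurrent write invalidates the first attempt; the retry succeeds
print(transfer(20, interfere=lambda: store.update(balance=200)))  # (180, 40)
```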

7. Redis Replication

7.1 Concept
Master/slave replication: after the master's data is updated, it is automatically synchronized to the slaves according to the configured strategy. The master is primarily for writes, the slaves primarily for reads.
7.2 Operations

1. Configure the slave (library), not the master

2. Modify the configuration file

1. Copy redis.conf
2. Enable daemonize yes
3. pid file name
4. port
5. log file name
6. dump.rdb file name

3. Slave configuration:

slaveof  <master ip>  <master port>
----------------------------------------------
info replication	: show replication info for the current instance
A slave must rerun slaveof every time it disconnects from the master, unless the setting is written into redis.conf.

4. 3 common topologies

  • One master, two slaves (1 master, 2 slaves)

    Ⅰ. Start three redis servers
    
    // instance 1: 6379
    [root@db01 bin]# redis-server /myredis/redis6379.conf
    [root@db01 bin]# redis-cli -p 6379
    127.0.0.1:6379> info replication  # show replication info
    # Replication
    role:master						# this instance is a master
    connected_slaves:0	
    master_repl_offset:0
    repl_backlog_active:0
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:0
    repl_backlog_histlen:0
    
    // instance 2: 6380
    [root@db01 bin]# redis-server /myredis/redis6380.conf 
    [root@db01 bin]# redis-cli -p 6380
    127.0.0.1:6380> info replication  # show replication info
    # Replication
    role:master						# this instance is a master
    connected_slaves:0
    master_repl_offset:0
    repl_backlog_active:0
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:0
    repl_backlog_histlen:0
    
    // instance 3: 6381
    [root@db01 bin]# redis-server /myredis/redis6381.conf 
    [root@db01 bin]# redis-cli -p 6381 
    127.0.0.1:6381> info replication  # show replication info
    # Replication
    role:master						# this instance is a master
    connected_slaves:0
    master_repl_offset:0
    repl_backlog_active:0
    repl_backlog_size:1048576
    repl_backlog_first_byte_offset:0
    repl_backlog_histlen:0
    
    Ⅱ. Run slaveof 127.0.0.1 6379 on both 6380 and 6381
    	# 6380 and 6381 now become slaves; check both again with info replication
    ----------------------------------------------------------------------------------------------------------------------
    Notes:
    	a. After slaveof <master ip> <master port>, the slave immediately holds the master's data.
    	b. If the master goes down and restarts, it is still the master, with its 2 slaves still attached. If a slave goes down and restarts, it comes back as a master, and its slave role is lost.
    
  • Chained replication ("passing the torch")

    A master
    B slaveof A_ip  A_port	
    C slaveof B_ip  B_port
    --------------------------------------------------------------------------------------------------------------------
    A replicates to B, B replicates to C, and so on...
    
  • Promoting a slave ("the guest becomes the host")

    Before: 
    	A master	
    	B slaveof A	
    	C slaveof A
    --------------------------------------------------------------------------------------------------------------------
    After:
    	A goes down.
    	On B's client run: slaveof no one	(turns B into a master)
    	On C run: slaveof B
    	B and C now form their own replication group, with no relation to A.
    
  • Sentinel mode (important)

    Description:	the automatic version of "promoting a slave": a new master is elected from the slaves by voting.
    -------------------------------------------------------------------------------------------------------------------
    Steps:
    	① create a sentinel.conf file
    	② add the line: sentinel monitor <monitored-db-name (chosen by you)>  <ip>  <port>  <votes>
    		votes: after the master goes down, the slaves vote on who takes over; whoever reaches this number of votes becomes the new master.
    	③ start the sentinel: redis-sentinel /path/sentinel.conf
    	
    Note:
    	If the original master comes back, it becomes a slave.
    
    Drawback:
    	replication lag
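For the one-master setup above, a minimal sentinel.conf might contain just one line (the name mymaster and the quorum of 1 vote are illustrative choices, not required values):

```conf
# minimal sentinel.conf sketch: monitor the local master on 6379,
# 1 vote needed to declare it down and start a failover
sentinel monitor mymaster 127.0.0.1 6379 1
```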
    