An Introduction to Redis

Installation

  • Install gcc

    yum install gcc
    
  • Download Redis

    wget https://download.redis.io/releases/redis-6.2.1.tar.gz
    
  • Extract

    tar xzf redis-6.2.1.tar.gz
    
  • Compile

    cd redis-6.2.1
    make
    
  • Start Redis

    src/redis-server
    
  • Test by inserting a value

    set foo bar
    

    This completes a basic Redis installation. Next, tweak a few settings in redis.conf to make it easier to use:

    daemonize yes  # start as a background daemon

    Comment out the bind directive:
    #bind 127.0.0.1  # bind controls which of the machine's NIC IPs clients may connect through

  • Restart

    src/redis-server redis.conf
    
  • Enter the client

    src/redis-cli
    
  • Exit

    quit
    
  • Check the maximum number of client connections Redis allows; it can be changed in redis.conf (# maxclients 10000)

     CONFIG GET maxclients
    
  • View Redis server runtime information

    info
    
  • The info output is divided into 9 sections:
    Server       server environment parameters
    Clients      client-related information
    Memory       memory usage statistics
    Persistence  persistence information
    Stats        general statistics
    Replication  master-replica replication information
    CPU          CPU usage
    Cluster      cluster information
    Keyspace     per-database key count statistics
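The install walkthrough above can be replayed in one short redis-cli session (a sketch assuming a server started with defaults on 127.0.0.1:6379):

```
$ src/redis-cli
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> get foo
"bar"
127.0.0.1:6379> CONFIG GET maxclients
1) "maxclients"
2) "10000"
127.0.0.1:6379> quit
```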

Common Data Types and Data Structures

See: Redis's 5 data types and data structures

Common Commands

A collection of common Redis commands
1) Connection commands

  • quit: close the connection
  • auth: simple password authentication
  • help cmd: show help for cmd, e.g. help quit

2) Persistence

  • save: synchronously save the dataset to disk
  • bgsave: asynchronously save the dataset to disk (in the background)
  • lastsave: return the Unix timestamp of the last successful save to disk
  • shutdown: synchronously save the dataset to disk, then shut down the server

3) Remote server control

  • info: server information and statistics
  • monitor: stream back every received command in real time
  • slaveof: change replication settings
  • config: configure the Redis server at runtime

4) Commands on keys and values

  • exists(key): check whether a key exists
  • del(key): delete a key
  • type(key): return the type of the value
  • keys(pattern): return all keys matching the given pattern
  • randomkey: return a random key from the keyspace
  • rename(oldname, newname): rename a key
  • dbsize: return the number of keys in the current database
  • expire: set a key's time to live (seconds)
  • ttl: get a key's remaining time to live
  • select(index): switch to the database with the given index
  • move(key, dbindex): move a key from the current database to database dbindex
  • flushdb: delete all keys in the currently selected database
  • flushall: delete all keys in all databases
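A few of the key commands in action (the key name session:1 is illustrative):

```
127.0.0.1:6379> set session:1 abc
OK
127.0.0.1:6379> type session:1
string
127.0.0.1:6379> expire session:1 60
(integer) 1
127.0.0.1:6379> ttl session:1
(integer) 60
127.0.0.1:6379> del session:1
(integer) 1
```

ttl returns -1 for a key with no expiration and -2 for a key that does not exist.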

5)String

  • set(key, value): set the string value of key
  • get(key): get the string value of key
  • getset(key, value): set the string value of key and return its previous value
  • mget(key1, key2, …, keyN): return the values of multiple string keys
  • setnx(key, value): set key to value only if key does not already exist
  • setex(key, time, value): set key to value with an expiration of time seconds
  • mset(keyN, valueN): set multiple string keys in one batch
  • msetnx(keyN, valueN): set multiple string keys, but only if none of them already exist
  • incr(key): increment the value of key by 1
  • incrby(key, integer): increment the value of key by integer
  • decr(key): decrement the value of key by 1
  • decrby(key, integer): decrement the value of key by integer
  • append(key, value): append value to the string stored at key
  • substr(key, start, end): return a substring of the string stored at key
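The counter-style string commands can be sketched in a session like this (the key name is illustrative):

```
127.0.0.1:6379> set counter 10
OK
127.0.0.1:6379> incr counter
(integer) 11
127.0.0.1:6379> incrby counter 5
(integer) 16
127.0.0.1:6379> setnx counter 100
(integer) 0
127.0.0.1:6379> get counter
"16"
```

setnx returns 0 and changes nothing because counter already exists; incr/incrby fail with an error if the stored value is not an integer.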

6)List

  • rpush(key, value): append an element with value value to the tail of the list at key
  • lpush(key, value): prepend an element with value value to the head of the list at key
  • llen(key): return the length of the list at key
  • lrange(key, start, end): return the elements between start and end of the list at key
  • ltrim(key, start, end): trim the list at key to the given range
  • lindex(key, index): return the element at position index of the list at key
  • lset(key, index, value): set the element at position index of the list at key
  • lrem(key, count, value): remove count occurrences of elements equal to value from the list at key
  • lpop(key): return and remove the first element of the list at key
  • rpop(key): return and remove the last element of the list at key
  • blpop(key1, key2, … keyN, timeout): blocking version of lpop
  • brpop(key1, key2, … keyN, timeout): blocking version of rpop
  • rpoplpush(srckey, dstkey): return and remove the tail element of the list at srckey, and push it onto the head of the list at dstkey
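A quick list session (key and values are illustrative):

```
127.0.0.1:6379> rpush mylist a b c
(integer) 3
127.0.0.1:6379> lpush mylist z
(integer) 4
127.0.0.1:6379> lrange mylist 0 -1
1) "z"
2) "a"
3) "b"
4) "c"
127.0.0.1:6379> lpop mylist
"z"
```

An end index of -1 in lrange means the last element; blpop/brpop behave like lpop/rpop but block for up to timeout seconds waiting for an element to arrive.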

7)Set

  • sadd(key, member): add member to the set at key
  • srem(key, member): remove member from the set at key
  • spop(key): return and remove a random element from the set at key
  • smove(srckey, dstkey, member): move member from the set at srckey to the set at dstkey
  • scard(key): return the cardinality (number of elements) of the set at key
  • sismember(key, member): check whether member is an element of the set at key
  • sinter(key1, key2, … keyN): intersection of the given sets
  • sinterstore(dstkey, (keys)): intersection of the given sets, stored in the set at dstkey
  • sunion(key1, (keys)): union of the given sets
  • sunionstore(dstkey, (keys)): union of the given sets, stored in the set at dstkey
  • sdiff(key1, (keys)): difference of the given sets
  • sdiffstore(dstkey, (keys)): difference of the given sets, stored in the set at dstkey
  • smembers(key): return all elements of the set at key
  • srandmember(key): return a random element of the set at key
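Set operations in a session (keys are illustrative; note that the order of elements in set replies is unspecified):

```
127.0.0.1:6379> sadd s1 a b c
(integer) 3
127.0.0.1:6379> sadd s2 b c d
(integer) 3
127.0.0.1:6379> sinter s1 s2
1) "b"
2) "c"
127.0.0.1:6379> sdiff s1 s2
1) "a"
```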

8)Hash

  • hset(key, field, value): set field in the hash at key to value

  • hget(key, field): return the value of field in the hash at key

  • hmget(key, (fields)): return the values of the given fields in the hash at key

  • hmset(key, (fields)): set multiple fields in the hash at key

  • hincrby(key, field, integer): increment the value of field in the hash at key by integer

  • hexists(key, field): check whether field exists in the hash at key

  • hdel(key, field): delete field from the hash at key

  • hlen(key): return the number of fields in the hash at key

  • hkeys(key): return all field names in the hash at key

  • hvals(key): return all values in the hash at key

  • hgetall(key): return all fields and their values in the hash at key
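A short hash session (key and fields are illustrative; for small hashes the reply order usually matches insertion order, but it is not guaranteed):

```
127.0.0.1:6379> hset user:1 name dabin
(integer) 1
127.0.0.1:6379> hset user:1 age 20
(integer) 1
127.0.0.1:6379> hincrby user:1 age 1
(integer) 21
127.0.0.1:6379> hgetall user:1
1) "name"
2) "dabin"
3) "age"
4) "21"
```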

  • Full key scan
    keys: list all keys matching a pattern in one call

When the number of keys in Redis is large, keys is very expensive and should be avoided whenever possible

  • Incremental scan
    scan: iterate over keys incrementally
    SCAN cursor [MATCH pattern] [COUNT count]
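A typical scan loop feeds the cursor from each reply into the next call until the cursor comes back as 0 (the cursor values and key names below are illustrative; COUNT is only a hint for how much work each call does):

```
127.0.0.1:6379> scan 0 match user:* count 100
1) "176"
2) 1) "user:1"
   2) "user:42"
127.0.0.1:6379> scan 176 match user:* count 100
1) "0"
2) 1) "user:7"
```

Unlike keys, each scan call inspects only a bounded slice of the keyspace, so the server is never blocked for long; a key may occasionally be returned more than once, so callers should de-duplicate if that matters.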

Redis clients

  • Add the dependency
    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>2.7.1</version><!-- adjust the version to your environment -->
    </dependency>
  • Java client
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.Pipeline;
    import java.util.List;

    public static void main(String[] args) {
        // HOST and PORT are the Redis server's address, defined elsewhere
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(20);  // max connections in the pool
        jedisPoolConfig.setMaxIdle(10);   // max idle connections
        jedisPoolConfig.setMinIdle(5);    // min idle connections kept ready
        JedisPool jedisPool = new JedisPool(jedisPoolConfig, HOST, PORT);
        Jedis jedis = jedisPool.getResource();
        jedis.set("name", "xiaobin");
        System.out.println(jedis.get("name"));

        // Pipelining: send several commands at once without waiting for each
        // reply, cutting network round-trips; Redis buffers all the results
        // until every command in the batch has been processed
        Pipeline pl = jedis.pipelined();
        for (int i = 0; i < 10; i++) {
            pl.incr("pipelineKey");
            pl.set("dabin" + i, "dabin");
        }
        List<Object> results = pl.syncAndReturnAll();
        System.out.println(results);

        jedis.close();  // return the connection to the pool
    }

Redis cluster architectures

Master-replica

One master with multiple replicas: the master handles reads and writes, while replicas only serve reads
When a replica starts up, it does one full sync of the master's data. After that, whenever the master receives writes, it continuously forwards the write commands to its replicas over a long-lived connection, keeping master and replicas consistent
Drawback: if the master dies, a new master has to be promoted by hand; there is no automatic failover

Key configuration for a master-replica setup

replicaof 127.0.0.1 6379  # replicate from the Redis instance at local port 6379; before Redis 5.0 this directive was named slaveof
replica-read-only yes  # make the replica read-only

Sentinel

Sentinels monitor the cluster; when the master changes, they push the new master's address to clients
A client first contacts a sentinel, which tells it the current master; subsequent requests go straight to that master
Key configuration

sentinel monitor <master-redis-name> <master-redis-ip> <master-redis-port> <quorum>
quorum is the number of sentinels that must agree that a master is unreachable before it is considered failed
# for example:
sentinel monitor redismaster 192.168.1.1 6379 2  # monitor the master at 192.168.1.1:6379 under the name redismaster (the name clients use); at least 2 sentinels must agree before the master is marked as failed
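A minimal sentinel.conf sketch for one sentinel node (the addresses and timeouts here are assumptions; in practice run at least three sentinels on separate machines so a quorum is possible):

```
port 26379
sentinel monitor redismaster 192.168.1.1 6379 2
# how long the master must be unreachable before this sentinel considers it down
sentinel down-after-milliseconds redismaster 30000
# how long a failover may take before it is considered failed
sentinel failover-timeout redismaster 180000
```

Each sentinel is started with src/redis-sentinel sentinel.conf.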

Cluster

Key configuration

cluster-enabled yes  # enable cluster mode
cluster-config-file nodes-8033.conf  # file where this node stores its view of the cluster
cluster-node-timeout 10000
  • Create the cluster
    Since Redis 5.0 the cluster can be created with a single command (with --cluster-replicas 1 you need six addresses: three masters plus one replica each)
    redis-cli --cluster create --cluster-replicas 1 ip:port ip:port ip:port ... # 1 is the number of replicas per master

Data is sharded across the nodes, so the cluster scales horizontally
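Sharding is based on hash slots: the cluster has 16384 slots, each key maps to slot CRC16(key) mod 16384, and every master owns a subset of the slots. A sketch of creating a three-master cluster with one replica per master (the six local addresses are assumptions):

```
redis-cli --cluster create 127.0.0.1:8031 127.0.0.1:8032 127.0.0.1:8033 \
  127.0.0.1:8034 127.0.0.1:8035 127.0.0.1:8036 --cluster-replicas 1
```

Keys that must live on the same node (e.g. for multi-key operations) can share a {hash tag}: only the part inside the braces is hashed, so user:{42}:name and user:{42}:age land in the same slot.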

Persistence options

RDB

With RDB, Redis automatically persists the dataset according to the save policy configured in redis.conf, whenever one of the configured conditions is met. (By default it saves to dump.rdb and automatically reloads that file when Redis restarts; reportedly, loading a snapshot of ten million string keys, roughly 1 GB on disk, takes about 20-30 seconds.)
Unless specified otherwise, by default Redis will save the DB when any of these three conditions is met:

  * After 3600 seconds (an hour) if at least 1 key changed
  * After 300 seconds (5 minutes) if at least 100 keys changed
  * After 60 seconds if at least 10000 keys changed

Drawback:
By default Redis will stop accepting writes if RDB snapshots are enabled (at least one save point) and the latest background save failed. This makes the user aware (in a hard way) that data is not persisting on disk properly; otherwise chances are nobody would notice and some disaster would happen.

################################ SNAPSHOTTING  ################################

# Save the DB to disk.
#
# save <seconds> <changes>
#
# Redis will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# Snapshotting can be completely disabled with a single empty string argument
# as in following example:
#
# save ""
#
# Unless specified otherwise, by default Redis will save the DB:
#   * After 3600 seconds (an hour) if at least 1 key changed
#   * After 300 seconds (5 minutes) if at least 100 keys changed
#   * After 60 seconds if at least 10000 keys changed
#
# You can set these explicitly by uncommenting the three following lines.
#
# save 3600 1
# save 300 100
# save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# By default compression is enabled as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# Enables or disables full sanitation checks for ziplist and listpack etc when
# loading an RDB or RESTORE payload. This reduces the chances of a assertion or
# crash later on while processing commands.
# Options:
#   no         - Never perform full sanitation
#   yes        - Always perform full sanitation
#   clients    - Perform full sanitation only for user connections.
#                Excludes: RDB files, RESTORE commands received from the master
#                connection, and client connections which have the
#                skip-sanitize-payload ACL flag.
# The default should be 'clients' but since it currently affects cluster
# resharding via MIGRATE, it is temporarily set to 'no' by default.
#
# sanitize-dump-payload no

# The filename where to dump the DB
dbfilename dump.rdb

# Remove RDB files used by replication in instances without persistence
# enabled. By default this option is disabled, however there are environments
# where for regulations or other security concerns, RDB files persisted on
# disk by masters in order to feed replicas, or stored on disk by replicas
# in order to load them for the initial synchronization, should be deleted
# ASAP. Note that this option ONLY WORKS in instances that have both AOF
# and RDB persistence disabled, otherwise is completely ignored.
#
# An alternative (and sometimes better) way to obtain the same effect is
# to use diskless replication on both master and replicas instances. However
# in the case of replicas, diskless is not always an option.
rdb-del-sync-files no

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./


AOF
  • Enable AOF
    appendonly yes
  • fsync policies
    appendfsync always
    appendfsync everysec (the default)
    appendfsync no

With AOF enabled, every executed write command is appended to appendonly.aof. Commands are not written straight to the file on disk, though: they first land in the OS buffer, and the appendfsync policy above controls how often that buffer is flushed to the file. So under some conditions data loss is still possible, but AOF greatly reduces how much can be lost

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading, Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, then continues loading the AOF
# tail.
aof-use-rdb-preamble yes

Redis use cases

Reference links
Common Redis commands
Redis's 5 data types and data structures
