docker inspect
Inspect the internal details of a container; this is very important!
It shows mount-related information, network settings, and other details.
docker cp
3.3.1 Common Commands
- attach Attach to a running container # attach the current shell to a running container
- build Build an image from a Dockerfile # build a custom image from a Dockerfile
- commit Create a new image from a container's changes # commit the current container as a new image
- cp Copy files/folders from the container's filesystem to the host path # copy files or directories from a container to the host
- create Create a new container # create a new container (same as run, but does not start it)
- diff Inspect changes on a container's filesystem # view changes to a docker container's filesystem
- events Get real time events from the server # get real-time container events from the docker daemon
- exec Run a command in an existing container # run a command inside a running container
- export Stream the contents of a container as a tar archive # export a container's contents as a tar archive (counterpart of import)
- history Show the history of an image # show how an image was built, layer by layer
- images List images # list the images on the system
- import Create a new filesystem image from the contents of a tarball # create a new filesystem image from a tarball (counterpart of export)
- info Display system-wide information # display system-wide docker information
- inspect Return low-level information on a container # view detailed information about a container
- kill Kill a running container # kill the specified docker container
- load Load an image from a tar archive # load an image from a tar archive (counterpart of save)
- login Register or log in to the docker registry server # register with or log in to a docker registry server
- logout Log out from a Docker registry server # log out of the current Docker registry
- logs Fetch the logs of a container # print the logs of the current container
- port Look up the public-facing port that is NAT-ed to PRIVATE_PORT # show which container port a mapped host port corresponds to
- pause Pause all processes within a container # pause a container
- ps List containers # list containers
- pull Pull an image or a repository from the docker registry server # pull the specified image or repository from a registry
- push Push an image or a repository to the docker registry server # push the specified image or repository to a registry
- restart Restart a running container # restart a running container
- rm Remove one or more containers # remove one or more containers
- rmi Remove one or more images # remove one or more images (only removable when no container uses the image; otherwise remove the containers first, or force with -f)
- run Run a command in a new container # create a new container and run a command in it
- save Save an image to a tar archive # save an image as a tar archive (counterpart of load)
- search Search for an image on Docker Hub # search for an image on Docker Hub
- start Start a stopped container # start a container
- stop Stop a running container # stop a container
- tag Tag an image into a repository # tag an image in a repository
- top Look up the running processes of a container # view the processes running inside a container
- unpause Unpause a paused container # resume a paused container
- version Show the docker version information # show the docker version
- wait Block until a container stops, then print its exit code # block until a container stops and capture its exit status
Note that the password for the container image registry service is not the same as the Alibaba Cloud account password.
Upload succeeded.
docker exec -it 1f1ed5798baa /bin/bash
docker commit -m="ifconfig cmd add" -a="pyy" 1f1ed5798baa pyyubuntu:1.0
[root@VM-16-8-centos ~]# docker run -d -p 5000:5000 -v /myregistry:/tmp/registry --privileged=true registry
[root@VM-16-8-centos ~]# curl -XGET http://124.221.228.148:5000/v2/_catalog
{"repositories":["pyyubuntu"]}
===================================================================
Images
An image is a lightweight, executable, standalone software package that contains everything needed to run a piece of software. The application and its configuration dependencies are packaged into a deliverable runtime environment (code, runtime libraries, environment variables, configuration files, and so on); that packaged runtime environment is the image file.
A Docker container instance can only be created from such an image file (similar to new-ing an object from a class in Java).
Layered images
When we pull an image, it appears to be downloaded layer by layer. Why is that?
UnionFS (Union File System): UnionFS is a layered, lightweight, high-performance file system. It supports stacking modifications to the file system layer by layer as successive commits, and it can mount different directories under a single virtual filesystem (unite several directories into a single virtual filesystem). UnionFS is the foundation of Docker images. Images can inherit through layering: starting from a base image (one without a parent image), all kinds of concrete application images can be built.
Key property: multiple file systems are loaded at the same time, but from the outside only one file system is visible. Union mounting stacks the layers on top of each other, so the final file system contains all the underlying files and directories.
How Docker images are loaded:
A Docker image is actually composed of file systems stacked layer by layer; this layered file system is UnionFS. bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel. When Linux starts up it loads the bootfs file system, and ==the bottom layer of a Docker image is this boot file system, bootfs.== This layer is the same as in a typical Linux/Unix system, containing the boot loader and the kernel. Once booting finishes, the whole kernel is in memory; at that point ownership of memory has been handed from bootfs to the kernel, and the system unmounts bootfs.
rootfs (root file system) sits on top of bootfs and contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin, /etc. The rootfs is what differs between operating system distributions, such as Ubuntu, CentOS, and so on.
For a stripped-down OS, the rootfs can be very small: it only needs to include the most basic commands, tools, and libraries, because the underlying layer uses the host's kernel directly and the image only has to supply the rootfs. This also shows that across different Linux distributions the bootfs is essentially the same while the rootfs differs, so different distributions can share the same bootfs.
The biggest benefit of image layering is resource sharing: it makes copying and migration easy, in other words reuse.
For example, if multiple images are built from the same base image, the Docker host only needs to keep one copy of the base image on disk;
likewise only one copy of the base image needs to be loaded into memory to serve all containers. And every layer of an image can be shared.
I have been through Docker several times now, and this part really is important!
Docker image layers are all read-only; the container layer is writable.
When a container starts, a new writable layer is loaded on top of the image. This layer is usually called the "container layer", and everything underneath it is called "image layers".
All changes made to a container (adding, deleting, or modifying files) happen only in the container layer. Only the container layer is writable; all image layers beneath it are read-only.
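The copy-on-write split between image layers and the container layer can be observed with `docker diff`, which lists what changed in the writable layer. A minimal sketch (the container name `layer-demo` is made up; the commands are collected into a script here since running them needs a live Docker daemon):

```shell
# Sketch: observe the writable container layer with `docker diff`.
# Assumes a local `ubuntu` image and a running Docker daemon.
cat > show_container_layer.sh <<'EOF'
#!/bin/sh
# Start a container and create a file inside it; the change lands in the container layer.
docker run -d --name layer-demo ubuntu sleep 300
docker exec layer-demo touch /tmp/new_file
# `docker diff` lists additions (A) and changes (C) relative to the read-only image layers.
docker diff layer-demo
# Removing the container discards the container layer; the image itself is untouched.
docker rm -f layer-demo
EOF
chmod +x show_container_layer.sh
```

The image layers never change during any of this, which is exactly why many containers can share them.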
New images
docker commit submits a copy of a container so that it becomes a new image.
docker commit -m="commit message" -a="author" containerID targetImageName:[tag]
Installing vim on ubuntu
apt-get update
apt-get -y install vim
Publishing a local image to Alibaba Cloud
- Log in
$ docker login --username=用户名 registry.cn-hangzhou.aliyuncs.com
The login username is the full Alibaba Cloud account name; the password is the one set when the service was enabled.
- Push
docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/pengyuyan_ubuntu/pengyuyan_repository:[镜像版本号]
docker push registry.cn-hangzhou.aliyuncs.com/pengyuyan_ubuntu/pengyuyan_repository:[镜像版本号]
Pulling the Alibaba Cloud image back to the local machine
docker pull registry.cn-hangzhou.aliyuncs.com/pengyuyan_ubuntu/pengyuyan_repository:[镜像版本号]
Publishing a local image to a private registry
Private registries:
1 The official Docker Hub (https://hub.docker.com/) is very slow to reach from mainland China and is trending toward being displaced there by Alibaba Cloud, so it is not the mainstream choice.
2 Public registries like Docker Hub and Alibaba Cloud are not always suitable: a company handling confidential work cannot publish its images to the public internet, so it needs a local private registry for the team, building images for internal projects.
Docker Registry is the official tool for building a private image registry.
- Install and run the private registry
docker pull registry
docker run -d -p 5000:5000 -v /myregistry/:/tmp/registry --privileged=true registry
By default the registry stores its data in the container's /var/lib/registry directory; it is recommended to map it out with a container volume, which makes working with it from the host easier.
- Add a command to the container
docker run -it ubuntu /bin/bash
apt-get update
apt-get install net-tools
Template:
docker commit -m="commit message" -a="author" containerID targetImageName:[tag]
Command (remember to run it outside the container):
docker commit -m="ifconfig cmd add" -a="pyy" a69d7c825c4f pyyubuntu:1.2
- Tag the image and adjust the configuration
docker tag pyyubuntu:1.2 192.168.100.10:5000/pyyubuntu:1.2
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://pengyuyan227.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.100.10:5000"]
}
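After editing /etc/docker/daemon.json, the Docker daemon has to be restarted before the insecure-registry setting takes effect. A hedged sketch using the same addresses as above (the copy and restart commands are left commented out, since they need root and a running daemon; the validation step only checks that the JSON is well-formed):

```shell
# Write the daemon.json content used above and verify it parses as valid JSON.
cat > daemon.json <<'EOF'
{
  "registry-mirrors": ["https://pengyuyan227.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.100.10:5000"]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json is valid JSON"
# sudo cp daemon.json /etc/docker/daemon.json
# sudo systemctl daemon-reload && sudo systemctl restart docker
```

A malformed daemon.json prevents the Docker daemon from starting at all, so validating before restarting is worth the extra step.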
- Push to the private registry
docker push 192.168.100.10:5000/pyyubuntu:1.2
Docker's image layering supports creating new images by extending existing ones, much like a Java class extending a base class and then adding what it needs.
A new image is generated layer by layer on top of the base image; each piece of software installed adds another layer on top of the existing image.
====================================================================
--privileged=true
If mounting a host directory into Docker fails with cannot open directory .: Permission denied,
the fix is to add the --privileged=true parameter after the mount option.
In one sentence: this is a bit like the rdb and aof files in Redis.
Volumes save data from inside a docker container onto the host's disk.
Run a container instance with volume storage:
docker run -it --privileged=true -v /absolute/path/on/host:/path/in/container imageName
Packaging the application and its runtime environment into an image and running it as a container works, but we want the data to be persistent.
If the data a Docker container produces is not backed up, it is gone once the container instance is deleted.
To persist data in docker, we use volumes.
Features:
1: Volumes can share or reuse data between containers
2: Changes in a volume take effect directly and in real time, which is great
3: Changes in a volume are not included when the image is updated
4: A volume's lifetime lasts until no container uses it any more
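Building on the `-v` syntax above, a volume can also be mounted read-only by appending `:ro` to the mapping. A sketch with made-up container names, written out as a script here rather than executed, since it needs a running Docker daemon:

```shell
# Sketch: read-write (default) vs read-only volume mappings.
cat > volume_demo.sh <<'EOF'
#!/bin/sh
# Read-write volume (default): the container can create files the host sees.
docker run -it --privileged=true -v /tmp/host_data:/tmp/docker_data --name u1 ubuntu
# Read-only volume: the container can read /tmp/docker_data, but writes inside it fail.
docker run -it --privileged=true -v /tmp/host_data:/tmp/docker_data:ro --name u1-ro ubuntu
EOF
chmod +x volume_demo.sh
```

The host can still write to the directory in both cases; `:ro` only restricts the container side.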
[root@VM-16-8-centos ~]# docker run -it --privileged=true -v /tmp/host_data:/tmp/docker_data --name=u1 ubuntu
root@2e431ba4f3bf:/# cd /tmp/docker_data/
root@2e431ba4f3bf:/tmp/docker_data# mkdir test_docker.txt
[root@VM-16-8-centos ~]# cd /tmp/host_data/
[root@VM-16-8-centos host_data]# ll
total 4
drwxr-xr-x 2 root root 4096 Apr 3 16:10 test_docker.txt
[root@VM-16-8-centos host_data]# mkdir test_host.txt
root@2e431ba4f3bf:/tmp/docker_data# ll
total 16
drwxr-xr-x 4 root root 4096 Apr 3 08:13 ./
drwxrwxrwt 1 root root 4096 Apr 3 08:10 ../
drwxr-xr-x 2 root root 4096 Apr 3 08:10 test_docker.txt/
drwxr-xr-x 2 root root 4096 Apr 3 08:13 test_host.txt/
As you can see, the contents of the container volume and of the directory mounted on the host are identical.
The mount also shows up when you inspect the container's details:
docker inspect containerID
--volumes-from nameOfContainerToInheritFrom
[root@VM-16-8-centos ~]# docker run -it --privileged=true --volumes-from u1 --name u2 ubuntu
root@de0f937528ac:/# cd /tmp
root@de0f937528ac:/tmp# cd docker_data/
root@de0f937528ac:/tmp/docker_data# ll
total 16
drwxr-xr-x 4 root root 4096 Apr 3 08:13 ./
drwxrwxrwt 1 root root 4096 Apr 3 08:26 ../
drwxr-xr-x 2 root root 4096 Apr 3 08:10 test_docker.txt/
drwxr-xr-x 2 root root 4096 Apr 3 08:13 test_host.txt/
=======================================================================
[root@VM-16-8-centos ~]# docker pull tomcat
Using default tag: latest
latest: Pulling from library/tomcat
dbba69284b27: Pull complete
9baf437a1bad: Pull complete
6ade5c59e324: Pull complete
b19a994f6d4c: Pull complete
43c0aceedb57: Pull complete
24e7c71ec633: Pull complete
612cf131e488: Pull complete
dc655e69dd90: Pull complete
efe57b7441f6: Pull complete
8db51a0119f4: Pull complete
Digest: sha256:263f93ac29cb2dbba4275a4e647b448cb39a66334a6340b94da8bf13bde770aa
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@VM-16-8-centos ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
124.221.228.148:5000/pyyubuntu 1.1 138c010d2c99 About an hour ago 109MB
ubuntu 1.0 138c010d2c99 About an hour ago 109MB
tomcat latest b00440a36b99 37 hours ago 680MB
registry latest d3241e050fc9 4 days ago 24.2MB
ubuntu latest ff0fea8310f3 2 weeks ago 72.8MB
In newer tomcat images the welcome page is no longer served from webapps;
swap the webapps.dist directory in as webapps.
Of course, we can also skip the change entirely and simply pull tomcat8 instead.
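The swap can be done from the host with docker exec. A sketch, assuming a running tomcat container named `funny_bose` (the name that appears in the `docker ps` output in this section) and the stock image path /usr/local/tomcat; the commands are collected into a script here since they need a running container:

```shell
# Sketch: swap webapps.dist into place so the welcome page is served again.
cat > fix_tomcat_webapps.sh <<'EOF'
#!/bin/sh
docker exec funny_bose sh -c \
  'cd /usr/local/tomcat && rm -rf webapps && mv webapps.dist webapps'
EOF
chmod +x fix_tomcat_webapps.sh
```

After running it, http://host:8080 should show the Tomcat welcome page without restarting the container.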
[root@VM-16-8-centos ~]# docker run --name test-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=a -d mysql:5.7
b0fbfe45fce1ef90b4caf946efacbef0e50a425a25dec1d8e15902244e43747b
[root@VM-16-8-centos ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0fbfe45fce1 mysql:5.7 "docker-entrypoint.s…" 8 seconds ago Up 7 seconds 3306/tcp, 33060/tcp test-mysql
ce89351d51ec tomcat "catalina.sh run" 16 minutes ago Up 16 minutes 0.0.0.0:8080->8080/tcp funny_bose
de0f937528ac ubuntu "bash" 29 minutes ago Up 29 minutes u2
2e431ba4f3bf ubuntu "bash" 45 minutes ago Up 45 minutes u1
9dabfdf1b57d registry "/entrypoint.sh /etc…" About an hour ago Up About an hour 0.0.0.0:5000->5000/tcp sharp_brown
1f1ed5798baa ubuntu "/bin/bash" 6 hours ago Up 47 minutes vigorous_dewdney
[root@VM-16-8-centos ~]# docker exec -it b0fbfe45fce1 /bin/bash
root@b0fbfe45fce1:/# mysql -uroot -pa
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.37 MySQL Community Server (GPL)
Copyright © 2000, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database test_mysql;
Query OK, 1 row affected (0.00 sec)
mysql> use test_mysql;
Database changed
mysql> create table docker_mysql (id int,name varchar(22));
Query OK, 0 rows affected (0.02 sec)
mysql> insert into docker_mysql values(1,'zs'),(2,'ls');
Query OK, 2 rows affected (0.01 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> select count(*) Sum from docker_mysql;
+-----+
| Sum |
+-----+
|   2 |
+-----+
1 row in set (0.00 sec)
mysql> SHOW VARIABLES LIKE 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
Test the connection from navicat.
6.2.1 Solving the encoding problem
Inserting Chinese text raises an error.
Write a my.cnf file on the host:
[root@VM-16-8-centos conf]# cat my.cnf
[client]
default-character-set=utf8
[mysqld]
collation_server=utf8_general_ci
character_set_server=utf8
[root@VM-16-8-centos conf]# pwd
/pyy/mysql/conf
Restart the mysql container instance, then enter it again and check the character encoding.
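For the my.cnf above to be picked up, the config directory has to be mapped into the container when it is started. A hedged sketch of the run command (the /pyy/mysql/conf path is the one shown above; the data-directory mapping is an extra assumption added so the databases survive container removal; written to a script since it needs a running Docker daemon):

```shell
# Sketch: run mysql:5.7 with the host config dir mapped over /etc/mysql/conf.d.
cat > run_mysql_utf8.sh <<'EOF'
#!/bin/sh
docker run -d --name test-mysql -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=a \
  -v /pyy/mysql/conf:/etc/mysql/conf.d \
  -v /pyy/mysql/data:/var/lib/mysql \
  mysql:5.7
EOF
chmod +x run_mysql_utf8.sh
```

The official mysql image reads every .cnf file under /etc/mysql/conf.d, so the utf8 settings apply on the next start.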
Copy a known-good redis configuration file first.
6.3.1 The Redis configuration file
Enable redis authentication (optional):
requirepass 123
Allow external connections to redis (required):
comment out # bind 127.0.0.1
daemonize no
Either comment out daemonize yes or set daemonize no, because that setting conflicts with the -d parameter of docker run and would keep the container from starting.
Enable redis data persistence with appendonly yes (optional)
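With the settings above saved to /app/redis/redis.conf, the container can be started against that file. A sketch (the volume paths match the `cat` below; handing redis-server the config path is how the official image is normally told to use a custom file; written to a script since it needs a running Docker daemon):

```shell
# Sketch: run redis with the host config mapped in and used at startup.
cat > run_redis.sh <<'EOF'
#!/bin/sh
docker run -d --name myredis -p 6379:6379 --privileged=true \
  -v /app/redis/redis.conf:/etc/redis/redis.conf \
  -v /app/redis/data:/data \
  redis redis-server /etc/redis/redis.conf
EOF
chmod +x run_redis.sh
```

Because daemonize is no in the config, the redis-server process stays in the foreground and `-d` keeps the container alive.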
[root@VM-16-8-centos ~]# cat /app/redis/redis.conf
# Redis configuration file example.
####### Main configuration start #######
# Comment out bind 127.0.0.1 so redis can be reached from outside
#bind 127.0.0.1
# Port number
port 6379
# Set a password for redis
requirepass redis123
## Redis persistence; the default is no
appendonly yes
# With protected-mode on, you must configure a bind ip or set an access password
# With protected-mode off, external networks can connect directly
protected-mode no
# Whether to enable cluster mode
#cluster-enabled no
# The cluster configuration file; generated automatically
#cluster-config-file nodes.conf
# Cluster node timeout
#cluster-node-timeout 5000
# Start as a daemon
daemonize no
# Avoids "connection reset by remote host" errors; the default is 300
tcp-keepalive 300
####### Main configuration end #######
timeout 0
tcp-backlog 511
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
syslog-enabled no
# Specify the syslog identity.
syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes
################################ SNAPSHOTTING ################################
# Save the DB on disk:
#   save <seconds> <changes>
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
# In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
# Note: you can disable saving completely by commenting out all "save" lines.
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#   save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
# If the background saving process will start working again Redis will
# automatically allow writes again.
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# Remove RDB files used by replication in instances without persistence
# enabled. By default this option is disabled, however there are environments
# where for regulations or other security concerns, RDB files persisted on
# disk by masters in order to feed replicas, or stored on disk by replicas
# in order to load them for the initial synchronization, should be deleted
# ASAP. Note that this option ONLY WORKS in instances that have both AOF
# and RDB persistence disabled, otherwise is completely ignored.
# An alternative (and sometimes better) way to obtain the same effect is
# to use diskless replication on both master and replicas instances. However
# in the case of replicas, diskless is not always an option.
rdb-del-sync-files no
# The working directory.
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# The Append Only File will also be created inside this directory.
# Note that you must specify a directory here, not a file name.
dir ./
# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
# Since Redis 2.6 by default replicas are read-only.
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple
# replicas will arrive and the transfer can be parallelized.
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the
# server waits a delay in order to let more replicas arrive.
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# In many cases the disk is slower than the network, and storing and loading
# the RDB file may increase replication time (and even increase the master's
# Copy on Write memory and slave buffers).
# However, parsing the RDB file directly from the socket may mean that we have
# to flush the contents of the current database before the full rdb was
# received. For this reason we have the following options:
# "disabled"    - Don't use diskless load (store the rdb file to the disk first)
# "on-empty-db" - Use diskless load only when it is completely safe.
# "swapdb"      - Keep a copy of the current db contents in RAM while parsing
#                 the data directly from the socket. note that this requires
#                 sufficient memory, if you don't have it, you risk an OOM kill.
repl-diskless-load disabled
# Disable TCP_NODELAY on the replica socket after SYNC?
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# The replica priority is an integer number published by Redis in the INFO
# output. It is used by Redis Sentinel in order to select a replica to promote
# into a master if the master is no longer working correctly.
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel
# will pick the one with priority 10, that is the lowest.
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
# By default the priority is 100.
replica-priority 100
# ACL LOG
# The ACL Log tracks failed commands and authentication events associated
# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked
# by ACLs. The ACL Log is stored in memory. You can reclaim memory with
# ACL LOG RESET. Define the maximum entry length of the ACL Log below.
acllog-max-len 128
# Using an external ACL file
# Instead of configuring users here in this file, it is possible to use
# a stand-alone file just listing users. The two methods cannot be mixed:
# if you configure users here and at the same time you activate the external
# ACL file, the server will refuse to start.
# The format of the external ACL user file is exactly the same as the
# format that is used inside redis.conf to describe users.
# aclfile /etc/redis/users.acl
# Command renaming (DEPRECATED).
# ------------------------------------------------------------------------
# WARNING: avoid using this option if possible. Instead use ACLs to remove
# commands from the default user, and put them only in some admin user you
# create for administrative purposes.
# ------------------------------------------------------------------------
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
# Example:
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# It is also possible to completely kill a command by renaming it into
# an empty string:
# rename-command CONFIG ""
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
################################### CLIENTS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# IMPORTANT: When Redis Cluster is used, the max number of connections is also
# shared with the cluster bus: every node in the cluster will use two
# connections, one incoming and another outgoing. It is important to size the
# limit accordingly in case of very large clusters.
maxclients 10000
############################## MEMORY MANAGEMENT ################################
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select one from the following behaviors:
# volatile-lru -> Evict using approximated LRU, only keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key having an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
# LRU means Least Recently Used
# LFU means Least Frequently Used
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
# The default is:
maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
maxmemory-samples 5
# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica
# to have a different memory setting, and you are sure all the writes performed
# to the replica are idempotent, then you may change this default (but be sure
# to understand what you are doing).
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory
# and so forth). So make sure you monitor your replicas and make sure they
# have enough memory to never hit a real out-of-memory condition before the
# master hits the configured maxmemory setting.
replica-ignore-maxmemory yes
# Redis reclaims expired keys in two ways: upon access when those keys are
# found to be expired, and also in background, in what is called the
# "active expire key". The key space is slowly and interactively scanned
# looking for expired keys to reclaim, so that it is possible to free memory
# of keys that are expired and will never be accessed again in a short time.
# The default effort of the expire cycle will try to avoid having more than
# ten percent of expired keys still in memory, and will try to avoid consuming
# more than 25% of total memory and to add latency to the system. However
# it is possible to increase the expire "effort" that is normally set to
# "1", to a greater value, up to the value "10". At its maximum value the
# system will use more CPU, longer cycles (and technically may introduce
# more latency), and will tolerate less already expired keys still present
# in the system. It's a tradeoff between memory, CPU and latency.
active-expire-effort 1
############################# LAZY FREEING ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives.
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
# It is also possible, for the case when to replace the user code DEL calls
# with UNLINK calls is not easy, to modify the default behavior of the DEL
# command to act exactly like UNLINK, using the following configuration
# directive:
lazyfree-lazy-user-del no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# Redis supports three different modes:
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb