2021/02/04: updated the Redis installation steps
Software development projects often need a search engine, log collection, and similar services. This guide starts from the Alibaba Cloud public image Ubuntu 20.04 and deploys apt-fast, Java, Axel, NodeJS, PostgreSQL, Redis, and ELK (Elasticsearch, Logstash, Kibana, all used out of the box), then configures each service to start on boot. The Alibaba Cloud Ubuntu 20.04 image ships with Python 3.
Contents
Note: back up every file before modifying it.
Software is installed under /usr/local; log files live under /var.
Service versions
Software | Version | Version command |
---|---|---|
apt-fast | 2.0.3 (amd64) | apt-fast -v |
Java | OpenJDK 14.0.1 | java --version |
Axel | 2.17.5 (linux-gnu) | axel -V |
Node.js | 14.7.0 | node -v |
PostgreSQL | 12.3 | psql -V |
Redis | 6.0.6 | redis-server -v |
Elasticsearch | 7.8.0 | |
Elasticsearch-head | 5.0.0 | |
Elasticsearch-analysis-ik | 7.8.0 | |
Logstash | 7.8.0 | |
Kibana | 7.8.0 | |
Installation
Create the directories
# Create the environment directory
mkdir /usr/local/environment
# Enter the environment directory
cd /usr/local/environment
# Create the per-service directories
mkdir redis elk elk/elasticsearch elk/logstash elk/kibana
# Grant read/write permissions (777 is very permissive; tighten as needed)
chmod 777 /usr/local/environment
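The directory creation above can be made idempotent with `mkdir -p`, which creates missing parents and is safe to re-run. A minimal sketch; `ENV_ROOT` here defaults to a throwaway path under /tmp so it runs without root, and would be /usr/local/environment in a real deployment:

```shell
#!/bin/sh
# Demo root; use /usr/local/environment for a real deployment
ENV_ROOT="${ENV_ROOT:-/tmp/environment-demo}"

# -p creates missing parents and does nothing if the directory already exists
mkdir -p "$ENV_ROOT/redis" \
         "$ENV_ROOT/elk/elasticsearch" \
         "$ENV_ROOT/elk/logstash" \
         "$ENV_ROOT/elk/kibana"

ls "$ENV_ROOT/elk"
```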
Refresh and upgrade the installed packages
sudo apt-get update && sudo apt-get upgrade
Install apt-fast
sudo apt install software-properties-common
sudo add-apt-repository ppa:apt-fast/stable
sudo apt-get -y install apt-fast
# During the interactive setup: choose apt-get, set the maximum number of connections to 16, and answer no
Install Java
sudo apt install openjdk-14-jdk-headless
Install Axel
sudo apt-get install axel
Install NodeJS
# Installing npm pulls in nodejs as a dependency
sudo apt install npm
Install PostgreSQL
# Create the file repository configuration:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
# Import the repository signing key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Update the package lists
# Install the latest version of PostgreSQL.
# If you want a specific version, use 'postgresql-12' or similar instead of 'postgresql':
sudo apt-get update && sudo apt-get install postgresql
# If the install above is too slow, use apt-fast instead
sudo apt-fast update
sudo apt-fast install postgresql
# npm install is slow against the default registry; switch to the Taobao mirror
npm config set registry http://registry.npm.taobao.org
Install Redis
# Enter the redis install directory
cd /usr/local/environment/redis
# Download the redis tarball
wget http://download.redis.io/releases/redis-6.0.6.tar.gz
# Extract it (substitute the file name you downloaded)
tar -xvf redis-6.0.6.tar.gz
# Create a symlink (a convenience, not required; without it, use redis-6.0.6 wherever the redis path appears below)
# ln -s <original-dir> <link-name>
ln -s redis-6.0.6 redis
# Enter the redis directory and compile the extracted sources
cd redis
make
# Enter the src directory and install Redis
# (Note: versions 6.0.10 and later do not need the following two commands)
cd src
make install
The redis tarball download link comes from the official Redis website.
Install ELK
Prepare the packages (one approach)
# Enter the ELK directory
cd /usr/local/environment/elk
# Upload the downloaded packages to the server with a tool such as FileZilla or FinalShell
Official download sources:
Elasticsearch, elasticsearch-analysis-ik, elasticsearch-head, Logstash, Kibana
Baidu Netdisk:
Link: Baidu Netdisk, extraction code: kdi9
PS: 7-Zip can repack the files (after unzipping the ZIP) into a .tar.gz archive
Place the [elasticsearch, elasticsearch-analysis-ik, elasticsearch-head], [logstash], and [kibana] archives into the elasticsearch, logstash, and kibana directories respectively
Install Elasticsearch
# Enter the elasticsearch directory
cd /usr/local/environment/elk/elasticsearch
# Extract the packages
tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz
tar -zxvf elasticsearch-head-5.0.0.tar.gz
tar -zxvf elasticsearch-analysis-ik-7.8.0.tar.gz
# Move the IK analyzer into elasticsearch's plugins directory
mv elasticsearch-analysis-ik-7.8.0 ./elasticsearch-7.8.0/plugins
# Enter the elasticsearch-head directory and install its npm dependencies
cd elasticsearch-head-5.0.0
npm install
Install Logstash
# Enter the logstash directory
cd /usr/local/environment/elk/logstash
# Extract the package
tar -zxvf logstash-7.8.0.tar.gz
Install Kibana
# Enter the kibana directory
cd /usr/local/environment/elk/kibana
# Extract the package
tar -zxvf kibana-7.8.0-linux-x86_64.tar.gz
Configuration
PostgreSQL
Create a user
# Switch to the postgres system user
sudo su - postgres
# Connect to the database (the prompt changes to postgres=#)
psql
# Change the password
\password
Edit postgresql.conf so that all IPs may connect
listen_addresses = '*'
Edit the authentication file pg_hba.conf: add the line below and change every "peer" to "md5" (password authentication)
# IPv4 local connections:
host all all 0.0.0.0/0 md5
(On this machine the config files are under /etc/postgresql/12/main)
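The peer-to-md5 switch can also be scripted. A sketch using sed, run here against a throwaway copy so it is safe to execute anywhere; point PG_HBA at the real file (e.g. /etc/postgresql/12/main/pg_hba.conf) for an actual change, and keep the backup it makes:

```shell
#!/bin/sh
# Demo copy; point this at the real pg_hba.conf in practice
PG_HBA=/tmp/pg_hba.demo.conf
cat > "$PG_HBA" <<'EOF'
local   all             postgres                                peer
host    all             all             127.0.0.1/32            md5
EOF

cp "$PG_HBA" "$PG_HBA.bak"        # keep a backup
sed -i 's/peer$/md5/' "$PG_HBA"   # switch peer auth to password (md5) auth
# Append the rule allowing password logins from any IPv4 address
echo 'host    all             all             0.0.0.0/0               md5' >> "$PG_HBA"

cat "$PG_HBA"
```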
Redis
# Create a redis folder under /var/log to hold the redis log files
mkdir /var/log/redis
# Enter the redis install directory and create the conf and data folders
cd /usr/local/environment/redis/redis
mkdir data conf
# Enter the conf directory and create a config file named redis-<port>.conf
# e.g. redis-6379.conf
cd conf
touch redis-6379.conf
(Apart from the commented lines, everything below matches the defaults; see the redis.conf shipped in the redis directory)
# Bind address; adjust to your needs
# bind 127.0.0.1
# Disable protected mode to allow remote access
protected-mode no
# Listening port
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
# Run as a daemon: redis runs as a background service and no longer logs to the terminal
daemonize yes
supervised no
pidfile /var/run/redis-6379.pid
loglevel notice
# Log file; use an absolute path to avoid surprises
logfile "/var/log/redis/6379.log"
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
# Working directory for this instance (persistence files, and with this layout the logs too); use an absolute path
dir /var/log/redis
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
# Eviction policy once maxmemory is reached
maxmemory-policy noeviction
# 1. volatile-lru: LRU-evict only keys that have an expire set
# 2. allkeys-lru: LRU-evict any key
# 3. volatile-random: randomly evict keys that have an expire set
# 4. allkeys-random: randomly evict any key
# 5. volatile-ttl: evict the keys closest to expiry
# 6. noeviction: evict nothing and return an error on writes (the default)
Elasticsearch
# Enter the elasticsearch install directory
cd /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0
# Create the data and logs folders for elasticsearch data and logs
mkdir data logs
# Enter the elasticsearch config directory
cd /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0/config
# Edit elasticsearch.yml in this directory: append the settings below (or uncomment and edit the existing lines)
# Cluster name
cluster.name: SUSTech-Town-ES
# Node name
node.name: node-st
# Data and log directories (adjust if your install path differs)
path.data: /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0/data
path.logs: /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0/logs
# Bind address; 0.0.0.0 accepts connections from any IP
network.host: 0.0.0.0
# Port
http.port: 9200
# Initial master-eligible nodes used to bootstrap the cluster
cluster.initial_master_nodes: ["node-st"]
# Allow cross-origin requests (needed by elasticsearch-head)
http.cors.enabled: true
http.cors.allow-origin: "*"
Raise the system process and memory limits, otherwise es will fail to start
In /etc/security/***limits.conf***, update/add the following
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096
In /etc/***sysctl.conf***, add the following
# elasticsearch
vm.max_map_count=262145
Save, then apply with
sysctl -p
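Whether the setting took effect can be verified by reading the live kernel value back and comparing it with the 262144 minimum elasticsearch requires; a small sketch (falls back to 0 if the value cannot be read on this system):

```shell
#!/bin/sh
# Read the live value (via sysctl, or /proc as a fallback)
current=$(sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)
required=262144

if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count=$current - ok"
else
    echo "vm.max_map_count=$current - too low, needs at least $required"
fi
```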
elasticsearch refuses to start as root, so create a dedicated user
# Create the elk group
groupadd elk
# Create user elk in group elk (-g elk); note that useradd's -p expects an already-encrypted
# password, so set the password afterwards with: passwd elk
useradd elk -g elk
# Change the owner and group of the elk folder and everything in it to elk:elk
chown -R elk:elk /usr/local/environment/elk
Kibana
Edit ***kibana.yml*** under /usr/local/environment/elk/kibana/kibana-7.8.0-linux-x86_64/config
server.port: 5601
# Allow access from any IP
server.host: "0.0.0.0"
server.name: "SUSTechTown"
elasticsearch.hosts: ["http://localhost:9200"]
# Enable if needed
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
pid.file: /var/run/kibana.pid
# Use the Chinese locale
i18n.locale: "zh-CN"
# Encryption keys; note: each value must be at least 32 characters, or startup fails with an error
xpack.encryptedSavedObjects.encryptionKey: SUSTechTownencryptedSavedObjects0123456789
xpack.security.encryptionKey: SUSTechTownencryptionKeysecurity01234567890
xpack.reporting.encryptionKey: SUSTechTownencryptionKeyreporting0123456789
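One way to generate values that clear the 32-character minimum is openssl (assumed to be installed; 20 random bytes encode to 40 hex characters):

```shell
#!/bin/sh
# 20 random bytes -> 40 hex characters, comfortably above the 32-char minimum
KEY=$(openssl rand -hex 20)
echo "xpack.security.encryptionKey: $KEY"
```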
# Grant read/write permissions on the environment tree and all subdirectories
chmod -R 777 /usr/local/environment
Startup
PostgreSQL
Command-line parameters:
- -U username: user name, default postgres
- -d dbname: database to connect to, default postgres; with only -U given, the database defaults to the user name
- -h hostname: host name, default localhost
- -p port: port, default 5432
Redis
# Enter the redis config directory
cd /usr/local/environment/redis/redis/conf
# Start the redis service with the given config file
redis-server redis-6379.conf
PS: if the log /var/log/redis/6379.log contains a WARNING, follow the instructions it gives
(A fresh install without the tweaks above does log WARNINGs; the suggested fixes are self-explanatory)
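Once the server is up, `redis-cli -p 6379 ping` should answer PONG. A tiny helper that classifies the reply, demonstrated on canned strings since only the reply text matters:

```shell
#!/bin/sh
# Map a redis-cli ping reply to an up/down status
redis_state() {
    if [ "$1" = "PONG" ]; then echo up; else echo down; fi
}

# Real usage: redis_state "$(redis-cli -p 6379 ping 2>/dev/null)"
redis_state "PONG"
redis_state ""
```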
Elasticsearch
# Enter the elasticsearch install directory
cd /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0
# Switch to the elk user
su elk
# Start elasticsearch
./bin/elasticsearch
# or start it in the background
./bin/elasticsearch -d
# Start elasticsearch-head (elasticsearch must be running first)
cd /usr/local/environment/elk/elasticsearch/elasticsearch-head-5.0.0
npm run start
# or start it in the background
nohup npm run start &
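A quick way to confirm elasticsearch is serving requests is GET /_cluster/health. The sketch below extracts the status field from such a reply with grep and cut, shown against a canned JSON string; in practice feed it the output of `curl -s http://localhost:9200/_cluster/health`:

```shell
#!/bin/sh
# Pull the "status" value out of a _cluster/health JSON reply
health_status() {
    echo "$1" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4
}

# Canned reply for illustration; use the curl output in practice
SAMPLE='{"cluster_name":"SUSTech-Town-ES","status":"green","number_of_nodes":1}'
health_status "$SAMPLE"
```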
If the following error appears, the IK analyzer plugin is not installed correctly
java.lang.IllegalStateException: Could not load plugin descriptor for plugin directory [elasticsearch-analysis-ik-7.8.0]
Likely root cause: java.nio.file.NoSuchFileException: /usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0/plugins/elasticsearch-analysis-ik-7.8.0/plugin-descriptor.properties
Check whether plugin-descriptor.properties exists inside the elasticsearch-analysis-ik-7.8.0 folder (or inside the IK plugin archive)
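That check can be scripted; a sketch that probes for the descriptor file. PLUGIN_DIR defaults to a demo directory created on the spot (with the descriptor touched into place to simulate a correct install), and would be the elasticsearch-analysis-ik-7.8.0 path under plugins in a real check:

```shell
#!/bin/sh
# Demo directory; point PLUGIN_DIR at
# .../elasticsearch-7.8.0/plugins/elasticsearch-analysis-ik-7.8.0 in practice
PLUGIN_DIR="${PLUGIN_DIR:-/tmp/ik-plugin-demo}"
mkdir -p "$PLUGIN_DIR"
touch "$PLUGIN_DIR/plugin-descriptor.properties"   # simulate a correct install

if [ -f "$PLUGIN_DIR/plugin-descriptor.properties" ]; then
    echo "descriptor present"
else
    echo "descriptor missing - re-extract the IK archive into plugins/"
fi
```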
Kibana
# Enter the kibana install directory
cd /usr/local/environment/elk/kibana/kibana-7.8.0-linux-x86_64
# Switch to the elk user
su elk
# Start kibana
./bin/kibana
# or start it in the background
nohup ./bin/kibana &
If kibana fails to start with the error below
FATAL Error: EACCES: permission denied, open '/var/run/kibana.pid'
then create /var/run/kibana.pid by hand and make it writable by the elk user
touch /var/run/kibana.pid
chown elk:elk /var/run/kibana.pid
Start on boot
Use systemctl to manage redis, elasticsearch, elasticsearch-head, and kibana
# Enter the init script directory
cd /etc/init.d
# Create the script files
touch redis elasticsearch elasticsearch-head kibana
Redis
#!/bin/bash
#chkconfig: 2345 10 90
#description: Start and Stop redis
# Redis install directory
REDIS_PATH=/usr/local/environment/redis/redis-6.0.6
# Listening port
REDIS_PORT=6379
EXEC=$REDIS_PATH/src/redis-server
CLIEXEC=$REDIS_PATH/src/redis-cli
PIDFILE=/var/run/redis-6379.pid
# Redis config file path
CONF="$REDIS_PATH/conf/redis-6379.conf"
case "$1" in
start)
if [ -f $PIDFILE ];then
echo "$PIDFILE exists, process is already running or crashed"
else
echo "Starting Redis server..."
$EXEC $CONF
fi
;;
stop)
if [ ! -f $PIDFILE ];then
echo "$PIDFILE does not exist, process is not running"
else
PID=$(cat $PIDFILE)
echo "Stopping..."
$CLIEXEC -p $REDIS_PORT shutdown
while [ -x /proc/${PID} ]
do
echo "Waiting for Redis to shutdown..."
sleep 1
done
echo "Redis stopped"
fi
;;
restart)
"$0" stop
sleep 3
"$0" start
;;
*)
echo "Please use start or stop or restart as first argument"
;;
esac
# Make the script executable
chmod +x /etc/init.d/redis
Elasticsearch
#!/bin/sh
#chkconfig: 2345 80 05
#description: elasticsearch
# User that runs elasticsearch
ES_USER=elk
# elasticsearch install directory
ES_PATH=/usr/local/environment/elk/elasticsearch/elasticsearch-7.8.0
case "$1" in
start)
su $ES_USER << !
$ES_PATH/bin/elasticsearch -d;
!
echo "elasticsearch startup"
;;
stop)
es_pid=`ps aux|grep elasticsearch | grep -v 'grep elasticsearch' | awk '{print $2}'`
kill -9 $es_pid
echo "elasticsearch stopped"
;;
restart)
es_pid=`ps aux|grep elasticsearch | grep -v 'grep elasticsearch' | awk '{print $2}'`
kill -9 $es_pid
echo "elasticsearch stopped"
sleep 3
su $ES_USER << !
$ES_PATH/bin/elasticsearch -d;
!
echo "elasticsearch restarted"
;;
*)
echo "Usage: $0 {start|stop|restart|force-reload|status}"
;;
esac
# Make the script executable
chmod +x /etc/init.d/elasticsearch
Elasticsearch-head
#!/bin/sh
#chkconfig: 2345 80 05
#description: elasticsearch-head
# elasticsearch-head listening port
ES_HEAD_HOST=9100
# elasticsearch-head install directory
ES_HEAD_PATH=/usr/local/environment/elk/elasticsearch/elasticsearch-head-5.0.0
# User that runs elasticsearch-head
ES_HEAD_USER=elk
# Directory containing the node binary (find it with: dirname "$(which node)")
NODE_BIN=/usr/bin
export PATH=$PATH:$NODE_BIN
case "$1" in
start)
su $ES_HEAD_USER<<!
cd $ES_HEAD_PATH;
nohup npm run start >$ES_HEAD_PATH/nohup.out 2>&1 &
!
echo "elasticsearch-head startup"
;;
stop)
es_head_pid=`lsof -i :9100 | awk '{print $2}' | sed -n '2p'`
kill -9 $es_head_pid
echo "elasticsearch-head stopped"
;;
restart)
es_head_pid=`lsof -i :9100 | awk '{print $2}' | sed -n '2p'`
kill -9 $es_head_pid
echo "elasticsearch-head stopped"
sleep 3
su $ES_HEAD_USER<<!
cd $ES_HEAD_PATH;
nohup npm run start >$ES_HEAD_PATH/nohup.out 2>&1 &
!
echo "elasticsearch-head restarted"
;;
*)
echo "Usage: $0 {start|stop|restart|force-reload|status}"
esac
# Make the script executable
chmod +x /etc/init.d/elasticsearch-head
Kibana
#!/bin/sh
#
# /etc/init.d/kibana -- startup script for kibana
### BEGIN INIT INFO
# Provides: kibana
# Required-Start: $network $remote_fs $named
# Required-Stop: $network $remote_fs $named
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Starts kibana
# Description: Starts kibana using start-stop-daemon
### END INIT INFO
#chkconfig: 2345 80 05
#description: kibana
# User that runs kibana
KIBANA_USER=elk
# kibana install directory
KIBANA_PATH=/usr/local/environment/elk/kibana/kibana-7.8.0-linux-x86_64
case "$1" in
start)
su $KIBANA_USER << !
nohup $KIBANA_PATH/bin/kibana &
!
echo "kibana startup"
;;
stop)
kibana_pid=`ps aux|grep kibana | grep -v 'grep kibana' | awk '{print $2}'`
kill -9 $kibana_pid
echo "kibana stopped"
;;
restart)
kibana_pid=`ps aux|grep kibana | grep -v 'grep kibana' | awk '{print $2}'`
kill -9 $kibana_pid
echo "kibana stopped"
su $KIBANA_USER << !
nohup $KIBANA_PATH/bin/kibana &
!
echo "kibana restarted"
;;
*)
echo "Usage: $0 {start|stop|restart|force-reload|status}"
;;
esac
# Make the script executable
chmod +x /etc/init.d/kibana
After a systemctl daemon-reload, the redis service (and likewise the other three) can now be started, stopped, and inspected with
systemctl start redis
systemctl stop redis
systemctl status redis
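On Ubuntu 20.04 (systemd), a native unit file is an alternative to the init.d script for redis. A minimal sketch assuming the paths used above; save it as e.g. /etc/systemd/system/redis.service, then run systemctl daemon-reload and systemctl enable redis:

```ini
[Unit]
Description=Redis data store
After=network.target

[Service]
# daemonize yes in redis-6379.conf means the process forks, hence Type=forking
Type=forking
PIDFile=/var/run/redis-6379.pid
ExecStart=/usr/local/bin/redis-server /usr/local/environment/redis/redis/conf/redis-6379.conf
ExecStop=/usr/local/bin/redis-cli -p 6379 shutdown

[Install]
WantedBy=multi-user.target
```

With this in place, systemctl enable redis replaces the rc.local entry for redis below.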
Enable start on boot
Open the rc-local.service file under /lib/systemd/system and append the following
[Install]
WantedBy=multi-user.target
Alias=rc-local.service
Create an rc.local file under /etc and make it executable
sudo touch /etc/rc.local
sudo chmod +x /etc/rc.local
Put the following into rc.local
#!/bin/sh
# Disable transparent huge pages (recommended by redis)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# Start the services
systemctl start redis
systemctl start elasticsearch
systemctl start elasticsearch-head
systemctl start kibana
exit 0
Reboot to test
reboot
Miscellaneous
Removing an added repository on Ubuntu
Some software requires adding its repository before installation, but afterwards every apt-get update rescans that repository; at best this wastes time, and some stale repositories produce errors. Unused repositories should therefore be removed.
Methods:
1. Via the command line (not covered here).
2. Without the command line: find and delete the corresponding list file in the /etc/apt/sources.list.d/ folder.
PostgreSQL operations
# Check PostgreSQL status
service postgresql status
# Restart PostgreSQL
service postgresql restart
# Log in from the client (substitute your own values)
psql -U {USERNAME} -d {DATABASE} -h {HOST} -p {PORT}
# Operations after logging in
# Create a new database user
create user zhangsan with password '123456';
# Create a database owned by the new user
create database testdb owner zhangsan;
# Grant the new user all privileges on the new database
grant all privileges on database testdb to zhangsan;
# Create a superuser
create user checker with superuser;
\password checker
Linux operations
# List background redis processes
ps -ef | grep redis
# List background elasticsearch processes
ps -ef | grep elasticsearch
# Test the nginx configuration (the output also shows where the config file lives)
nginx -t