【Linux】A General Deployment Plan for a New CentOS 7.9 Machine

Checking Machine and System Information


1. Check system information:

Linux release: cat /etc/redhat-release
Kernel version: cat /proc/version

2. Check memory, disk, and CPU information

# Disk
df -h

# Memory
free -h

# NIC info (IP, MAC); requires yum install -y net-tools first
ifconfig

# CPU info
cat /proc/cpuinfo
cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c # CPU model
cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l # number of physical CPUs
cat /proc/cpuinfo | grep "processor" | wc -l # number of logical CPUs
cat /proc/cpuinfo | grep "cores" | uniq # cores per CPU

Basic Installation and Configuration

3. Install wget, gcc, lsof, net-tools, and vim

yum -y install wget gcc lsof net-tools vim

4. Switch yum to a domestic mirror

yum repolist

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum clean all && yum makecache && yum repolist

5. Install Anaconda3 (Python environment)

mkdir -p /usr/software && cd /usr/software

wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2021.05-Linux-x86_64.sh


# Run the installer script with bash
bash Anaconda3-2021.05-Linux-x86_64.sh

echo 'export PATH="/root/anaconda3/bin:$PATH"' >> /etc/profile
source  /etc/profile

6. Switch pip to a domestic mirror

mkdir ~/.pip
vim ~/.pip/pip.conf

# Edit
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
# Save

7. Time and Timezone Settings

(1) Verify with commands

# Shows the local time
date
# Fri Dec 18 01:20:50 CST 2020

timedatectl

# Output:
**************************************************
      Local time: Mon 2021-02-01 14:28:52 CST
  Universal time: Mon 2021-02-01 06:28:52 UTC
        RTC time: Mon 2021-02-01 06:28:52
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: n/a
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a
**************************************************

A normal time and timezone display:

Local time is the local (Beijing) time, shown as CST.

Universal time and RTC time are 8 hours behind Beijing time.

Time zone is Asia/Shanghai (+8).
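A quick sanity check from the shell (a minimal sketch; the expected values assume the Asia/Shanghai timezone):

date +%z # should print +0800
timedatectl | grep 'Time zone' # should show Asia/Shanghai (CST, +0800)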

(2) Verify with Python

import time
import datetime
time.time()              # Unix timestamp, timezone-independent
datetime.datetime.now()  # should match the local (Beijing) time

(3) If only the timezone is wrong, set the timezone

timedatectl  set-timezone Asia/Shanghai

(4) Enable time synchronization with chrony

yum install -y chrony && systemctl enable --now chronyd
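To confirm chrony is actually synchronizing (chronyc ships with the chrony package):

chronyc sources -v # list the time sources and their state
chronyc tracking # offset relative to the reference clock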

(5) Alternatively, time synchronization with ntp

yum -y install ntp
sudo systemctl start ntpd
sudo systemctl enable ntpd

8. Install tmux

yum install http://galaxy4.net/repo/galaxy4-release-7-current.noarch.rpm
yum install tmux
tmux -V
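A few everyday tmux commands for reference:

tmux new -s deploy # create a named session
# press Ctrl-b then d to detach
tmux ls # list sessions
tmux attach -t deploy # re-attach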

Preparation for Installing Docker and K8s

hostnamectl set-hostname k8s-173

echo "192.168.10.203 k8s-173" >> /etc/hosts

systemctl status firewalld.service 
systemctl stop firewalld.service
systemctl disable firewalld.service

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
free -h

sestatus 
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sestatus 

reboot
# A reboot is required for the SELinux change to take effect

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Open ports 6443, 2379-2380, 10250, 10251, 10252, 30000-32767
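firewalld was disabled earlier, so normally nothing needs opening. If you keep firewalld running instead, the ports could be opened like this (a sketch, not part of the original steps):

firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload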

# Check that the NIC configuration is correct; otherwise edit it and restart the network service to apply
cat /etc/sysconfig/network-scripts/ifcfg-enp0s3

Installing Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y update
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce docker-ce-cli containerd.io
docker version
mkdir -p /etc/docker && cd /etc/docker

vim daemon.json

{
    "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"],
    "insecure-registries": ["120.xxx.xx.189:5000"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "10m",
        "max-file": "1"
    }
}

# Save

vim /lib/systemd/system/docker.service

# Modify line 14 as follows
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit core=0:0

systemctl daemon-reload
systemctl restart docker
systemctl status docker
systemctl enable docker

docker info
systemctl status docker -l

# If startup fails, check:
systemctl status docker.service
journalctl -xe

Installing K8s

Related: ceph, rancher, harbor

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubectl kubelet kubeadm

kubelet --version
kubeadm version
kubectl version --client

systemctl enable kubelet && systemctl start kubelet

# If necessary, change the cgroup driver used by kubelet:
# edit /etc/sysconfig/kubelet and set
# KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
# Since daemon.json already sets Docker's cgroup driver to systemd, matching the k8s default, this step is not needed here

Master

kubeadm config images list
# or, for a specific version
kubeadm config images list --kubernetes-version=v1.20.2


kubeadm init --kubernetes-version=1.20.2  \
--apiserver-advertise-address=192.168.10.147   \
--image-repository registry.aliyuncs.com/google_containers  \
--pod-network-cidr=172.16.0.0/16

# Initialization output
*************************************************************
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.147:6443 --token vl3f6v.bvjochg4yudditaw \
    --discovery-token-ca-cert-hash sha256:xxx 
*************************************************************

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

kubectl get node
# or
kubectl get node -o wide
mkdir -p /usr/ezrealer_conf && cd /usr/ezrealer_conf
wget https://docs.projectcalico.org/manifests/calico.yaml
vim calico.yaml

# Search backwards from the end of the file for the pod CIDR, then modify it
?172

- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
# Save
kubectl apply -f calico.yaml

# Node STATUS changes to Ready once the Calico pods are up
kubectl get nodes

Node

Run the join command on the other machines to join the k8s cluster.

kubeadm token list # list existing tokens
kubeadm token create # tokens expire after 24h; regenerate if needed

# Compute the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'
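Alternatively, kubeadm can print a complete join command with a fresh token, avoiding the manual token + hash assembly:

kubeadm token create --print-join-command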


kubeadm join 192.168.10.147:6443 --token vl3f6v.bvjochg4yudditaw \
    --discovery-token-ca-cert-hash sha256:b6336c5016bfad9e5b4536fe3dce5247d9e804148b626795cdbe00517e3329b6
kubectl get node -o wide
kubectl get pods --all-namespaces

Registry

https://www.cnblogs.com/edisonchou/p/docker_registry_repository_setup_introduction.html

docker pull registry
docker images
docker run -d -p 5000:5000 --name images_registry \
-v /usr/docker/ImagesRegistry:/var/lib/registry \
--restart=always registry:latest

docker ps
docker inspect images_registry | grep IPAddress
curl http://172.17.0.2:5000/v2/_catalog


# Other machines need "insecure-registries": ["your-server-ip:5000"] in their daemon.json
docker tag python:3.8 172.17.0.2:5000/python:3.8
docker push 172.17.0.2:5000/python:3.8
curl http://172.17.0.2:5000/v2/_catalog

docker pull 172.17.0.2:5000/python:3.8
docker images

curl http://172.17.0.2:5000/v2/_catalog
curl  http://172.17.0.2:5000/v2/python/tags/list
docker exec -it images_registry sh -c 'cat /etc/docker/registry/config.yml'

Harbor https://ivanzz1001.github.io/records/post/docker/2018/04/20/docker-harbor-architecture

Installing MySQL 8.0

yum localinstall https://repo.mysql.com//mysql80-community-release-el7-1.noarch.rpm

# yum module disable mysql   # only relevant on EL8 (dnf modules); CentOS 7 yum has no module subcommand

yum -y update

yum -y install mysql-community-server

vim /etc/my.cnf

# Set by Ezrealer (Ezrealer@qq.com)
port=18759

character-set-server = utf8mb4

collation-server = utf8mb4_general_ci

bind-address = 0.0.0.0

# Resolve client logins by IP address only, never by hostname
skip_name_resolve = 1


default-time_zone='+8:00'

innodb_buffer_pool_size = 8G
thread_cache_size = 64
innodb_log_file_size = 4G

max_connections = 2000
interactive_timeout = 1800
wait_timeout = 1800

# expire_logs_days = 7 is deprecated; use the seconds-based setting instead
binlog_expire_logs_seconds = 604800 # 7 days

# Save

systemctl start mysqld.service  

systemctl status mysqld.service

grep "password" /var/log/mysqld.log

mysql -uroot -p # enter the temporary password from the log

ALTER USER 'root'@'localhost' IDENTIFIED BY 'mypasswd';

use mysql;

update user set host='%' where user='root';

FLUSH PRIVILEGES;

ALTER USER 'root'@'%' IDENTIFIED BY 'mypasswd' PASSWORD EXPIRE NEVER;

ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'mypasswd';

FLUSH PRIVILEGES;

exit;

# View the service log
cat /var/log/mysqld.log
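To verify the remote-root setup, try logging in from another machine (the IP below is a placeholder for this MySQL server; 18759 is the port set in my.cnf):

mysql -uroot -p -h 192.168.x.x -P 18759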

Installing Java

cd /usr/software
# wget https://repo.huaweicloud.com/java/jdk/13+33/jdk-13_linux-x64_bin.tar.gz
wget https://repo.huaweicloud.com/java/jdk/8u151-b12/jdk-8u151-linux-x64.tar.gz
tar -zxvf jdk-8u151-linux-x64.tar.gz -C /usr/local/
cd /usr/local
mv jdk1.8.0_151 jdk1.8
vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.8
CLASSPATH=$JAVA_HOME/lib/
PATH=$PATH:$JAVA_HOME/bin
export PATH JAVA_HOME CLASSPATH
# Save
source /etc/profile
java -version

Installing Kafka

Installing and Configuring Zookeeper

mkdir /usr/software
cd /usr/software
wget https://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz
tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz -C /usr/local/
cd /usr/local/
mv apache-zookeeper-3.6.3-bin/ zookeeper-3.6.3
cd /usr/local/zookeeper-3.6.3/conf/
cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

clientPort=2181
dataDir=/usr/local/zookeeper-3.6.3/data
dataLogDir=/usr/local/zookeeper-3.6.3/logs

server.1=192.168.10.48:2888:3888
server.2=192.168.10.49:2888:3888
server.3=192.168.10.50:2888:3888
# Save

mkdir /usr/local/zookeeper-3.6.3/data
mkdir /usr/local/zookeeper-3.6.3/logs


scp -r /usr/local/zookeeper-3.6.3/ root@ip:/usr/local/zookeeper-3.6.3/

echo "1" > /usr/local/zookeeper-3.6.3/data/myid
echo "2" > /usr/local/zookeeper-3.6.3/data/myid
echo "3" > /usr/local/zookeeper-3.6.3/data/myid

cd /usr/local/zookeeper-3.6.3/bin
./zkServer.sh start

tail -f zookeeper.out
./zkServer.sh status
./zkServer.sh restart

./zkCli.sh -server localhost:2181

cd /usr/local/zookeeper-3.6.3/bin
ls <path> # list children of a znode
get <path> # read a znode's data
stat <path> # show a znode's metadata
delete /nodename # delete a znode

./zkServer.sh stop
./zkServer.sh restart

Installing and Configuring Kafka

1. Install

cd /usr/software
wget http://mirrors.hust.edu.cn/apache/kafka/2.7.0/kafka_2.13-2.7.0.tgz
tar -zxvf kafka_2.13-2.7.0.tgz -C /usr/local/

2. Configure

JVM settings

cd /usr/local/kafka_2.13-2.7.0/bin
vim kafka-server-start.sh

# Generic jvm settings you want to add
if [ -z "$KAFKA_OPTS" ]; then
  KAFKA_OPTS="-Xmx8g -Xms8g -XX:MetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=85"
fi

server.properties

cd /usr/local/kafka_2.13-2.7.0/config

vim server.properties

broker.id=0
listeners=PLAINTEXT://0.0.0.0:9096
advertised.listeners=PLAINTEXT://120.xxx.xx.xxx:9096
num.network.threads=8
num.io.threads=16

log.dirs=/usr/local/kafka_2.13-2.7.0/logs
num.partitions=6
log.retention.hours=168
zookeeper.connect=192.168.10.48:2181,192.168.10.49:2181,192.168.10.50:2181

socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
delete.topic.enable=true
message.max.bytes=7340032
auto.create.topics.enable=false
# Save
mkdir /usr/local/kafka_2.13-2.7.0/logs

scp -r /usr/local/kafka_2.13-2.7.0/ root@ip:/usr/local/kafka_2.13-2.7.0/

3. Start

./kafka-server-start.sh ../config/server.properties # start in the foreground
./kafka-server-start.sh -daemon ../config/server.properties # start as a daemon
./kafka-server-stop.sh # stop
ps -ef|grep Kafka|awk '{print $2}'|xargs kill -9 # force kill

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ezrealer
./kafka-topics.sh --list --zookeeper localhost:2181
./kafka-console-producer.sh --broker-list localhost:9096 --topic ezrealer
./kafka-console-consumer.sh --bootstrap-server localhost:9096 --topic ezrealer --from-beginning

4. Producer and consumer configuration

producer (see the console-tool example after the consumer list below)

buffer.memory 67108864
batch.size 10485760
max.request.size 6291456
linger.ms 100
compression.type
acks 1
retries 10
retry.backoff.ms 500 (retry interval)

consumer

session.timeout.ms = 10s
heartbeat.interval.ms = 3s
max.poll.records: maximum records returned by a single poll() call
max.poll.interval.ms: defaults to 5 minutes; if the records returned by poll() are not processed within that window, the consumer automatically leaves the group
fetch.message.max.bytes 8388608
max.partition.fetch.bytes 8388608
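These parameters normally live in the client application's own configuration. For quick experiments, the console tools can also accept them via --producer-property / --consumer-property; a sketch using the broker port configured above:

./kafka-console-producer.sh --broker-list localhost:9096 --topic ezrealer --producer-property acks=1 --producer-property linger.ms=100
./kafka-console-consumer.sh --bootstrap-server localhost:9096 --topic ezrealer --consumer-property max.poll.records=500 --from-beginning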

Common Kafka Commands

1. Create, view, and modify topics

./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic test
./kafka-topics.sh --zookeeper 127.0.0.1:2181 --list
./kafka-topics.sh --describe --zookeeper zk:2181 --topic topicName
./kafka-topics.sh --zookeeper zk:2181 --topic topicName --alter --config retention.ms=2678400000

2. Check consumer group consumption

./kafka-consumer-groups.sh --bootstrap-server localhost:9096 --describe --group my_consumer_group

3. Check a topic's message count

./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9096 --topic tbl_stream_16432 --time -1
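GetOffsetShell prints one topic:partition:offset line per partition; summing the third field gives the topic's total message count (approximate if retention has already deleted data):

./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9096 --topic tbl_stream_16432 --time -1 | awk -F: '{sum += $3} END {print sum}'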

4. Delete a topic

./kafka-topics.sh --zookeeper localhost:2181 --delete --topic test

Resetting Kafka

1. Stop the Kafka service and delete the topic data

cd /usr/local/kafka_2.13-2.7.0/bin
./kafka-server-stop.sh
# or
ps -ef|grep Kafka|awk '{print $2}'|xargs kill -9

# Delete the data directories of all Kafka topics
# (the log.dirs setting in server.properties; /tmp/kafka-logs by default)
cd /usr/local/kafka_2.13-2.7.0/logs
rm -r *

2. Delete the Kafka-related data in Zookeeper, then restart Zookeeper

cd /usr/local/zookeeper-3.6.3/bin/
./zkCli.sh
ls /
deleteall /brokers
deleteall /consumers

./zkServer.sh restart

3. Start Kafka

# Start Kafka in the foreground
./kafka-server-start.sh ../config/server.properties

# Start Kafka in the background
nohup ./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &

Kafka Visualization (CMAK)

cd /usr/software
wget https://github.com/yahoo/CMAK/archive/3.0.0.5.tar.gz
tar -zxvf 3.0.0.5.tar.gz -C /usr/local/
cd /usr/local/CMAK-3.0.0.5
curl https://bintray.com/sbt/rpm/rpm | sudo tee /etc/yum.repos.d/bintray-sbt-rpm.repo
yum install sbt
sbt clean dist

# If the build is slow, configure mirror repositories:
cd ~/.sbt/
vim repositories

[repositories]
#local
public: http://maven.aliyun.com/nexus/content/groups/public/
typesafe:http://dl.bintray.com/typesafe/ivy-releases/ , [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
ivy-sbt-plugin:http://dl.bintray.com/sbt/sbt-plugin-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
sonatype-oss-releases

sonatype-oss-snapshots

# Save

cd /usr/local/CMAK-3.0.0.5/conf
vim application.conf 
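The key setting in application.conf is the Zookeeper ensemble CMAK registers against; a minimal sketch assuming the Zookeeper cluster configured earlier and the CMAK 3.x config key:

cmak.zkhosts="192.168.10.48:2181,192.168.10.49:2181,192.168.10.50:2181"

Once sbt clean dist finishes, a zip is produced under target/universal/ (the exact file name may vary by version); unzip it and start the service:

unzip -d /usr/local/ target/universal/cmak-3.0.0.5.zip
cd /usr/local/cmak-3.0.0.5
bin/cmak -Dconfig.file=conf/application.conf -Dhttp.port=9000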

Installing Redis


Standalone Redis Installation

1. Install

cd /usr/software
wget https://download.redis.io/releases/redis-6.2.1.tar.gz
tar -zxvf redis-6.2.1.tar.gz -C /usr/local/
cd /usr/local/redis-6.2.1
make

# If the gcc version is below 5.3 this step errors out; CentOS 7 ships gcc 4.8 by default
gcc -v
yum -y install centos-release-scl
yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
source /opt/rh/devtoolset-9/enable
echo "source /opt/rh/devtoolset-9/enable" >>/etc/profile
gcc -v
make distclean

make
# make install PREFIX=/usr/local/redis-6.2.1 # install the binaries into a specified directory
make test

2. Configure and start

vim redis.conf

bind 127.0.0.1 <intranet IP> <public IP>
protected-mode no
port 17197
tcp-backlog 2048
timeout 900
daemonize yes
pidfile "redis_17197.pid"
logfile "redis_17197.log"
save 86400 1
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
requirepass mypasswd
maxmemory 64gb
maxclients 100000

io-threads 4

io-threads-do-reads no # the Redis author considers multithreaded reads to yield only modest gains
# Save

# Start
cd src
./redis-server ../redis.conf

# Shut down
./redis-cli -h 127.0.0.1 -p 17197 -a mypasswd shutdown

3. Run as a systemd service

cd /usr/local/redis-6.2.1/utils
vim systemd-redis_server.service
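The template shipped in utils/ needs its paths filled in. A minimal sketch of the unit file, assuming the paths, port, and password used in this guide (daemonize yes in redis.conf pairs with Type=forking):

[Unit]
Description=Redis server
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/redis-6.2.1/src/redis-server /usr/local/redis-6.2.1/redis.conf
ExecStop=/usr/local/redis-6.2.1/src/redis-cli -p 17197 -a mypasswd shutdown
Restart=always

[Install]
WantedBy=multi-user.target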


cp systemd-redis_server.service /usr/lib/systemd/system/redis.service
chmod 754 /usr/lib/systemd/system/redis.service
systemctl daemon-reload && systemctl enable redis.service && systemctl start redis.service && systemctl status redis.service

Installing a Redis Cluster (3 masters, 3 replicas)

1. Run the installation steps above on each of the three machines (172, 174, 175)

2. Configure (using 172 as an example)

cd /usr/local/redis-6.2.1

mkdir redis_17197
mkdir redis_17198

vim redis.conf

# Cluster-related settings
port 17197
dir /usr/local/redis-6.2.1/redis_17197 # logs, dump backups, etc. all go here
logfile redis_17197_log.txt
dbfilename dump_17197.rdb
pidfile /usr/local/redis-6.2.1/redis_17197/17197.pid
cluster-enabled yes
cluster-config-file nodes-17197.conf
cluster-node-timeout 15000
masterauth mypasswd

bind 0.0.0.0
protected-mode no
tcp-backlog 2048
timeout 1800
daemonize yes

save 86400 1
stop-writes-on-bgsave-error no
rdbcompression no
rdbchecksum no
requirepass mypasswd
maxmemory 64gb

io-threads 4

io-threads-do-reads no
# Save

mv redis.conf ./redis_17197/

3. Replace the port in the config file to generate the second instance's config

cd /usr/local/redis-6.2.1/redis_17197
sed 's/17197/17198/g' ./redis.conf > ../redis_17198/redis.conf

4. Repeat the same configuration on the other two machines

5. Create the cluster

The tool automatically makes the port-17197 node on each machine a master and assigns each port-17198 node as a replica of a master on a different machine, which is exactly the layout we want.

./redis-cli --cluster create 192.168.3.172:17197 192.168.3.172:17198 192.168.3.174:17197 192.168.3.174:17198 192.168.3.175:17197 192.168.3.175:17198 --cluster-replicas 1 -a mypasswd

# It prints the proposed master/replica assignment and the distribution of the 16384 slots; type yes to confirm
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.3.174:17198 to 192.168.3.172:17197
Adding replica 192.168.3.175:17198 to 192.168.3.174:17197
Adding replica 192.168.3.172:17198 to 192.168.3.175:17197

yes

[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6. Log in and verify the cluster

./redis-cli -c -h  192.168.3.172  -p 17197 
auth mypasswd
cluster nodes
cluster info
info
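redis-cli can also audit the whole cluster (slot coverage and node agreement) from any one node:

./redis-cli --cluster check 192.168.3.172:17197 -a mypasswd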

7. Python demo for the Redis cluster

from rediscluster import RedisCluster

startup_nodes = [{'host': '192.168.3.172', 'port': 17197}, {'host': '192.168.3.172', 'port': 17198},
                 {'host': '192.168.3.174', 'port': 17197}, {'host': '192.168.3.174', 'port': 17198},
                 {'host': '192.168.3.175', 'port': 17197}, {'host': '192.168.3.175', 'port': 17198}]

rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, password='mypasswd')

rc.set("foo", "bar")
print(rc.get("foo"))

8. Key design for Redis Cluster

Redis Cluster provides a hash tag mechanism that lets us map a group of keys to the same slot.

For example: the key user1000.following stores the users that user1000 follows, and user1000.followers stores user1000's followers.

The two keys share the common part user1000. We can tell Redis to compute the slot mapping on that common part only, so both keys end up in the same slot.

Design: {user1000}.following and {user1000}.followers

That is, wrap the common part in { }. When computing the slot, if curly braces are found, only the part inside them is hashed.

Multi-key operations are the biggest day-to-day limitation of Redis Cluster: be careful, because a multi-key command whose keys are not in the same slot fails with an error.
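This is easy to verify with CLUSTER KEYSLOT, which prints the slot a key hashes to:

./redis-cli -h 192.168.3.172 -p 17197 -a mypasswd CLUSTER KEYSLOT "{user1000}.following"
./redis-cli -h 192.168.3.172 -p 17197 -a mypasswd CLUSTER KEYSLOT "{user1000}.followers"
# both print the same slot number; without the {} tag the two keys would usually land in different slots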

Installing Git

Git download site: https://mirrors.edge.kernel.org/pub/software/scm/git/

whereis git

sudo yum install -y wget
sudo yum install -y gcc-c++
sudo yum install -y zlib-devel perl-ExtUtils-MakeMaker
sudo yum install curl-devel expat-devel gettext-devel openssl-devel zlib-devel perl-devel

yum remove git

cd /usr/software
wget https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.30.1.tar.gz

tar -zxvf git-2.30.1.tar.gz

cd git-2.30.1

yum install -y autoconf automake libtool

make configure

sudo make prefix=/usr/local/git install

echo 'export PATH=/usr/local/git/bin:$PATH' >> /etc/profile
source  /etc/profile

git --version
git config --global user.name "Ezrealer"

git config --global user.email Ezrealer@qq.com

git config --list

ssh-keygen -t rsa -C "Ezrealer@qq.com"

Press Enter three times to accept the defaults

# Output
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.

# View the public key
cat /root/.ssh/id_rsa.pub

Add the public key to your GitHub or Gitee account; then you can pull Gitee projects on the server.

mkdir -p /usr/EzrealerGitRepo
cd /usr/EzrealerGitRepo
git clone git@gitee.com:Ezrealer/MyProgram.git
# Update
git pull origin master