Deploying a Single-Node K8S Environment on CentOS 7.9

This guide installs the K8S environment from the CentOS extras repository. The advantage is that it is quick and convenient; the drawback is that the version is old: the installed release is 1.5.2.

1. Preparation

1.1 Disable SELinux
[root@localhost ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
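
The file above already shows SELINUX=disabled. If SELinux is still enforcing on your system, the usual way to turn it off is sketched below (plain setenforce/sed usage, not part of the original steps):

# Turn SELinux off for the running system (no reboot needed)
setenforce 0
# Persist the setting across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Verify the current mode
getenforce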

1.2 Disable the firewall
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
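
The output above shows firewalld already inactive and disabled. If it is still running on your machine, the usual commands are (a sketch, plain systemctl usage):

# Stop firewalld immediately
systemctl stop firewalld
# Keep it from starting on boot
systemctl disable firewalld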

2. Install kubernetes and etcd

2.1 Install with yum

yum install etcd kubernetes -y

If yum reports a conflict with existing Docker components, remove the installed Docker packages first:

...
Error: docker-ce-cli conflicts with 2:docker-1.13.1-210.git7d71120.el7.centos.x86_64
Error: docker-ce conflicts with 2:docker-1.13.1-210.git7d71120.el7.centos.x86_64
 You could try using --skip-broken to work around the problem
...
# Remove the Docker packages already present in the environment
yum remove docker*
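
After resolving the conflict, you can check which versions the configured repositories actually provide before re-running the install (a sketch; the output depends on your mirror):

# Show the kubernetes and etcd versions available from the enabled repos
yum list kubernetes etcd
# Show full package details for kubernetes (version, source repo, description)
yum info kubernetes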
2.2 Modify the kube-apiserver configuration file
[root@localhost ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    # changed from 127.0.0.1 to 0.0.0.0

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"   # modified admission-control policy

# Add your own!
KUBE_API_ARGS=""
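
If you are preparing several machines, the same edit can be scripted; a minimal sketch using sed, assuming the stock file contents shown above:

# Switch the insecure bind address from 127.0.0.1 to 0.0.0.0
sed -i 's/--insecure-bind-address=127.0.0.1/--insecure-bind-address=0.0.0.0/' /etc/kubernetes/apiserver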
2.3 Start and enable the services

Start the services:

systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy

Enable the services to start on boot:

systemctl enable etcd
systemctl enable docker
systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler
systemctl enable kubelet
systemctl enable kube-proxy
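
The same start/enable sequence can be written as a short loop, which is convenient when everything needs a restart after a configuration change (a sketch, using the same service names as above):

# Enable each component on boot and (re)start it now
for svc in etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    systemctl enable "$svc"
    systemctl restart "$svc"
done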

Check the environment:

# Check the k8s version; this deployment method installs an old release
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
# List the nodes
[root@localhost ~]# kubectl get nodes
NAME        STATUS    AGE
127.0.0.1   Ready     3h
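
Control-plane health can also be checked through the API server; both commands below exist in this kubectl generation (a sketch):

# Health of scheduler, controller-manager and etcd as reported by the API server
kubectl get componentstatuses
# Address of the API server kubectl is talking to
kubectl cluster-info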

3. Deploy a sample application

Create two YAML files, one for the nginx Deployment and one for the Service.

[root@localhost ~]# cat nginx-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    web: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
# Create the deployment (pods)
[root@localhost ~]# kubectl create -f nginx-deploy.yaml
[root@localhost ~]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: NodePort
  ports:
   - port: 80
     nodePort: 30080
  selector:
    app: nginx
# Create the service
[root@localhost ~]# kubectl create -f nginx-svc.yaml

# Check the created k8s resources
[root@localhost ~]# kubectl get pod
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-3856710913-4wcvd   1/1       Running   1          1h
nginx-deployment-3856710913-nmf32   1/1       Running   1          1h
nginx-deployment-3856710913-v0mcz   1/1       Running   1          1h
[root@localhost ~]# kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1       <none>        443/TCP        3h
nginx-demo   10.254.146.200   <nodes>       80:30080/TCP   1h
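
Before testing from outside, it is worth confirming that the Service selector actually matched the nginx pods; if it did not, the endpoint list would be empty and the NodePort would not answer. A quick check (sketch):

# Pod IPs currently backing the nginx-demo service
kubectl get endpoints nginx-demo
# Selector, NodePort and endpoints in one view
kubectl describe svc nginx-demo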

Access test: reach nginx with a browser or with curl:

[root@localhost ~]# curl http://192.168.226.133:30080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The pods are reachable, so the test succeeds.
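
As a further exercise, the Deployment can be scaled and the new pods observed (a sketch; resource names as created above):

# Scale the nginx deployment from 3 to 5 replicas
kubectl scale deployment nginx-deployment --replicas=5
# Watch the additional pods get scheduled
kubectl get pod -o wide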

4. Issues encountered during deployment

4.1 Docker service fails to start with "Error starting daemon: layer does not exist"

The docker service fails to start, and the service log reports "Error starting daemon: layer does not exist".

Solution:

Use the following script to safely clear /var/lib/docker:

#!/bin/sh
set -e

dir="$1"

if [ -z "$dir" ]; then
	{
		echo 'This script is for destroying old /var/lib/docker directories more safely than'
		echo '  "rm -rf", which can cause data loss or other serious issues.'
		echo
		echo "usage: $0 directory"
		echo "   ie: $0 /var/lib/docker"
	} >&2
	exit 1
fi

if [ "$(id -u)" != 0 ]; then
	echo >&2 "error: $0 must be run as root"
	exit 1
fi

if [ ! -d "$dir" ]; then
	echo >&2 "error: $dir is not a directory"
	exit 1
fi

dir="$(readlink -f "$dir")"

echo
echo "Nuking $dir ..."
echo '  (if this is wrong, press Ctrl+C NOW!)'
echo

( set -x; sleep 10 )
echo

dir_in_dir() {
	inner="$1"
	outer="$2"
	[ "${inner#$outer}" != "$inner" ]
}

# let's start by unmounting any submounts in $dir
#   (like -v /home:... for example - DON'T DELETE MY HOME DIRECTORY BRU!)
for mount in $(awk '{ print $5 }' /proc/self/mountinfo); do
	mount="$(readlink -f "$mount" || true)"
	if dir_in_dir "$mount" "$dir"; then
		( set -x; umount -f "$mount" )
	fi
done

# now, let's go destroy individual btrfs subvolumes, if any exist
if command -v btrfs > /dev/null 2>&1; then
	root="$(df "$dir" | awk 'NR>1 { print $NF }')"
	root="${root#/}" # if root is "/", we want it to become ""
	for subvol in $(btrfs subvolume list -o "$root/" 2>/dev/null | awk -F' path ' '{ print $2 }' | sort -r); do
		subvolDir="$root/$subvol"
		if dir_in_dir "$subvolDir" "$dir"; then
			( set -x; btrfs subvolume delete "$subvolDir" )
		fi
	done
fi

# finally, DESTROY ALL THINGS
( set -x; rm -rf "$dir" )

Save the script as docker-recovery.sh and run sh docker-recovery.sh /var/lib/docker to clean up.
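
After the directory has been cleared, docker must be restarted so it can recreate its storage layout, and kubelet should be restarted so it recreates the pod containers (a sketch):

# Let docker rebuild /var/lib/docker from scratch
systemctl restart docker
# Let kubelet recreate its pods
systemctl restart kubelet
# Confirm the daemon is healthy again
docker info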

4.2 Failed to pull the pod-infrastructure image

When a pod is created, kubelet pulls the pod infrastructure (pause) container image from Red Hat's official registry, which fails with errors like the following:

Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.

details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

Solution:

1. Run the following on every node, including the master

Check /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: it is a symlink, but its target under /etc/rhsm does not actually exist. Install the rhsm packages with yum:

[root@localhost ~]# ls -alh /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
lrwxrwxrwx. 1 root root 27 Jun 10 05:34 /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt -> /etc/rhsm/ca/redhat-uep.pem
[root@localhost ~]# yum install *rhsm* -y
2. Download and install the certificate
[root@localhost ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
[root@localhost ~]# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
3. Pull the image
[root@localhost ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:47db25d46e39f338142553f899cedf6b0ad9f04c6c387a94b6b0964b7d1b7678
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest
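
An alternative (or complement) is pointing kubelet at a pause image that your nodes can actually pull, via the --pod-infra-container-image flag. The stock /etc/kubernetes/kubelet shipped with this package normally carries a KUBELET_POD_INFRA_CONTAINER line for exactly this purpose; the registry address below is only a hypothetical local mirror and would need to exist in your environment:

# /etc/kubernetes/kubelet (excerpt) - hypothetical local mirror, adjust to your registry
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.226.1:5000/rhel7/pod-infrastructure:latest"

After editing the file, restart kubelet (systemctl restart kubelet) so the new image is used.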