Docker Cluster Management Tool -- Kubernetes Deployment & Usage Guide

This article walks through deploying and using a Kubernetes (k8s) cluster in a Docker environment. We first prepare the environment, then deploy the Master and Node nodes. On the Master we check the cluster state and create a Flannel overlay network to confirm the cluster is running normally. We then deploy an nginx Pod on the Master, use a replication controller to run two replicas, and configure Services for internal and external access. Finally, we verify the nginx service's correctness and high availability with curl.

1.  Environment preparation

IP address          Hostname           Role

172.18.41.205       k8s-master         Master, etcd

172.18.41.206       k8s-node02          Node02,registry

172.18.41.207       k8s-node01          Node01

[root@k8s-master ~]# cat /etc/redhat-release

CentOS Linux release 7.6.1810 (Core) 

 

Set the hostname on each of the three nodes

[root@k8s-master ~]# hostnamectl --static set-hostname  k8s-master

[root@k8s-node01 ~]# hostnamectl --static set-hostname  k8s-node01

[root@k8s-node02 ~]# hostnamectl --static set-hostname  k8s-node02

 

Add hosts entries on all three nodes

[root@k8s-master ~]# cat /etc/hosts

.........
172.18.41.205 k8s-master
172.18.41.207 k8s-node01
172.18.41.206 k8s-node02
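The same entries can be appended idempotently with a short script (a sketch; the `add_host` helper and the parameterized target file are assumptions -- on a real node the target is /etc/hosts):

```shell
#!/bin/sh
# Sketch: append a hosts entry only if the hostname is not already present.
# HOSTS_FILE is parameterized for testing; use /etc/hosts on real nodes.
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}

add_host() {
  ip=$1; name=$2
  grep -q " $name\$" "$HOSTS_FILE" 2>/dev/null || echo "$ip $name" >> "$HOSTS_FILE"
}

add_host 172.18.41.205 k8s-master
add_host 172.18.41.207 k8s-node01
add_host 172.18.41.206 k8s-node02
add_host 172.18.41.205 k8s-master   # duplicate call is a no-op

cat "$HOSTS_FILE"
```

Run it once per node; re-running is safe because entries that already exist are skipped.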

 

Disable the firewall and SELinux on all three machines

[root@k8s-master ~]# systemctl disable firewalld.service

[root@k8s-master ~]# systemctl stop firewalld.service

[root@k8s-master ~]# firewall-cmd --state

not running

 

[root@k8s-master ~]# setenforce 0

[root@k8s-master ~]# cat /etc/sysconfig/selinux

........

SELINUX=disabled

[root@k8s-master ~]# getenforce                

Disabled
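The configured mode can also be read non-interactively by parsing the config file, which is handy when auditing several nodes at once. A minimal sketch; the `selinux_mode` helper name and the sample file are illustrative stand-ins for /etc/sysconfig/selinux:

```shell
#!/bin/sh
# Hypothetical helper: extract the SELINUX= mode from a config file.
# The path is a parameter so the sketch can run against a sample file.
selinux_mode() {
  grep -E '^SELINUX=' "$1" | cut -d= -f2
}

# Sample file standing in for /etc/sysconfig/selinux
sample=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=disabled\n' > "$sample"

mode=$(selinux_mode "$sample")
echo "$mode"   # prints: disabled
```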

2. Deploy the Master node

1) Install Docker

[root@k8s-master ~]# yum install -y docker

[root@k8s-master ~]# docker --version

Docker version 1.13.1, build 07f3374/1.13.1

  

2) Install etcd

Kubernetes depends on etcd at runtime, so deploy etcd first. Here we install it via yum:

[root@k8s-master ~]# yum install -y etcd

  

The etcd installed via yum keeps its default configuration file at /etc/etcd/etcd.conf. Edit it:

[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak

[root@k8s-master ~]# >/etc/etcd/etcd.conf

[root@k8s-master ~]# vim /etc/etcd/etcd.conf

#[member]

# Node name

ETCD_NAME=k8s-master

# Data directory

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"                

#ETCD_WAL_DIR=""

#ETCD_SNAPSHOT_COUNT="10000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

# Client listen URLs

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"           

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

#ETCD_CORS=""

#

#[cluster]

#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"

#ETCD_INITIAL_CLUSTER_STATE="new"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

# Advertised client URLs

ETCD_ADVERTISE_CLIENT_URLS="http://172.18.41.205:2379,http://172.18.41.205:4001"            

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""
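The edits above can also be applied non-interactively by writing the four active settings in one shot (a sketch; `MASTER_IP` and the output path are parameters -- on the real master the path is /etc/etcd/etcd.conf, backed up first as shown above):

```shell
#!/bin/sh
# Sketch: generate the minimal etcd.conf with only the active settings.
MASTER_IP=172.18.41.205
ETCD_CONF=${ETCD_CONF:-$(mktemp)}   # /etc/etcd/etcd.conf on the real master

cat > "$ETCD_CONF" <<EOF
ETCD_NAME=k8s-master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://${MASTER_IP}:2379,http://${MASTER_IP}:4001"
EOF

grep -c '^ETCD_' "$ETCD_CONF"   # prints: 4
```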

  

Start etcd and verify it is running

[root@k8s-master ~]# systemctl start etcd

[root@k8s-master ~]# ps -ef | grep etcd

etcd     24710     1  0 16:46 ?        00:00:23 /usr/bin/etcd --name=k8s-master --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001
root     27073 10462  0 18:28 pts/0    00:00:00 grep --color=auto etcd    

  

[root@k8s-master ~]# lsof -i:2379         

COMMAND PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME

etcd    977 etcd    6u  IPv6 259685220      0t0  TCP *:2379 (LISTEN)

etcd    977 etcd   13u  IPv4 259683141      0t0  TCP localhost:54160->localhost:2379 (ESTABLISHED)

etcd    977 etcd   14u  IPv6 259683142      0t0  TCP localhost:2379->localhost:54160 (ESTABLISHED)

  

[root@k8s-master ~]# lsof -i:4001         

COMMAND PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME

etcd    977 etcd    7u  IPv6 259685221      0t0  TCP *:newoak (LISTEN)

etcd    977 etcd   11u  IPv4 259683140      0t0  TCP localhost:56102->localhost:newoak (ESTABLISHED)

etcd    977 etcd   15u  IPv6 259688733      0t0  TCP localhost:newoak->localhost:56102 (ESTABLISHED)

  

Test etcd

[root@k8s-master ~]# etcdctl set testdir/testkey0 10

10

[root@k8s-master ~]# etcdctl get testdir/testkey0

10

[root@k8s-master ~]# etcdctl -C http://172.18.41.205:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://172.18.41.205:2379
cluster is healthy

  

[root@k8s-master ~]# etcdctl -C http://172.18.41.205:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://172.18.41.205:2379
cluster is healthy
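Right after `systemctl start etcd` the daemon may need a moment before it answers, so provisioning scripts benefit from a retry wrapper around the health check (a sketch; the `wait_for` helper is an assumption, not part of etcdctl):

```shell
#!/bin/sh
# Sketch: retry a command until it succeeds or the attempt limit is hit.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Intended use on the master:
#   wait_for 10 etcdctl -C http://172.18.41.205:2379 cluster-health
wait_for 3 true && echo "etcd ready"   # prints: etcd ready
```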

  

3) Install Kubernetes

[root@k8s-master ~]# yum install -y kubernetes

  

Configure and start Kubernetes

The following components must run on the Kubernetes master: Kubernetes API Server, Kubernetes Controller Manager, Kubernetes Scheduler

  

[root@k8s-master ~]# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak

[root@k8s-master ~]# >/etc/kubernetes/apiserver

[root@k8s-master ~]# vim /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

     

# The address on the local server to listen to.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

     

# The port on the local server to listen on.

KUBE_API_PORT="--port=8080"

     

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

     

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://172.18.41.205:2379"

     

# Address range to use for services
# NOTE: 172.18.0.0/16 overlaps the node network (172.18.41.x); on a real
# cluster, choose a service range that does not overlap host IPs.

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.18.0.0/16"

     

# default admission control policies

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

     

# Add your own!

KUBE_API_ARGS=""

  

[root@k8s-master ~]# cp /etc/kubernetes/config /etc/kubernetes/config.bak

[root@k8s-master ~]# >/etc/kubernetes/config

[root@k8s-master ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

   

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

   

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

   

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://172.18.41.205:8080"
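The shared config can likewise be generated from a single `MASTER_IP` variable so the apiserver address is defined in one place (a sketch; `KUBE_CONF` is parameterized -- on the real master it is /etc/kubernetes/config):

```shell
#!/bin/sh
# Sketch: write the active settings of /etc/kubernetes/config.
MASTER_IP=172.18.41.205
KUBE_CONF=${KUBE_CONF:-$(mktemp)}   # /etc/kubernetes/config on the master

cat > "$KUBE_CONF" <<EOF
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://${MASTER_IP}:8080"
EOF

grep KUBE_MASTER "$KUBE_CONF"
```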

  

Enable the services at boot and start them

[root@k8s-master ~]# systemctl enable kube-apiserver.service

[root@k8s-master ~]# systemctl start kube-apiserver.service

[root@k8s-master ~]# systemctl enable kube-controller-manager.service

[root@k8s-master ~]# systemctl start kube-controller-manager.service

[root@k8s-master ~]# systemctl enable kube-scheduler.service

[root@k8s-master ~]# systemctl start kube-scheduler.service
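Once the services are up, the API server's /healthz endpoint on the insecure port 8080 configured above gives a quick liveness check; a healthy apiserver returns the literal body `ok`. A small sketch (the `check_healthz` helper is illustrative, not part of Kubernetes):

```shell
#!/bin/sh
# Sketch: interpret a /healthz response body. A healthy apiserver
# returns the literal string "ok".
check_healthz() {
  [ "$1" = "ok" ] && echo "apiserver healthy" || echo "apiserver NOT healthy"
}

# On the real master (assumes curl and a reachable apiserver):
#   check_healthz "$(curl -s http://172.18.41.205:8080/healthz)"
check_healthz "ok"          # prints: apiserver healthy
check_healthz "unhealthy"   # prints: apiserver NOT healthy
```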
