Kubernetes Installation

1. Prepare three virtual machines and install CentOS 7.4 on each.

[root@kub1 member]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.44.111 kub1
192.168.44.112 kub2
192.168.44.113 kub3

192.168.44.111 serves as the master and also as a node.
192.168.44.112 and 192.168.44.113 serve as nodes.

2. Install etcd on all three machines:

yum install -y etcd

3. After installation, edit the configuration: vi /etc/etcd/etcd.conf
Below is the file for 192.168.44.111. The other two machines are configured similarly: copy etcd.conf from 192.168.44.111 to them, then change the IPs (replace 111 with 112 or 113 as appropriate) and give each node its own ETCD_NAME (e.g. ETCD_NAME="etcd-2", ETCD_NAME="etcd-3"). A sed sketch for this follows the file below.

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.44.111:2380"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.44.111:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd-1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.44.111:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.44.111:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd-1=http://192.168.44.111:2380,etcd-2=http://192.168.44.112:2380,etcd-3=http://192.168.44.113:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
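
As a convenience, here is a minimal sed sketch for adapting the copied file on kub2 (adjust the name and the last octet for kub3). ETCD_INITIAL_CLUSTER must keep all three IPs, so that line is excluded from the substitution:

# On kub2: change the node name, and replace this node's own IP
# everywhere except on the ETCD_INITIAL_CLUSTER line.
sed -i -e 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' \
       -e '/ETCD_INITIAL_CLUSTER=/!s/192.168.44.111/192.168.44.112/g' \
       /etc/etcd/etcd.conf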

4. Start etcd on all three machines: systemctl start etcd

5. List the etcd cluster members: etcdctl member list

[root@kub1 member]# etcdctl member list
3dc2f745012d601f: name=etcd-3 peerURLs=http://192.168.44.113:2380 clientURLs=http://192.168.44.113:2379 isLeader=true
84c01ef6bb2359f5: name=etcd-2 peerURLs=http://192.168.44.112:2380 clientURLs=http://192.168.44.112:2379 isLeader=false
a96abb0b319a06cb: name=etcd-1 peerURLs=http://192.168.44.111:2380 clientURLs=http://192.168.44.111:2379 isLeader=false
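
You can also check overall cluster health with the v2 etcdctl shipped by the CentOS etcd package:

etcdctl cluster-health

Each member should report "is healthy", and the last line should read "cluster is healthy".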

If you need to reinitialize the etcd cluster, do the following:

1. Stop etcd on all nodes: systemctl stop etcd
2. On every etcd node, delete the contents of the directory configured as ETCD_DATA_DIR; here that is ETCD_DATA_DIR="/var/lib/etcd/default.etcd".
3. Start etcd on all nodes: systemctl start etcd
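
If passwordless SSH is set up between the machines, these three steps can be driven from kub1 in one go (a sketch, assuming the hostnames from /etc/hosts and the default data directory):

# stop all members, wipe the data dirs, then restart the cluster
for h in kub1 kub2 kub3; do ssh $h 'systemctl stop etcd'; done
for h in kub1 kub2 kub3; do ssh $h 'rm -rf /var/lib/etcd/default.etcd/*'; done
for h in kub1 kub2 kub3; do ssh $h 'systemctl start etcd'; done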

6. Install the services on the master (kub1):

yum install -y  kubernetes-master kubernetes-node  ntp flannel docker

7. Install the services on the other two machines (kub2, kub3). The difference from the master is that kubernetes-master is omitted:

yum install -y kubernetes-node ntp flannel docker

Next, edit the configuration files.

This part is configured only on the master.

8. vi /etc/kubernetes/apiserver
Edit this only on the master (kub1) node; the apiserver service only exists where kubernetes-master is installed.

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.44.111:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"

# Add your own!
KUBE_API_ARGS=""
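
Once the services are running (step 15), a quick way to verify that the apiserver answers on the insecure port:

curl http://192.168.44.111:8080/version

This should return a small JSON document with the Kubernetes version.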

9. vi /etc/kubernetes/scheduler
Again, only on the master.

# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"

This part is configured on all three machines.

10. vi /etc/kubernetes/config
Edit this on all three machines (kub1, kub2, kub3). The key setting is KUBE_MASTER, which must point to kub1's IP since that machine is the master; this way all three machines can reach the master on port 8080.

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.44.111:8080"
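
Because every machine must be able to reach the master on port 8080, it is worth checking from kub2 and kub3 once the apiserver is up:

curl -s http://192.168.44.111:8080/healthz
# expected output: ok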

11. vi /etc/sysconfig/flanneld
FLANNEL_ETCD_PREFIX is the etcd key prefix under which the network configuration is stored;
that is, the Kubernetes network configuration lives under "/atomic.io/network" in etcd.
Configure this on all three machines.

# Flanneld configuration options  
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.44.111:2379,http://192.168.44.112:2379,http://192.168.44.113:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
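
After flanneld starts (step 15), it writes the subnet it leased to /run/flannel/subnet.env, and the docker service picks those variables up so containers get addresses in the right range. On kub1 the output will look roughly like this (the exact subnet and MTU depend on your lease and backend):

[root@kub1 ~]# cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.31.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false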

12. vi /etc/kubernetes/proxy
Configure this on all three machines.

# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=0.0.0.0"

13. vi /etc/kubernetes/kubelet
Configure this on all three machines.
Note that KUBELET_ADDRESS and KUBELET_HOSTNAME must be set to each machine's own IP; a sed sketch for adapting the file on the other nodes follows the file below.

[root@kub1 ~]# cat /etc/kubernetes/kubelet 
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.44.111"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.44.111"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.44.111:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
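
On kub2 and kub3 the same file needs that machine's own IP. A minimal sed sketch for kub2 (KUBELET_API_SERVER must keep pointing at the master, so only the two lines naming the local node are touched):

# On kub2: point --address and --hostname-override at this node's IP
sed -i -e 's/--address=192.168.44.111/--address=192.168.44.112/' \
       -e 's/--hostname-override=192.168.44.111/--hostname-override=192.168.44.112/' \
       /etc/kubernetes/kubelet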

14. Configure the network in etcd.
The "/atomic.io/network" key is the prefix configured above in /etc/sysconfig/flanneld; the config key is set beneath it.
'{"Network": "172.16.0.0/16"}' means that all pods in the Kubernetes cluster get addresses from the 172.16.0.0/16 network.

etcdctl set /atomic.io/network/config '{"Network": "172.16.0.0/16"}'

Then set each machine's subnet (all of these subnets fall within 172.16.0.0/16).
For example, the first line below means that the machine 192.168.44.111 (kub1) uses the subnet 172.16.31.0/24, i.e., pods on that machine get addresses from 172.16.31.0/24.

etcdctl set /atomic.io/network/subnets/172.16.31.0-24 '{"PublicIP": "192.168.44.111"}'
etcdctl set /atomic.io/network/subnets/172.16.32.0-24 '{"PublicIP": "192.168.44.112"}'
etcdctl set /atomic.io/network/subnets/172.16.33.0-24 '{"PublicIP": "192.168.44.113"}'
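
You can verify the keys afterwards:

etcdctl get /atomic.io/network/config
etcdctl ls /atomic.io/network/subnets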

15. Start the services on the master (kub1):

for i in flanneld kube-proxy kubelet docker kube-apiserver kube-controller-manager kube-scheduler;do systemctl restart $i; systemctl enable $i;done
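
Once these are up, you can check the control-plane components from kub1:

kubectl get componentstatuses

scheduler, controller-manager, and etcd should all show Healthy.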

Start the services on the other two machines (kub2, kub3):

for i in flanneld kube-proxy kubelet docker;do systemctl restart $i;systemctl enable $i;systemctl status $i ;done
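
To confirm that flannel networking came up on each node, check that flannel0 (the interface created by the default udp backend) and docker0 both got addresses inside that machine's subnet:

ip -4 addr show flannel0
ip -4 addr show docker0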

16. View the nodes:

[root@kub1 ~]# kubectl get nodes
NAME             STATUS    AGE
192.168.44.111   Ready     1h
192.168.44.112   Ready     1h
192.168.44.113   Ready     1h

17. Run a pod.
With --replicas=3, three pods are created, and the scheduler will typically spread them so that each machine runs one.

kubectl run nginx --image=nginx --port=80  --replicas=3
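
Check the pods and which node each one landed on:

kubectl get pods -o wide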

While pulling the image, the pod may fail with the following error:

Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

Advice found online says yum install -y *rhsm* fixes this,
but I found that the error persisted.
The following steps resolved it:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm

rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
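
To confirm the fix, check that the certificate is in place and try pulling the pause image directly:

ls -l /etc/rhsm/ca/redhat-uep.pem
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest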

It may also simply be a network issue: pulling the image can be slow.
