First experiment: setting up a small Kubernetes cluster



master: 192.168.79.31 (also acts as a node)
node:   192.168.79.32


1. On both the master and the node, run: yum install etcd kubernetes flannel net-tools -y


2. Configure the etcd cluster. Note that a two-member etcd cluster has no failure tolerance (quorum requires both members), so this is a test setup rather than true high availability. The configuration file is /etc/etcd/etcd.conf:


master:
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.79.31:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.79.31:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.79.31:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.79.31:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.79.31:2380,etcd2=http://192.168.79.32:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"


node:
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.79.32:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.79.32:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.79.32:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.79.32:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.79.31:2380,etcd2=http://192.168.79.32:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
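The ETCD_INITIAL_CLUSTER value must be identical on every member, and each host's ETCD_NAME must match one of the entries in it. A minimal sketch for sanity-checking the value before starting etcd (the variable below is copied from the config above; pure bash, no running etcd needed):

```shell
# Parse ETCD_INITIAL_CLUSTER into member names and peer URLs so a typo
# (mismatched name, wrong port) is easy to spot before starting etcd.
INITIAL_CLUSTER="etcd1=http://192.168.79.31:2380,etcd2=http://192.168.79.32:2380"
IFS=',' read -ra members <<< "$INITIAL_CLUSTER"
for m in "${members[@]}"; do
  echo "member=${m%%=*} peer=${m#*=}"
done
```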


Start etcd on the master first, then on the node:
systemctl start etcd


Check the etcd cluster health:


[root@localhost etcd]# etcdctl cluster-health
member 43507d214234bd6 is healthy: got healthy result from http://127.0.0.1:2379
member 9f23e693b7a08af is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy


[root@localhost etcd]# etcdctl member list
43507d214234bd6: name=etcd1 peerURLs=http://192.168.79.31:2380 clientURLs=http://127.0.0.1:2379,http://192.168.79.31:2379 isLeader=true
9f23e693b7a08af: name=etcd2 peerURLs=http://192.168.79.32:2380 clientURLs=http://127.0.0.1:2379,http://192.168.79.32:2379 isLeader=false
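When troubleshooting, it helps to pull the current leader out of the member list. A small sketch, with the sample output above inlined as a string so it runs without a live cluster (the sed pattern assumes the etcd2-era output format shown here):

```shell
# Extract the leader's name from `etcdctl member list` output
# (sample text from the cluster above, hard-coded for illustration).
list='43507d214234bd6: name=etcd1 peerURLs=http://192.168.79.31:2380 clientURLs=http://127.0.0.1:2379,http://192.168.79.31:2379 isLeader=true
9f23e693b7a08af: name=etcd2 peerURLs=http://192.168.79.32:2380 clientURLs=http://127.0.0.1:2379,http://192.168.79.32:2379 isLeader=false'
leader=$(printf '%s\n' "$list" | grep 'isLeader=true' | sed 's/.*name=\([^ ]*\).*/\1/')
echo "leader: $leader"
```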


3. Configure flannel on both hosts


Configuration file /etc/sysconfig/flanneld:


FLANNEL_ETCD_ENDPOINTS="http://192.168.79.31:2379,http://192.168.79.32:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"


On the master, write the flannel network configuration into etcd:
[root@localhost etcd]# etcdctl set /atomic.io/network/config '{ "Network": "10.1.0.0/16" }'
{ "Network": "10.1.0.0/16" }


Start flanneld on both hosts:
systemctl start flanneld


Check with ip addr show:


master:
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.1.3.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::ae1d:b93f:e015:78d4/64 scope link flags 800
       valid_lft forever preferred_lft forever


node:
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.1.55.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::1293:d18e:e750:d84e/64 scope link flags 800
       valid_lft forever preferred_lft forever


4. Configure Docker


Aliyun registry mirror acceleration (the mirror URL is obtained from your Aliyun console; note that JSON does not allow comments, so the remark must stay outside the file):
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
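Because daemon.json must be strict JSON, it is worth validating a candidate file before restarting Docker. A hedged sketch (written to /tmp here so it is safe to run anywhere; python3's stdlib json.tool serves as the validator):

```shell
# Validate a daemon.json candidate; python3 -m json.tool rejects invalid
# JSON such as inline "#" comments before it can break the Docker daemon.
cat > /tmp/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool < /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```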


Start and then stop Docker (so that the docker0 bridge can be reconfigured with the flannel subnet):
systemctl start docker
systemctl stop docker


Set the IP address of the docker0 bridge to the flannel-assigned subnet:
/usr/libexec/flannel/mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
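mk-docker-opts.sh derives its variables from the lease that flanneld writes to /run/flannel/subnet.env. An illustrative copy of that file (values invented to match the 10.1.0.0/16 network and the master's 10.1.3.0/24 lease above; the real file is generated by flanneld):

```shell
# Illustrative subnet.env; sourcing it exposes the variables used to
# configure docker0 (FLANNEL_SUBNET becomes the bridge address).
cat > /tmp/subnet.env << 'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.3.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF
. /tmp/subnet.env
echo "docker0 -> ${FLANNEL_SUBNET} (mtu ${FLANNEL_MTU})"
```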


Start Docker again:
systemctl start docker


docker0 status:


master:
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.1.3.1  netmask 255.255.255.0  broadcast 0.0.0.0


node:
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.1.55.1  netmask 255.255.255.0  broadcast 0.0.0.0


Ping the node's docker0 from the master:
[root@localhost etcd]# ping 10.1.55.1
PING 10.1.55.1 (10.1.55.1) 56(84) bytes of data.
64 bytes from 10.1.55.1: icmp_seq=1 ttl=62 time=0.923 ms
64 bytes from 10.1.55.1: icmp_seq=2 ttl=62 time=0.581 ms


Check the flannel subnet leases recorded in etcd:


[root@localhost etcd]# etcdctl ls /atomic.io/network/subnets
/atomic.io/network/subnets/10.1.3.0-24
/atomic.io/network/subnets/10.1.55.0-24
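The lease keys are simply each host's /24 subnet with the slash replaced by a dash. The mapping as a one-liner (pure string manipulation, runnable anywhere):

```shell
# Map a flannel subnet to its etcd lease key: "/" becomes "-".
subnet="10.1.3.0/24"
key="/atomic.io/network/subnets/${subnet/\//-}"
echo "$key"
```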


5. Configure the master:


apiserver (/etc/kubernetes/apiserver):
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.79.31:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""


controller-manager (/etc/kubernetes/controller-manager):
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.79.31:8080"


scheduler (/etc/kubernetes/scheduler):
KUBE_SCHEDULER_ARGS="--master=http://192.168.79.31:8080"


kubelet (/etc/kubernetes/kubelet):
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.79.31"
KUBELET_API_SERVER="--api-servers=http://192.168.79.31:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/mritd/pause-amd64:latest"
KUBELET_ARGS=""


proxy (/etc/kubernetes/proxy):
KUBE_PROXY_ARGS="--master=http://192.168.79.31:8080"


Start each service: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.


6. Configure the node:


kubelet (/etc/kubernetes/kubelet):
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.79.32"
KUBELET_API_SERVER="--api-servers=http://192.168.79.31:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/mritd/pause-amd64:latest"
KUBELET_ARGS=""


proxy (/etc/kubernetes/proxy):
KUBE_PROXY_ARGS="--master=http://192.168.79.31:8080"


Start kubelet and kube-proxy.


7. Verify


[root@localhost kubernetes]# kubectl get node
NAME            STATUS    AGE
192.168.79.31   Ready     11m
192.168.79.32   Ready     2m


8. Test
[root@localhost kubernetes]# kubectl run nginx --image=nginx:1.7.9 --replicas=2
deployment "nginx" created


[root@localhost kubernetes]# kubectl get deployment nginx
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     2         0         0            0           16s


[root@localhost kubernetes]# kubectl describe deployment nginx
Events:
  FirstSeen LastSeen Count From SubObjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  49s 49s 1 {deployment-controller } Normal ScalingReplicaSet Scaled up replica set nginx-2298518471 to 2


[root@localhost kubernetes]# kubectl describe replicaset nginx-2298518471
Events:
  FirstSeen LastSeen Count From SubObjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  1m 4s 26 {replicaset-controller } Warning FailedCreate Error creating: No API token found for service account "default", retry after the token is automatically created and added to the service account


The error above was worked around by removing ServiceAccount from the apiserver's --admission-control list and restarting kube-apiserver. (For anything beyond a test cluster, the more correct fix is to configure --service-account-key-file on the apiserver and --service-account-private-key-file on the controller-manager so that default tokens actually get generated.)


Events:
  FirstSeen LastSeen Count From SubObjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  3m 32s 30 {replicaset-controller } Warning FailedCreate Error creating: No API token found for service account "default", retry after the token is automatically created and added to the service account
  27s 27s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: nginx-2298518471-8kmtw
  27s 27s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: nginx-2298518471-24h86


[root@localhost kubernetes]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2298518471-24h86   1/1       Running   0          8m
nginx-2298518471-8kmtw   1/1       Running   0          8m
