Setting Up a Kubernetes Cluster

I. Platform Environment
OS: CentOS 7.0
II. Environment Installation
1. System installation and firewall settings
 Install CentOS 7.0 on all hosts.
 Disable the firewall:
  # systemctl disable firewalld
  # systemctl stop firewalld
2. Installing and configuring etcd
   (1) Install etcd
    Install etcd on the master node:
    # yum install etcd
   (2) Configure etcd
 In /etc/etcd/etcd.conf, set:
 ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
 ETCD_ADVERTISE_CLIENT_URLS="http://192.168.153.142:2379"  (IP and port of the host running etcd)
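For reference, the resulting /etc/etcd/etcd.conf might look like the minimal fragment below. ETCD_NAME and ETCD_DATA_DIR are assumed to keep the CentOS package defaults; adjust the advertise IP to your own etcd host.

```shell
# /etc/etcd/etcd.conf -- minimal single-node sketch (values assumed from this guide)
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.153.142:2379"
```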
   (3) Start etcd
 # systemctl daemon-reload
 # systemctl enable etcd.service
 # systemctl start etcd.service
   (4) Verify that etcd started correctly
 # etcdctl cluster-health
 If the output ends with "cluster is healthy", etcd started correctly.
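Alternatively, etcd's HTTP health endpoint can be queried directly (on the live host: `curl http://192.168.153.142:2379/health`). The sketch below parses a canned sample of the healthy response so it runs without a cluster:

```shell
# Sample response from etcd's /health endpoint on a healthy cluster.
# On the real host you would obtain it with:
#   sample=$(curl -s http://192.168.153.142:2379/health)
sample='{"health": "true"}'
status=$(echo "$sample" | grep -q 'true' && echo healthy || echo unhealthy)
echo "$status"
```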
3. Installing Kubernetes (all nodes)
   # yum install kubernetes
4. Configuring the master node
   (1) Configure /etc/kubernetes/config
 # logging to stderr means we get it in the systemd journal
 KUBE_LOGTOSTDERR="--logtostderr=true"
 # journal message level, 0 is debug
 KUBE_LOG_LEVEL="--v=0"
 # Should this cluster be allowed to run privileged docker containers
 KUBE_ALLOW_PRIV="--allow-privileged=false"
 # How the controller-manager, scheduler, and proxy find the apiserver
 # here: the master node's IP and the apiserver's listening port
 KUBE_MASTER="--master=http://192.168.153.142:8080"
   (2) Configure /etc/kubernetes/apiserver
 ###
 # kubernetes system config
 #
 # The following values are used to configure the kube-apiserver
 #
 # The address on the local server to listen to.
 KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
 # The port on the local server to listen on.
 # KUBE_API_PORT="--port=8080" 
 # Port minions listen on
 # KUBELET_PORT="--kubelet-port=10250"
 # Comma separated list of nodes in the etcd cluster
 # here: the IP and listening port of the host running etcd
 KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.153.142:2379"
 # Address range to use for services
 KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 # default admission control policies
 KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
 # Add your own!
 KUBE_API_ARGS=""
   (3) Configure /etc/kubernetes/controller-manager
 No changes needed.
   (4) Configure /etc/kubernetes/scheduler
 No changes needed.
   (5) Start the Kubernetes services
 # systemctl daemon-reload
 # systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service 
 # systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
5. Configuring the worker nodes
   (1) Configure /etc/kubernetes/config
 ###
 # kubernetes system config
 #
 # The following values are used to configure various aspects of all
 # kubernetes services, including
 #
 #   kube-apiserver.service
 #   kube-controller-manager.service
 #   kube-scheduler.service
 #   kubelet.service
 #   kube-proxy.service
 # here: the IP and listening port of the host running etcd
 KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.153.142:2379"
 # logging to stderr means we get it in the systemd journal
 KUBE_LOGTOSTDERR="--logtostderr=true"
 # journal message level, 0 is debug
 KUBE_LOG_LEVEL="--v=0"
 # Should this cluster be allowed to run privileged docker containers
 KUBE_ALLOW_PRIV="--allow-privileged=false"
 # How the controller-manager, scheduler, and proxy find the apiserver
 # here: the master node's IP and the apiserver's listening port
 KUBE_MASTER="--master=http://192.168.153.142:8080"
   (2) Configure /etc/kubernetes/kubelet
 ###
 # kubernetes kubelet (minion) config
 # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
 KUBELET_ADDRESS="--address=0.0.0.0"
 # The port for the info server to serve on
 # KUBELET_PORT="--port=10250"
 # You may leave this blank to use the actual hostname
 # set this node's name
 KUBELET_HOSTNAME="--hostname-override=192.168.153.141"
 # location of the api-server: the master node's IP and the apiserver's listening port
 KUBELET_API_SERVER="--api-servers=http://192.168.153.142:8080"
 # pod infrastructure container
 KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
 # Add your own!
 KUBELET_ARGS=""
   (3) Start the Kubernetes services
 # systemctl daemon-reload
 # systemctl enable docker.service kubelet.service kube-proxy.service
 # systemctl start docker.service kubelet.service kube-proxy.service
6. Verifying the cluster
        Run the following command on the master:
 # kubectl get nodes
 NAME              STATUS    AGE
 192.168.153.143   Ready     19m
        A STATUS of Ready means the node has registered successfully and is available.
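The Ready check can also be scripted. This sketch counts Ready nodes from the sample output above; on a live master you would pipe `kubectl get nodes` into it instead:

```shell
# Count nodes whose STATUS column is Ready.
# Sample output copied from this guide; on the master, replace it with:
#   nodes_output=$(kubectl get nodes)
nodes_output='NAME              STATUS    AGE
192.168.153.143   Ready     19m'
ready_count=$(echo "$nodes_output" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "$ready_count node(s) Ready"
```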
Configuring the Cluster Network (with flannel)
1. Install flannel
     flannel must be installed on every node:
 # yum -y install flannel
2. Configure flannel
 # gedit /etc/sysconfig/flanneld
 # Flanneld configuration options 
 # etcd url location.  Point this to the server where etcd runs
 # here: the IP and listening port of the host running etcd
 FLANNEL_ETCD_ENDPOINTS="http://192.168.153.142:2379"
 # etcd config key.  This is the configuration key that flannel queries
 # For address range assignment
 FLANNEL_ETCD_PREFIX="/coreos.com/network"
 # Any additional options that you want to pass
 #FLANNEL_OPTIONS=""
3. Add a network configuration record to etcd
 # etcdctl set /coreos.com/network/config '{"Network": "10.1.0.0/16"}'
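flanneld will carve a per-node /24 subnet out of this 10.1.0.0/16 Network, and each node's docker0 gets an address inside its /24. The pod network must also not overlap the service range (10.254.0.0/16 in the apiserver config). A quick sketch of both checks, using the subnets that appear in this guide (the /16-prefix comparison below is a simplification that only works for /16 networks):

```shell
# Pod network configured in etcd, and the service range from the apiserver config.
pod_network="10.1.0.0/16"
service_range="10.254.0.0/16"
# A per-node subnet assigned by flanneld (the one shown later in this guide).
node_subnet="10.1.15.0/24"

# For /16 prefixes, comparing the first two octets is enough.
first_two() { echo "$1" | cut -d. -f1-2; }

[ "$(first_two "$node_subnet")" = "$(first_two "$pod_network")" ] \
  && echo "node subnet is inside the pod network"
[ "$(first_two "$pod_network")" != "$(first_two "$service_range")" ] \
  && echo "pod network does not overlap the service range"
```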
4. flannel will take over the docker0 bridge; if the Docker service is already running, stop it:
 # systemctl stop docker
5. Start the flanneld service
 # systemctl start flanneld
6. Set the IP address of the docker0 bridge
 # source /run/flannel/subnet.env
 # ifconfig docker0 ${FLANNEL_SUBNET}
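The `source` step works because flanneld writes /run/flannel/subnet.env when it starts. A typical file, with values assumed to match the 10.1.15.0/24 subnet shown in the next step, looks like the sketch below (written to /tmp here so it can be run anywhere):

```shell
# A typical /run/flannel/subnet.env written by flanneld (values assumed
# to match the subnet shown in this guide).
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.15.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
EOF
source /tmp/subnet.env
echo "docker0 will get: ${FLANNEL_SUBNET}"
```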
7. Check that docker0's IP address belongs to flannel0's subnet
 # ip addr
 docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
     link/ether 02:42:e1:44:ac:f2 brd ff:ff:ff:ff:ff:ff
     inet 10.1.15.1/24 brd 10.1.15.255 scope global docker0
       valid_lft forever preferred_lft forever
 flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state  UNKNOWN qlen 500
     link/none
     inet 10.1.15.0/16 scope global flannel0
        valid_lft forever preferred_lft forever
8. Restart the Docker service
 # systemctl restart docker
9. Verify the configuration
      Use ping to check that the docker0 bridges on different nodes can reach each other; if the pings succeed, the configuration is complete.
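The addresses to ping can be derived from each node's flannel subnet, since docker0 takes the .1 address of the node's /24 as set in step 6. A sketch (the second subnet below is hypothetical, for illustration):

```shell
# Per-node flannel subnets: the first is from this guide, the second is hypothetical.
subnets="10.1.15.0/24 10.1.20.0/24"
for s in $subnets; do
  # docker0 takes the .1 address of the node's /24
  ip=$(echo "$s" | cut -d/ -f1 | sed 's/\.0$/.1/')
  echo "ping -c 1 $ip"   # run this from another node
done
```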