Building a Microservices Lab Environment (4): Kubernetes Cluster

1. Kubernetes Installation

Installing a single-master Kubernetes cluster is almost fully automated. The catch is that the packages and images are hosted on Google's servers and are hard to download. So for the installation, the notes below are recommended:

1.1 Installation Environment

We will prepare three virtual machines, for example:

hostname      ip
k8s-master1   192.168.56.150
k8s-worker1   192.168.56.151
k8s-worker2   192.168.56.152

* The first NIC on each machine is NAT; the ip above belongs to the second NIC.
* OS: CentOS 7 core (kernel 3.10+).
* Memory and CPU: 1 GB RAM, 1 CPU.

1.2 Goals

  • Install a secure Kubernetes cluster on the machines and understand the components involved
  • Install a pod network on the cluster so that application components (pods) can communicate with each other
  • Install a sample microservices application (Sock Shop) on the cluster

1.3 Preparing the Environment: RPMs

Install the following packages on all machines:

* docker: the container runtime Kubernetes depends on. v1.12 is recommended.
* kubelet: the most essential component of Kubernetes. It runs on every machine in the cluster and manages pods and containers.
* kubectl: the CLI for controlling the running cluster. Only needed on the management machine, but it can be used on other nodes as well.
* kubeadm: the CLI for installing and managing the cluster.

Installing docker

curl https://releases.rancher.com/install-docker/1.12.sh | sh

systemctl start docker && systemctl enable docker
systemctl status docker

Getting the RPM packages without a proxy

  • Blog 1: provides a Baidu Pan download of prebuilt v1.6.2 packages
  • Blog 2: build them yourself

It is recommended to git clone the release repository, kubernetes/release.

The git mirror of the release repository provides no version branches; the current latest release is v1.6.3 (the latest source version is v1.6.4).

Installing the RPMs

scp the files from the built x86_64 directory to each CentOS machine to be installed, then run the following inside the x86_64 directory:

setenforce 0
yum localinstall *.rpm
systemctl enable kubelet && systemctl start kubelet
  • Per the official docs, setenforce 0 is needed so that containers have permission to access the host filesystem
  • kubelet will not run normally yet at this point; this is not a failure, it is waiting for kubeadm's configuration below

Modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as follows:

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_YOUR_ARGS=--pod-infra-container-image=pmlpml/pause-amd64:3.0"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS $KUBELET_YOUR_ARGS

Notes:
(1) an image location that is actually reachable has been added: --pod-infra-container-image tells kubelet to pull the pause image from pmlpml/pause-amd64:3.0;
(2) the cgroup configuration has been removed; see: https://github.com/kubernetes/release/issues/306

Other work includes:
Some users on RHEL/CentOS 7 have reported incorrect routing caused by iptables being bypassed. (3) Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration, e.g.:

Modify /usr/lib/sysctl.d/00-system.conf, changing net.bridge.bridge-nf-call-iptables to 1, and then update the current kernel state:

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
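The edit to 00-system.conf can also be scripted so the setting survives a reboot. `enable_bridge_nf` below is a hypothetical helper, not part of the official docs; it edits the existing line if present, otherwise appends one:

```shell
# Hypothetical helper: set net.bridge.bridge-nf-call-iptables = 1 in the
# given sysctl config file, editing the existing line or appending a new one.
enable_bridge_nf() {
  local conf="$1"
  if grep -q '^net\.bridge\.bridge-nf-call-iptables' "$conf"; then
    sed -i 's|^net\.bridge\.bridge-nf-call-iptables.*|net.bridge.bridge-nf-call-iptables = 1|' "$conf"
  else
    echo 'net.bridge.bridge-nf-call-iptables = 1' >> "$conf"
  fi
}
```

Usage: `enable_bridge_nf /usr/lib/sysctl.d/00-system.conf` followed by `sysctl --system` applies the setting persistently.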

(4) To ensure that hostname -i returns the address of the cluster NIC, add the host's address to /etc/hosts, for example:

echo "192.168.56.150 k8s-master1" >> /etc/hosts

Most of the above is covered in the Limitations section of the official documentation. Ignore it and you will experience the notorious "pitfalls"; losing several hours to them is not unusual.

Finally, (5) reload the service configuration:

systemctl daemon-reload

1.4 Preparing the Environment: Images

Starting the Kubernetes cluster requires the following images:

gcr.io/google_containers/kube-controller-manager-amd64:v1.6.3
gcr.io/google_containers/kube-apiserver-amd64:v1.6.3
gcr.io/google_containers/kube-scheduler-amd64:v1.6.3
gcr.io/google_containers/kube-proxy-amd64:v1.6.3

gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/pause-amd64:3.0

gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1

gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
quay.io/coreos/flannel:v0.7.1-amd64

These are, in order: the kube cluster services; the state store and pod management; the DNS service; and the add-ons (container network and volumes) and monitoring service. In 1.6.x only the versions of the kube cluster service images change. See the official installation documentation for descriptions of these images.

Where to download them

  • Blog 1: I have already prepared them; they are on Aliyun.
  • Blog 2: transfer them yourself via git and Docker Hub. Strongly recommended: you will need to do this for many more images later, and Docker Hub is reasonably fast.

Downloading the images

To avoid downloading images during installation (it will look like the machine has frozen), pre-pull them with the following script:

images=(kube-controller-manager-amd64:v1.6.3 kube-apiserver-amd64:v1.6.3 kube-scheduler-amd64:v1.6.3 kube-proxy-amd64:v1.6.3 \
etcd-amd64:3.0.17 pause-amd64:3.0 \
k8s-dns-sidecar-amd64:1.14.1 k8s-dns-kube-dns-amd64:1.14.1 k8s-dns-dnsmasq-nanny-amd64:1.14.1)

for imageName in ${images[@]} ; do
  docker pull pmlpml/$imageName
  docker tag pmlpml/$imageName gcr.io/google_containers/$imageName
  docker rmi pmlpml/$imageName
done

docker pull pmlpml/flannel:v0.7.1-amd64
docker tag pmlpml/flannel:v0.7.1-amd64 quay.io/coreos/flannel:v0.7.1-amd64
docker rmi pmlpml/flannel:v0.7.1-amd64

docker pull pmlpml/kubernetes-dashboard-amd64:v1.6.0
docker tag pmlpml/kubernetes-dashboard-amd64:v1.6.0 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
docker rmi pmlpml/kubernetes-dashboard-amd64:v1.6.0
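After pre-pulling, it is worth confirming that every required tag is present before running kubeadm. `verify_images` below is a hypothetical check, not part of the official tooling; it compares the required list against `docker images` output fed on stdin:

```shell
# Hypothetical check: read `docker images --format '{{.Repository}}:{{.Tag}}'`
# output from stdin and report any required image that is missing.
verify_images() {
  local have img missing=0
  have=$(cat)
  for img in \
    gcr.io/google_containers/kube-controller-manager-amd64:v1.6.3 \
    gcr.io/google_containers/kube-apiserver-amd64:v1.6.3 \
    gcr.io/google_containers/kube-scheduler-amd64:v1.6.3 \
    gcr.io/google_containers/kube-proxy-amd64:v1.6.3 \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    gcr.io/google_containers/pause-amd64:3.0 \
    quay.io/coreos/flannel:v0.7.1-amd64; do
    if ! grep -qxF "$img" <<<"$have"; then
      echo "missing: $img"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then echo "all images present"; fi
}
```

Usage: `docker images --format '{{.Repository}}:{{.Tag}}' | verify_images`.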

Note: the worker machines do not need the container-network, monitoring, and similar service images.

Note: at this point, everything that must be installed on every cluster machine is done. If you are using virtual machines, this is the best moment to clone them; then set each clone's hostname and NICs, and use hostname -i to verify that each host's network is correct.

1.5 Installing the Master

Initialize the master with the kubeadm init command:

export KUBE_REPO_PREFIX="pmlpml"
export KUBE_ETCD_IMAGE="pmlpml/etcd-amd64:3.0.17"
kubeadm init --apiserver-advertise-address=192.168.56.150 --kubernetes-version=v1.6.3 --pod-network-cidr=10.96.0.0/12

Notes:
* The exported environment variables specify where kubeadm gets the images it needs. See the environment-variable section of the kubeadm reference manual for details.
* kubeadm init has three parameters that commonly need attention:
* --apiserver-advertise-address: required if your cluster NIC is not the machine's first NIC. Without it, worker nodes cannot join.
* --kubernetes-version: the version of the core kube containers. It defaults to v1.6.0; specifying it explicitly is strongly recommended.
* --pod-network-cidr: tied to the KUBELET_DNS_ARGS declared earlier in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; change them together.

It takes about 1-3 minutes and ends with output like:

[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 41.801260 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 2.502383 seconds
[token] Using token: 555ee0.264ca9faa30bc701
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token df28c6.6299a66518cd9f0a 192.168.56.150:6443

Note: save the final kubeadm join --token df28c6.6299a66518cd9f0a 192.168.56.150:6443 line; every node that joins the cluster later needs this token.

Then copy /etc/kubernetes/admin.conf from the master node to ~/.kube/config on your local machine:

cp /etc/kubernetes/admin.conf ~/.kube/config

You can now list the nodes with kubectl get nodes.

1.6 Installing the Overlay-Network Add-on

The official site lists many network add-ons, each with its own strengths. Here we use flannel as the example.

Create the security policy file kube-flannel-rbac.yml:

# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system

The pod service configuration file kube-flannel-ds.yml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.96.0.0/12",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: pmlpml/flannel:v0.7.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: pmlpml/flannel:v0.7.1-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

These files could in principle be taken straight from the official repository, but because you must change the image location (in two places), and possibly the network configuration, you have to create and edit your own copies.

Start the services:

kubectl create -f kube-flannel-rbac.yml
kubectl create -f kube-flannel-ds.yml

Check with kubectl get all --all-namespaces that dns and flannel have both started normally; the result looks like:

NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   po/etcd-k8s-master1                      1/1       Running   0          1h
kube-system   po/kube-apiserver-k8s-master1            1/1       Running   0          1h
kube-system   po/kube-controller-manager-k8s-master1   1/1       Running   0          1h
kube-system   po/kube-dns-61458177-hxn3j               3/3       Running   0          1h
kube-system   po/kube-flannel-ds-dpqx7                 2/2       Running   0          53m
kube-system   po/kube-proxy-3sks9                      1/1       Running   0          1h
kube-system   po/kube-scheduler-k8s-master1            1/1       Running   0          1h

NAMESPACE     NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes   10.96.0.1    <none>        443/TCP         1h
kube-system   svc/kube-dns     10.96.0.10   <none>        53/UDP,53/TCP   1h

NAMESPACE     NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/kube-dns   1         1         1            1           1h

NAMESPACE     NAME                   DESIRED   CURRENT   READY     AGE
kube-system   rs/kube-dns-61458177   1         1         1         1h

1.7 Joining Nodes

Take one of the machines cloned at the end of step 1.4; change its ip, hostname, and /etc/hosts. Then run:

export KUBE_REPO_PREFIX="pmlpml"
kubeadm join --token df28c6.6299a66518cd9f0a 192.168.56.150:6443

This returns:

... ...
Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Deploy k8s-worker2 in the same way.

1.8 Deploying the Dashboard Service

As before, write the service authorization and configuration files. dashboard-rbac.yml:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin 
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system

Service configuration: dashboard-dc.yml

# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.6 (RBAC enabled).
#
# Example usage: kubectl create -f <this_file>

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: pmlpml/kubernetes-dashboard-amd64:v1.6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

Note: change the image location here as well.

Start the services:

kubectl create -f dashboard-rbac.yml
kubectl create -f dashboard-dc.yml

Check the service's status:

kubectl describe --namespace kube-system service kubernetes-dashboard

Look for NodePort: in the output. But what is the node's ip address? Use the following commands to find which node the pod runs on, for example:

kubectl get pods --namespace=kube-system
kubectl describe pods kubernetes-dashboard-???????? --namespace=kube-system
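Reaching the dashboard then takes both the NodePort and a node ip; the port can be pulled out of the describe output mechanically. `get_nodeport` is a hypothetical helper for that:

```shell
# Hypothetical helper: extract the port number from the "NodePort:" line
# of `kubectl describe service` output (read from stdin).
get_nodeport() {
  awk '/^NodePort:/ {sub(/\/TCP$/, "", $NF); print $NF; exit}'
}
```

Usage: `port=$(kubectl describe --namespace kube-system service kubernetes-dashboard | get_nodeport)`, then open http://<node-ip>:$port in a browser.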

1.9 Deploying a Microservices Application
