Deploying OpenStack Ocata with Kubernetes v1.13.4 and Helm on Ubuntu 16.04

I. Environment

1. This test uses three virtual machines, all of which need Internet access; together they form the Kubernetes cluster (1 master node and 2 worker nodes).
*Note: each Kubernetes cluster host needs at least 16 GB RAM, an 8-core CPU and 50 GB of storage.
2. The operating system on all nodes is Ubuntu 16.04.
3. Versions of the images and related components:
Kubernetes:v1.13.4
kubernetes-cni:0.6.0
docker:18.06.1-0ubuntu1.2~16.04.1
k8s-calico:v3.1.6
k8s-coredns:1.2.6
k8s-etcd:3.2.24
k8s-pause:3.1
helm:2.13.0

II. Preparation

*Note: unless stated otherwise, run the following steps on all nodes.

1. Set the hostnames

On the master node:

vi /etc/hostname 
master

On the worker nodes:

vi /etc/hostname
node1

vi /etc/hostname
node2

Reboot after the change.
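Alternatively (a sketch, assuming systemd manages the hostname, which is the default on Ubuntu 16.04), the same can be done in one command per node:

# run on each node with its own name (master / node1 / node2)
hostnamectl set-hostname master
reboot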

2. Add DNS servers

vi /etc/resolv.conf
nameserver 8.8.8.8
nameserver 114.114.114.114

3. Configure the Aliyun apt mirror (optional)

*Note: this step is optional. Switching to the Aliyun mirror speeds up the preparation work, but you must switch back to the default apt sources before starting the Kubernetes installation.

mv /etc/apt/sources.list /etc/apt/sources.list.bak
vi /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ xenial main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main

deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe

Update the package index:

apt-get update

4. Install the packages required for deployment

apt-get update
apt-get install --no-install-recommends -y \
   ca-certificates \
   git \
   make \
   jq \
   nmap \
   curl \
   uuid-runtime \
   python-pip

*Note: if the following error appears
Traceback (most recent call last):
  File "/usr/bin/pip2", line 5, in <module>
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
fix it with: apt-get install python-pkg-resources python-setuptools --reinstall

5. Enable apt over HTTPS

apt-get install -y apt-transport-https

6. Adjust sudo permissions

vi /etc/sudoers
root    ALL=(ALL) NOPASSWD: ALL

7. Disable swap

swapoff -a
vi /etc/fstab
Comment out the line containing swap.
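If you prefer not to edit /etc/fstab by hand, a small sketch that comments out the swap entries (it assumes the word "swap" only appears on the swap line; a backup is taken first):

swapoff -a
cp /etc/fstab /etc/fstab.bak
sed -i '/\sswap\s/ s/^/#/' /etc/fstab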

8. Add hosts entries

vi /etc/hosts
10.xxx.xxx.xxx master
10.xxx.xxx.xxx node1
10.xxx.xxx.xxx node2

9. Set up SSH trust

*Note: do not forget to add the key to the local node itself as well.

ssh-keygen -t rsa
ssh-copy-id root@10.xxx.xxx.xxx   # repeat so that all three nodes trust each other
mkdir /etc/openstack-helm
cp ~/.ssh/id_rsa /etc/openstack-helm/deploy-key.pem
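For convenience, the key distribution can be done in a short loop (the placeholder IPs stand for the master, node1 and node2 addresses; remember to include the local node):

for host in 10.xxx.xxx.xxx 10.xxx.xxx.xxx 10.xxx.xxx.xxx; do
  ssh-copy-id root@${host}
done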

10. Install Docker

apt-get install docker.io=18.06.1-0ubuntu1.2~16.04.1 -y

11. Download the source code

Clone the official openstack-helm repositories:

cd /opt
git clone https://git.openstack.org/openstack/openstack-helm-infra.git /opt/openstack-helm-infra
git clone https://git.openstack.org/openstack/openstack-helm.git /opt/openstack-helm

12. Download images and binaries

vi pull_k8s_images.sh 
# pull the images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.13.4
# re-tag the images
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.0 gcr.io/google_containers/defaultbackend:1.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 gcr.io/google_containers/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.5 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.5 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.4 gcr.io/google_containers/kube-apiserver:v1.13.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.4 gcr.io/google_containers/kube-controller-manager:v1.13.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.4 gcr.io/google_containers/kube-scheduler:v1.13.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0 gcr.io/kubernetes-helm/tiller:v2.13.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
# remove the original tags
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.13.4

sh pull_k8s_images.sh
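After the script finishes, it is worth confirming that the re-tagged images are present locally:

docker images | grep -E 'gcr.io|k8s.gcr.io'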

Download the following binaries and archives:
mkdir /k8s_images
cd /k8s_images
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/linux/amd64/kubelet
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/linux/amd64/kubeadm
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz 

13. Install Apache (master node only)

The upstream download sites are slow or unreachable from mainland China, so after downloading the required files manually, publish them from a local web server.

apt-get install -y apache2
systemctl restart apache2

Move the files downloaded with wget above into /var/www/html/:

mv /k8s_images/* /var/www/html/
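To verify that Apache is serving the files (the placeholder IP is the master node's address), request one of them from any node; an HTTP 200 response means it is reachable:

curl -I http://10.xxx.xxx.xxx/kubectl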

14. Modify the source code

①vi /opt/openstack-helm-infra/roles/build-helm-packages/defaults/main.yml
url:
  #google_helm_repo: https://storage.googleapis.com/kubernetes-helm
  google_helm_repo: http://10.xxx.xxx.xxx
 # google_helm_repo is the URL the helm tarball is fetched from; change it to the address exposed by Apache

②vi /opt/openstack-helm-infra/roles/build-images/defaults/main.yml
 #google_kubernetes_repo: https://storage.googleapis.com/kubernetes-release/release/{{ version.kubernetes }}/bin/linux/amd64
 #google_helm_repo: https://storage.googleapis.com/kubernetes-helm
 #cni_repo: https://github.com/containernetworking/plugins/releases/download/{{ version.cni }}
 # add the following lines; the address is the same as above, i.e. the address exposed by Apache
  google_kubernetes_repo: http://10.xxx.xxx.xxx
  google_helm_repo: http://10.xxx.xxx.xxx
  cni_repo: http://10.xxx.xxx.xxx

③vi /opt/openstack-helm-infra/roles/build-images/tasks/main.yaml
      args:
        chdir: "{{ kubeadm_aio_path.stdout }}/"
        executable: /bin/bash
    - name: Kubeadm-AIO image build path
      when: not proxy.http
      shell: |-
              set -e
              docker build --no-cache \
                --network host \
                --force-rm \
Add --no-cache after docker build, as shown above.

④vi /opt/openstack-helm-infra/tools/gate/devel/start.sh
# comment out the following line
#ara generate html ${LOGS_DIR}/ara

⑤vi /opt/openstack-helm-infra/tools/images/kubeadm-aio/Dockerfile
# adding -i https://pypi.tuna.tsinghua.edu.cn/simple below switches pip to the Tsinghua mirror
    pip --no-cache-dir install -i https://pypi.tuna.tsinghua.edu.cn/simple --upgrade pip==18.1 ;\
    hash -r ;\
    pip --no-cache-dir install -i https://pypi.tuna.tsinghua.edu.cn/simple --upgrade setuptools ;\
    pip --no-cache-dir install -i https://pypi.tuna.tsinghua.edu.cn/simple --upgrade \
    # NOTE(srwilkers): Pinning ansible to 2.5.5, as pip installs 2.6 by default.
    # 2.6 introduces a new command flag (init) for the docker_container module
    # that is incompatible with what we have currently. 2.5.5 ensures we match
    # what's deployed in the gates
    #pip --no-cache-dir install --upgrade pip==18.1 ;\
    #hash -r ;\
    #pip --no-cache-dir install --upgrade setuptools ;\
    #pip --no-cache-dir install --upgrade \

III. Installing Kubernetes

Unless stated otherwise, run the following steps on the master node.

1. Create the inventory file

Adjust the parameters below to match your own environment.

#!/bin/bash
set -xe
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml <<EOF
all:
  children:
    primary:
      hosts:
        node_one:
          ansible_port: 22
          # master node IP
          ansible_host: 10.xxx.xxx.xxx
          ansible_user: root
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        node_two:
          ansible_port: 22
          # node1 IP
          ansible_host: 10.xxx.xxx.xxx
          ansible_user: root
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
        node_three:
          ansible_port: 22
          # node2 IP
          ansible_host: 10.xxx.xxx.xxx
          ansible_user: root
          ansible_ssh_private_key_file: /etc/openstack-helm/deploy-key.pem
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
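Before running the playbooks, it can save time to confirm that key-based SSH works from the master to every host in the inventory; a minimal check using the deploy key (placeholder IPs as above):

for host in 10.xxx.xxx.xxx 10.xxx.xxx.xxx 10.xxx.xxx.xxx; do
  ssh -i /etc/openstack-helm/deploy-key.pem -o StrictHostKeyChecking=no root@${host} hostname
done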

2. Create the environment file

#!/bin/bash
set -xe
function net_default_iface {
 sudo ip -4 route list 0/0 | awk '{ print $5; exit }'
}
cat > /opt/openstack-helm-infra/tools/gate/devel/multinode-vars.yaml <<EOF
kubernetes_network_default_device: $(net_default_iface)
EOF

3. Deploy Kubernetes

cd /opt/openstack-helm-infra
make dev-deploy setup-host multinode
make dev-deploy k8s multinode
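When the playbooks finish, the cluster can be checked from the master node; all three nodes should be Ready and the kube-system pods Running:

kubectl get nodes -o wide
kubectl get pods --all-namespaces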

IV. Installing OpenStack

All of the following commands are assumed to be run as root from the /opt/openstack-helm directory.
*Notes:
① Do not add any apt sources beyond the defaults during deployment, otherwise package dependencies may not match.
② Timeouts and other errors are common during deployment, usually because images are pulled slowly from the Internet; re-running the deployment command a few times usually resolves them.
③ Check pod status with kubectl get pod -n <namespace> or kubectl get pod --all-namespaces,
inspect a pod in detail with kubectl describe pod <pod-name> -n <namespace>,
and follow the logs of a Running pod with kubectl logs -f <pod-name> -n openstack.

1. Set up the client on the host and build the charts

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/010-setup-client.sh

Manual steps:

#!/bin/bash
set -xe

sudo -H -E pip install "cmd2<=0.8.7"
sudo -H -E pip install python-openstackclient python-heatclient --ignore-installed

sudo -H mkdir -p /etc/openstack
sudo -H chown -R $(id -un): /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: 'password'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF

#NOTE: Build charts
make all

2. Deploy the ingress controller

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/020-ingress.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy global ingress
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
tee /tmp/ingress-kube-system.yaml << EOF
pod:
  replicas:
    error_page: 2
deployment:
  mode: cluster
  type: DaemonSet
network:
  host_namespace: true
EOF
helm upgrade --install ingress-kube-system ${OSH_INFRA_PATH}/ingress \
  --namespace=kube-system \
  --values=/tmp/ingress-kube-system.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_INGRESS_KUBE_SYSTEM}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh kube-system

#NOTE: Display info
helm status ingress-kube-system

#NOTE: Deploy namespaced ingress controllers
for NAMESPACE in openstack ceph; do
  # Allow $OSH_EXTRA_HELM_ARGS_INGRESS_ceph and $OSH_EXTRA_HELM_ARGS_INGRESS_openstack overrides
  OSH_EXTRA_HELM_ARGS_INGRESS_NAMESPACE="OSH_EXTRA_HELM_ARGS_INGRESS_${NAMESPACE}"
  #NOTE: Deploy namespace ingress
  tee /tmp/ingress-${NAMESPACE}.yaml << EOF
pod:
  replicas:
    ingress: 2
    error_page: 2
EOF
  helm upgrade --install ingress-${NAMESPACE} ${OSH_INFRA_PATH}/ingress \
    --namespace=${NAMESPACE} \
    --values=/tmp/ingress-${NAMESPACE}.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${!OSH_EXTRA_HELM_ARGS_INGRESS_NAMESPACE}

  #NOTE: Wait for deploy
  ./tools/deployment/common/wait-for-pods.sh ${NAMESPACE}

  #NOTE: Display info
  helm status ingress-${NAMESPACE}
done

3. Deploy Ceph

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/030-ceph.sh

Manual steps:
#!/bin/bash

set -xe

#NOTE: Deploy command
[ -s /tmp/ceph-fs-uuid.txt ] || uuidgen > /tmp/ceph-fs-uuid.txt
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
CEPH_FS_ID="$(cat /tmp/ceph-fs-uuid.txt)"
#NOTE(portdirect): to use RBD devices with kernels < 4.5 this should be set to 'hammer'
LOWEST_CLUSTER_KERNEL_VERSION=$(kubectl get node  -o go-template='{{range .items}}{{.status.nodeInfo.kernelVersion}}{{"\n"}}{{ end }}' | sort -V | tail -1)
if [ "$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $1 }')" -lt "4" ] || [ "$(echo ${LOWEST_CLUSTER_KERNEL_VERSION} | awk -F "." '{ print $2 }')" -lt "15" ]; then
  echo "Using hammer crush tunables"
  CRUSH_TUNABLES=hammer
else
  CRUSH_TUNABLES=null
fi
NUMBER_OF_OSDS="$(kubectl get nodes -l ceph-osd=enabled --no-headers | wc -l)"
tee /tmp/ceph.yaml << EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  storage_secrets: true
  ceph: true
  rbd_provisioner: true
  cephfs_provisioner: true
  client_secrets: false
bootstrap:
  enabled: true
conf:
  ceph:
    global:
      fsid: ${CEPH_FS_ID}
  pool:
    crush:
      tunables: ${CRUSH_TUNABLES}
    target:
      osd: ${NUMBER_OF_OSDS}
      pg_per_osd: 100
  storage:
    osd:
      - data:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/osd-one
        journal:
          type: directory
          location: /var/lib/openstack-helm/ceph/osd/journal-one

EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
for CHART in ceph-mon ceph-osd ceph-client ceph-provisioners; do
  helm upgrade --install ${CHART} ${OSH_INFRA_PATH}/${CHART} \
    --namespace=ceph \
    --values=/tmp/ceph.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_CEPH_DEPLOY}

  #NOTE: Wait for deploy
  ./tools/deployment/common/wait-for-pods.sh ceph 1200

  #NOTE: Validate deploy
  MON_POD=$(kubectl get pods \
    --namespace=ceph \
    --selector="application=ceph" \
    --selector="component=mon" \
    --no-headers | awk '{ print $1; exit }')
  kubectl exec -n ceph ${MON_POD} -- ceph -s
done
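Besides the ceph -s output printed by the loop above, a couple of extra health checks can be run against a monitor pod (the label selector matches the one used in the script):

MON_POD=$(kubectl get pods -n ceph -l application=ceph,component=mon --no-headers | awk '{ print $1; exit }')
kubectl exec -n ceph ${MON_POD} -- ceph health
kubectl exec -n ceph ${MON_POD} -- ceph osd tree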

4. Activate the openstack namespace so it can use Ceph

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/040-ceph-ns-activate.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="${CEPH_PUBLIC_NETWORK}"
tee /tmp/ceph-openstack-config.yaml <<EOF
endpoints:
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: false
  rbd_provisioner: false
  cephfs_provisioner: false
  client_secrets: true
bootstrap:
  enabled: false
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install ceph-openstack-config ${OSH_INFRA_PATH}/ceph-provisioners \
  --namespace=openstack \
  --values=/tmp/ceph-openstack-config.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CEPH_NS_ACTIVATE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status ceph-openstack-config
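To confirm that the openstack namespace can consume Ceph, check that the client secrets and the RBD StorageClass were created (in openstack-helm the class is typically named "general", but the name can differ depending on chart values):

kubectl get storageclass
kubectl get secrets -n openstack | grep -i ceph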

5. Deploy MariaDB

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/050-mariadb.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
tee /tmp/mariadb.yaml << EOF
pod:
  replicas:
    server: 3
    ingress: 3
EOF
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install mariadb ${OSH_INFRA_PATH}/mariadb \
    --namespace=openstack \
    --values=/tmp/mariadb.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_MARIADB}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status mariadb

6. Deploy RabbitMQ

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/060-rabbitmq.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install rabbitmq ${OSH_INFRA_PATH}/rabbitmq \
    --namespace=openstack \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_RABBITMQ}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status rabbitmq

7. Deploy Memcached

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/070-memcached.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
make -C ${OSH_INFRA_PATH} memcached

tee /tmp/memcached.yaml <<EOF
manifests:
  network_policy: true
network_policy:
  memcached:
    ingress:
      - from:
        - podSelector:
            matchLabels:
              application: keystone
        - podSelector:
            matchLabels:
              application: heat
        - podSelector:
            matchLabels:
              application: glance
        - podSelector:
            matchLabels:
              application: cinder
        - podSelector:
            matchLabels:
              application: congress
        - podSelector:
            matchLabels:
              application: barbican
        - podSelector:
            matchLabels:
              application: ceilometer
        - podSelector:
            matchLabels:
              application: horizon
        - podSelector:
            matchLabels:
              application: ironic
        - podSelector:
            matchLabels:
              application: magnum
        - podSelector:
            matchLabels:
              application: mistral
        - podSelector:
            matchLabels:
              application: nova
        - podSelector:
            matchLabels:
              application: neutron
        - podSelector:
            matchLabels:
              application: senlin
        ports:
        - protocol: TCP
          port: 11211
EOF
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
helm upgrade --install memcached ${OSH_INFRA_PATH}/memcached \
    --namespace=openstack \
    --values=/tmp/memcached.yaml \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_MEMCACHED}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status memcached

8. Deploy Keystone

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/080-keystone.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
helm upgrade --install keystone ./keystone \
    --namespace=openstack \
    --set pod.replicas.api=2 \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_KEYSTONE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status keystone
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
# Delete the test pod if it still exists
kubectl delete pods -l application=keystone,release_group=keystone,component=test --namespace=openstack --ignore-not-found
helm test keystone --timeout 900
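As a quick functional check beyond the chart test, a token can be requested with the admin credentials from clouds.yaml:

export OS_CLOUD=openstack_helm
openstack token issue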

9. Deploy the RADOS Gateway for object storage

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/090-ceph-radosgateway.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
CEPH_PUBLIC_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
CEPH_CLUSTER_NETWORK="$(./tools/deployment/multinode/kube-node-subnet.sh)"
tee /tmp/radosgw-openstack.yaml <<EOF
endpoints:
  identity:
    namespace: openstack
  object_store:
    namespace: openstack
  ceph_mon:
    namespace: ceph
network:
  public: ${CEPH_PUBLIC_NETWORK}
  cluster: ${CEPH_CLUSTER_NETWORK}
deployment:
  ceph: true
  rgw_keystone_user_and_endpoints: true
bootstrap:
  enabled: false
conf:
  rgw_ks:
    enabled: true
network_policy:
  ceph:
    ingress:
      - from:
        - podSelector:
            matchLabels:
              application: glance
        - podSelector:
            matchLabels:
              application: cinder
        - podSelector:
            matchLabels:
              application: libvirt
        - podSelector:
            matchLabels:
              application: nova
        - podSelector:
            matchLabels:
              application: ceph
        - podSelector:
            matchLabels:
              application: ingress
        ports:
        - protocol: TCP
          port: 8088
manifests:
  network_policy: true
EOF

: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install radosgw-openstack ${OSH_INFRA_PATH}/ceph-rgw \
  --namespace=openstack \
  --set manifests.network_policy=true \
  --values=/tmp/radosgw-openstack.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_HEAT}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status radosgw-openstack

#NOTE: Run Tests
export OS_CLOUD=openstack_helm
helm test radosgw-openstack

10. Deploy Glance

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/100-glance.sh

Manual steps:

#!/bin/bash
set -xe

#NOTE: Deploy command
: ${OSH_OPENSTACK_RELEASE:="newton"}
#NOTE(portdirect), this could be: radosgw, rbd, swift or pvc
: ${GLANCE_BACKEND:="swift"}
tee /tmp/glance.yaml <<EOF
storage: ${GLANCE_BACKEND}
pod:
  replicas:
    api: 2
    registry: 2
EOF
if [ "x${OSH_OPENSTACK_RELEASE}" == "xnewton" ]; then
# NOTE(portdirect): glance APIv1 is required for heat in Newton
  tee -a /tmp/glance.yaml <<EOF
conf:
  glance:
    DEFAULT:
      enable_v1_api: true
      enable_v2_registry: true
manifests:
  deployment_registry: true
  ingress_registry: true
  pdb_registry: true
  service_ingress_registry: true
EOF
fi
helm upgrade --install glance ./glance \
  --namespace=openstack \
  --values=/tmp/glance.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_GLANCE}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status glance
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack image list
openstack image show 'Cirros 0.3.5 64-bit'
# Delete the test pod if it still exists
kubectl delete pods -l application=glance,release_group=glance,component=test --namespace=openstack --ignore-not-found
helm test glance --timeout 900

11. Deploy Cinder

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/110-cinder.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy command
tee /tmp/cinder.yaml << EOF
pod:
  replicas:
    api: 2
    volume: 1
    scheduler: 1
    backup: 1
conf:
  cinder:
    DEFAULT:
      backup_driver: cinder.backup.drivers.swift
EOF
helm upgrade --install cinder ./cinder \
  --namespace=openstack \
  --values=/tmp/cinder.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_CINDER}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack volume type list
# Delete the test pod if it still exists
kubectl delete pods -l application=cinder,release_group=cinder,component=test --namespace=openstack --ignore-not-found
helm test cinder --timeout 900

12. Deploy Open vSwitch

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/120-openvswitch.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy command
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install openvswitch ${OSH_INFRA_PATH}/openvswitch \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_OPENVSWITCH}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
helm status openvswitch

13. Deploy Libvirt

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/130-libvirt.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy libvirt
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
helm upgrade --install libvirt ${OSH_INFRA_PATH}/libvirt \
  --namespace=openstack \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_LIBVIRT}

#NOTE(portdirect): We don't wait for libvirt pods to come up, as they depend
# on the neutron agents being up.

#NOTE: Validate Deployment info
helm status libvirt

14. Deploy the Compute Kit (Nova and Neutron)

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/140-compute-kit.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy nova
tee /tmp/nova.yaml << EOF
labels:
  api_metadata:
    node_selector_key: openstack-helm-node-class
    node_selector_value: primary
pod:
  replicas:
    api_metadata: 1
    placement: 2
    osapi: 2
    conductor: 2
    consoleauth: 2
    scheduler: 1
    novncproxy: 1
EOF

function kvm_check () {
  POD_NAME="tmp-$(cat /dev/urandom | env LC_CTYPE=C tr -dc a-z | head -c 5; echo)"
  cat <<EOF | kubectl apply -f - 1>&2;
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
spec:
  hostPID: true
  restartPolicy: Never
  containers:
  - name: util
    securityContext:
      privileged: true
    image: docker.io/busybox:latest
    command:
      - sh
      - -c
      - |
        nsenter -t1 -m -u -n -i -- sh -c "kvm-ok >/dev/null && echo yes || echo no"
EOF
  end=$(($(date +%s) + 900))
  until kubectl get pod/${POD_NAME} -o go-template='{{.status.phase}}' | grep -q Succeeded; do
    now=$(date +%s)
    [ $now -gt $end ] && echo containers failed to start. && \
        kubectl get pod/${POD_NAME} -o wide && exit 1
  done
  kubectl logs pod/${POD_NAME}
  kubectl delete pod/${POD_NAME} 1>&2;
}

if [ "x$(kvm_check)" == "xyes" ]; then
  echo 'OSH is not being deployed in virtualized environment'
  helm upgrade --install nova ./nova \
      --namespace=openstack \
      --values=/tmp/nova.yaml \
      ${OSH_EXTRA_HELM_ARGS} \
      ${OSH_EXTRA_HELM_ARGS_NOVA}
else
  echo 'OSH is being deployed in virtualized environment, using qemu for nova'
  helm upgrade --install nova ./nova \
      --namespace=openstack \
      --values=/tmp/nova.yaml \
      --set conf.nova.libvirt.virt_type=qemu \
      --set conf.nova.libvirt.cpu_mode=none \
      ${OSH_EXTRA_HELM_ARGS} \
      ${OSH_EXTRA_HELM_ARGS_NOVA}
fi

#NOTE: Deploy neutron, for simplicity we will assume the default route device
# should be used for tunnels
function network_tunnel_dev () {
  POD_NAME="tmp-$(cat /dev/urandom | env LC_CTYPE=C tr -dc a-z | head -c 5; echo)"
  cat <<EOF | kubectl apply -f - 1>&2;
apiVersion: v1
kind: Pod
metadata:
  name: ${POD_NAME}
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: util
    image: docker.io/busybox:latest
    command:
    - 'ip'
    - '-4'
    - 'route'
    - 'list'
    - '0/0'
EOF
  end=$(($(date +%s) + 900))
  until kubectl get pod/${POD_NAME} -o go-template='{{.status.phase}}' | grep -q Succeeded; do
    now=$(date +%s)
    [ $now -gt $end ] && echo containers failed to start. && \
        kubectl get pod/${POD_NAME} -o wide && exit 1
  done
  kubectl logs pod/${POD_NAME} | awk '{ print $5; exit }'
  kubectl delete pod/${POD_NAME} 1>&2;
}

NETWORK_TUNNEL_DEV="$(network_tunnel_dev)"
tee /tmp/neutron.yaml << EOF
network:
  interface:
    tunnel: "${NETWORK_TUNNEL_DEV}"
labels:
  agent:
    dhcp:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    l3:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
    metadata:
      node_selector_key: openstack-helm-node-class
      node_selector_value: primary
pod:
  replicas:
    server: 2
conf:
  neutron:
    DEFAULT:
      l3_ha: False
      max_l3_agents_per_router: 1
      l3_ha_network_type: vxlan
      dhcp_agents_per_network: 1
  plugins:
    ml2_conf:
      ml2_type_flat:
        flat_networks: public
    openvswitch_agent:
      agent:
        tunnel_types: vxlan
      ovs:
        bridge_mappings: public:br-ex
EOF

if [ -n "$OSH_OPENSTACK_RELEASE" ]; then
  if [ -e "./neutron/values_overrides/${OSH_OPENSTACK_RELEASE}.yaml" ] ; then
    echo "Adding release overrides for ${OSH_OPENSTACK_RELEASE}"
    OSH_RELEASE_OVERRIDES_NEUTRON="--values=./neutron/values_overrides/${OSH_OPENSTACK_RELEASE}.yaml"
  fi
fi

helm upgrade --install neutron ./neutron \
    --namespace=openstack \
    --values=/tmp/neutron.yaml \
    ${OSH_RELEASE_OVERRIDES_NEUTRON} \
    ${OSH_EXTRA_HELM_ARGS} \
    ${OSH_EXTRA_HELM_ARGS_NEUTRON}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack compute service list
openstack network agent list
# Delete the test pods if they still exist
kubectl delete pods -l application=nova,release_group=nova,component=test --namespace=openstack --ignore-not-found
kubectl delete pods -l application=neutron,release_group=neutron,component=test --namespace=openstack --ignore-not-found
helm test nova --timeout 900
helm test neutron --timeout 900
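Once Nova and Neutron pass their tests, it is also worth confirming that the compute nodes have registered hypervisors:

export OS_CLOUD=openstack_helm
openstack hypervisor list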

15. Deploy Heat

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/150-heat.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy command
tee /tmp/heat.yaml << EOF
pod:
  replicas:
    api: 2
    cfn: 2
    cloudwatch: 2
    engine: 2
EOF
helm upgrade --install heat ./heat \
  --namespace=openstack \
  --values=/tmp/heat.yaml \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_HEAT}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack orchestration service list
# Delete the test pod if it still exists
kubectl delete pods -l application=heat,release_group=heat,component=test --namespace=openstack --ignore-not-found
helm test heat --timeout 900

16. Deploy Barbican

You can run the provided script directly, or carry out the steps manually.

./tools/deployment/multinode/160-barbican.sh

Manual steps:

#!/bin/bash

#NOTE: Deploy command
helm upgrade --install barbican ./barbican \
  --namespace=openstack \
  --set pod.replicas.api=2 \
  ${OSH_EXTRA_HELM_ARGS} \
  ${OSH_EXTRA_HELM_ARGS_BARBICAN}

#NOTE: Wait for deploy
./tools/deployment/common/wait-for-pods.sh openstack

#NOTE: Validate Deployment info
export OS_CLOUD=openstack_helm
openstack service list
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
# Delete the test pod if it still exists
kubectl delete pods -l application=barbican,release_group=barbican,component=test --namespace=openstack --ignore-not-found
helm test barbican
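With the core services in place, a minimal end-to-end smoke test is to boot an instance. The following is only a sketch: the flavor, network name and subnet range are arbitrary examples (not created by the charts), the image name matches the Cirros image listed in the Glance step, and whether the boot succeeds also depends on your Neutron network configuration:

export OS_CLOUD=openstack_helm
openstack flavor create --id auto --ram 512 --disk 1 --vcpus 1 m1.tiny
openstack network create test-net
openstack subnet create --network test-net --subnet-range 172.24.4.0/24 test-subnet
openstack server create --image 'Cirros 0.3.5 64-bit' --flavor m1.tiny --network test-net test-vm
openstack server list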

V. Exposing the services

1. Modify horizon's values.yaml

vi /opt/openstack-helm/horizon/values.yaml
endpoints:
  ...
  identity:
    ...
    host_fqdn_override:
      default: null
      ## add the following line
      public: keystone.os.foo.org
    ...
  dashboard:
    name: horizon
    hosts:
      default: horizon-int
      public: horizon
    host_fqdn_override:
      default: null
      ## add the following line
      public: horizon.os.foo.org

2. Deploy Horizon

export FQDN=os.foo.org

helm install --name=horizon ./horizon --namespace=openstack \
  --set network.node_port.enabled=true \
  --set endpoints.dashboard.host_fqdn_override.public=horizon.$FQDN \
  --set endpoints.identity.host_fqdn_override.public=keystone.$FQDN 
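Because network.node_port.enabled=true exposes the dashboard through a NodePort, the assigned port and the pod state can be checked with:

kubectl get svc -n openstack | grep horizon
kubectl get pods -n openstack -l application=horizon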

3. Configure hosts entries on the client

*Note: any machine that needs to access Horizon must add an entry to its hosts file.
On Windows, edit:
C:\Windows\System32\drivers\etc\hosts

# master node IP

10.xxx.xxx.xxx horizon.os.foo.org
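On a Linux client the equivalent is a single line appended to /etc/hosts (the placeholder IP is the master node's address):

echo "10.xxx.xxx.xxx horizon.os.foo.org" >> /etc/hosts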

VI. Finishing up

Open http://horizon.os.foo.org in a browser and log in with admin / password.

References:

https://docs.openstack.org/openstack-helm/latest/install/multinode.html#
