Tungsten Fabric SDN within AWS EKS

Contrail within AWS EKS

AWS EKS natively uses the Amazon VPC CNI (amazon-vpc-cni-k8s), which provides native Amazon VPC networking: it automatically creates elastic network interfaces (ENIs) and attaches them to Amazon EC2 nodes, and it assigns each Pod and Service a dedicated IPv4/IPv6 address from the VPC.
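
Before replacing it, it can be useful to confirm that the stock VPC CNI is currently running as the aws-node daemonset (standard kubectl commands; the deployment output later in this walkthrough shows this daemonset being deleted):

$ kubectl -n kube-system get daemonset aws-node
$ kubectl -n kube-system get pods -o wide | grep aws-node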

Contrail within AWS EKS installs Contrail Networking as the CNI in an AWS EKS environment.

Software versions:

  • AWS CLI 1.16.156 or later
  • EKS 1.16 or later
  • Kubernetes 1.18 or later
  • Contrail 2008 or later

Official documentation: https://www.juniper.net/documentation/en_US/contrail20/topics/task/installation/how-to-install-contrail-aws-eks.html
Video tutorials: https://www.youtube.com/playlist?list=PLBO-FXA5nIK_Xi-FbfxLFDCUx4EvIy6_d

AWS EKS Quick Start + Contrail SDN CNI

  • AWS EKS Quick Start (figure)
  • AWS EKS Quick Start + Contrail SDN CNI (figure)

The AWS EKS Quick Start + Contrail SDN CNI deployment includes the following:

  1. A highly available architecture that spans 3 Availability Zones.
  2. A VPC configured with both public and private subnets.
  3. NAT gateways hosted in the public subnets, allowing instances in the private subnets to reach the Internet.
  4. A Linux bastion host in an Auto Scaling group in a public subnet, allowing SSH access to the EC2 instances in the private subnets. The bastion host is also configured with the kubectl command line.
  5. An Amazon EKS cluster that provides the Kubernetes control plane.
  6. A group of Kubernetes worker nodes hosted in the private subnets.
  7. The Contrail Networking SDN controller and vRouter deployed on the Kubernetes worker nodes.
  8. A BGP control plane and an MPLSoUDP overlay data plane provided by Contrail SDN.

Install Contrail Networking as the CNI for EKS

  1. Install AWS CLI v2 (documentation: https://docs.aws.amazon.com/zh_cn/cli/latest/userguide/cli-chap-welcome.html)
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install

$ aws --version
aws-cli/2.7.23 Python/3.9.11 Linux/3.10.0-1160.66.1.el7.x86_64 exe/x86_64.centos.7 prompt/off

$ aws configure
AWS Access Key ID [None]: XX
AWS Secret Access Key [None]: XX
Default region name [None]: us-east-1
Default output format [None]: json

us-east-1 is used as the default region here. Note that, depending on your account, the ap-east-1 region may not be available.
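
To confirm that the configured credentials actually work before proceeding, a quick check:

$ aws sts get-caller-identity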

  2. Install kubectl v1.21 (documentation: https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/install-kubectl.html)
# Check whether kubectl is already installed; if it is, remove it first.
$ kubectl version | grep Client | cut -d : -f 5

# Install the specified version of kubectl.
$ curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.21.2/2021-07-05/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
$ kubectl version --short --client
Client Version: v1.21.2-13+d2965f0db10712
  3. Create an EC2 key pair (documentation: https://docs.aws.amazon.com/zh_cn/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
$ aws ec2 create-key-pair \
    --key-name ContrailKey \
    --key-type rsa \
    --key-format pem \
    --query "KeyMaterial" \
    --output text > ./ContrailKey.pem

$ chmod 400 ContrailKey.pem
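
The new key pair can be verified with a standard describe call:

$ aws ec2 describe-key-pairs --key-names ContrailKey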


  4. Download the EKS deployer:
wget https://s3-eu-central-1.amazonaws.com/contrail-one-click-deployers/EKS-Scripts.zip -O EKS-Scripts.zip
unzip EKS-Scripts.zip
cd contrail-as-the-cni-for-aws-eks/
  5. Edit the variables in the variables.sh file (a sanity check follows the listing below):
    1. CLOUDFORMATIONREGION: the AWS region for CloudFormation, here us-east-1 to match the EC2 region. CloudFormation uses the Quick Start tools to deploy EKS into this region.
    2. S3QUICKSTARTREGION: the AWS region for the S3 bucket, here us-east-1 to match the EC2 region.
    3. JUNIPERREPONAME: the username allowed to access the Contrail image repository.
    4. JUNIPERREPOPASS: the password for the Contrail image repository.
    5. RELEASE: the Contrail release, i.e. the Contrail container image tag.
    6. EC2KEYNAME: the name of an existing key pair in the AWS region.
    7. BASTIONSSHKEYPATH: the local path to the AWS EC2 SSH private key.
$ vi variables.sh

###############################################################################
#complete the below variables for your setup and run the script
###############################################################################
#this is the aws region you are connected to and want to deploy EKS and Contrail into
export CLOUDFORMATIONREGION="us-east-1"
#this is the region for my quickstart, only change if you plan to deploy your own quickstart
export S3QUICKSTARTREGION="us-east-1"
export LOGLEVEL="SYS_DEBUG"
#example Juniper docker login, change to yours
export JUNIPERREPONAME="XX"
export JUNIPERREPOPASS="XX"
export RELEASE="2008.121"
export K8SAPIPORT="443"
export PODSN="10.0.1.0/24"
export SERVICESN="10.0.2.0/24"
export FABRICSN="10.0.3.0/24"
export ASN="64513"
export MYEMAIL="example@mail.com"
#example key, change these two to your existing ec2 ssh key name and private key file for the region
#also don't forget to chmod 0400 [your private key]
export EC2KEYNAME="ContrailKey"
export BASTIONSSHKEYPATH="/root/aws/ContrailKey.pem"
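
Before running the deployer, it may help to source the file and eyeball the values, making sure none of the "XX" placeholders were left in place:

$ source ./variables.sh
$ env | grep -E "JUNIPERREPONAME|RELEASE|EC2KEYNAME|BASTIONSSHKEYPATH"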
  6. Deploy the cloudformation-resources.sh file:
$ vi cloudformation-resources.sh
...
if [ $(aws iam list-roles --query "Roles[].RoleName" | grep CloudFormation-Kubernetes-VPC | sed 's/"//g' | sed 's/,//g' | xargs) = "CloudFormation-Kubernetes-VPC" ]; then
    #export ADDROLE="false"
    export ADDROLE="Disabled"
else
    #export ADDROLE="true"
    export ADDROLE="Enabled"
fi
...


$ ./cloudformation-resources.sh
./cloudformation-resources.sh: line 2: [: =: unary operator expected
{
    "StackId": "arn:aws:cloudformation:us-east-1:805369193666:stack/awsqs-eks-cluster-resource/8aedfea0-1d22-11ed-8904-0a73b9f64f57"
}
{
    "StackId": "arn:aws:cloudformation:us-east-1:805369193666:stack/awsqs-kubernetes-helm-resource/5e6503e0-1d24-11ed-90da-12f2079f0ffd"
}
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
waiting for cloudformation stack awsqs-kubernetes-helm-resource to complete
All Done
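
The "[: =: unary operator expected" message above comes from the unquoted command substitution in the role check: when the CloudFormation-Kubernetes-VPC role does not exist yet, the substitution expands to nothing and the test collapses to [ = "CloudFormation-Kubernetes-VPC" ]. A hedged fix (my adjustment, not part of the shipped script) is to quote the substitution:

# Quoting the command substitution avoids the empty-operand error.
if [ "$(aws iam list-roles --query "Roles[].RoleName" | grep CloudFormation-Kubernetes-VPC | sed 's/"//g' | sed 's/,//g' | xargs)" = "CloudFormation-Kubernetes-VPC" ]; then
    export ADDROLE="Disabled"
else
    export ADDROLE="Enabled"
fi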


  7. Modify the YAML files under contrail-as-the-cni-for-aws-eks/quickstart-amazon-eks/templates so that they support the newer Kubernetes v1.21. A sketch for locating the relevant version strings follows.
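
A hypothetical way to find the version strings to bump (the exact parameter names depend on the template revision, so treat this as a starting point rather than a definitive recipe):

$ grep -rn "KubernetesVersion" quickstart-amazon-eks/templates/ | head
$ grep -rn "1\.1[4-9]" quickstart-amazon-eks/templates/*.yaml | head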

  8. Create the S3 bucket for the Amazon-EKS-Contrail-CNI CloudFormation templates:

$ vi mk-s3-bucket.sh
...
#S3REGION="eu-west-1"
S3REGION="us-east-1"

$ ./mk-s3-bucket.sh
************************************************************************************
This script is for the admins, you do not need to run it.
It creates a public s3 bucket if needed and pushed the quickstart git repo up to it
************************************************************************************
ok lets get started...
Are you an admin on the SRE aws account and want to push up the latest quickstart to S3? [y/n] y
ok then lets proceed...
Creating the s3 bucket
make_bucket: aws-quickstart-XX
...
...
********************************************

Your quickstart bucket name will be
********************************************
https://s3-us-east-1.amazonaws.com/aws-quickstart-XX
********************************************************************************************************************
**I recommend going to the console, highlighting your quickstart folder directory and clicking action->make public**
**otherwise you may see permissions errors when running from other accounts                                       **
********************************************************************************************************************
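
The console step recommended above can also be done from the CLI; a hedged equivalent is to re-upload the Quick Start folder with a public-read object ACL (this assumes the bucket permits object ACLs):

$ aws s3 cp quickstart-amazon-eks/ s3://aws-quickstart-XX/quickstart-amazon-eks/ \
    --recursive --acl public-read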


  9. From the AWS CLI, deploy the EKS quickstart stack:
$ ll quickstart-amazon-eks/
$ ll *.patch
-rw-r--r-- 1 root root  2163 106 2020 patch1-amazon-eks-iam.patch
-rw-r--r-- 1 root root  1610 106 2020 patch2-amazon-eks-master.patch
-rw-r--r-- 1 root root 12204 106 2020 patch3-amazon-eks-nodegroup.patch
-rw-r--r-- 1 root root   381 106 2020 patch4-amazon-eks.patch
-rw-r--r-- 1 root root  2427 106 2020 patch5-functions.patch


$ vi eks-ubuntu.sh

source ./variables.sh
aws cloudformation create-stack \
  --capabilities CAPABILITY_IAM \
  --stack-name Amazon-EKS-Contrail-CNI \
  --disable-rollback \
  --template-url https://aws-quickstart-XX.s3.amazonaws.com/quickstart-amazon-eks/templates/amazon-eks-master.template.yaml \
  --parameters \
ParameterKey=AvailabilityZones,ParameterValue="${CLOUDFORMATIONREGION}a\,${CLOUDFORMATIONREGION}b\,${CLOUDFORMATIONREGION}c" \
ParameterKey=KeyPairName,ParameterValue=$EC2KEYNAME \
ParameterKey=RemoteAccessCIDR,ParameterValue="0.0.0.0/0" \
ParameterKey=NodeInstanceType,ParameterValue="m5.2xlarge" \
ParameterKey=NodeVolumeSize,ParameterValue="100" \
ParameterKey=NodeAMIOS,ParameterValue="UBUNTU-EKS-HVM" \
ParameterKey=QSS3BucketRegion,ParameterValue=${S3QUICKSTARTREGION} \
ParameterKey=QSS3BucketName,ParameterValue="aws-quickstart-XX" \
ParameterKey=QSS3KeyPrefix,ParameterValue="quickstart-amazon-eks/" \
ParameterKey=VPCCIDR,ParameterValue="100.72.0.0/16" \
ParameterKey=PrivateSubnet1CIDR,ParameterValue="100.72.0.0/25" \
ParameterKey=PrivateSubnet2CIDR,ParameterValue="100.72.0.128/25" \
ParameterKey=PrivateSubnet3CIDR,ParameterValue="100.72.1.0/25" \
ParameterKey=PublicSubnet1CIDR,ParameterValue="100.72.1.128/25" \
ParameterKey=PublicSubnet2CIDR,ParameterValue="100.72.2.0/25" \
ParameterKey=PublicSubnet3CIDR,ParameterValue="100.72.2.128/25" \
ParameterKey=NumberOfNodes,ParameterValue="5" \
ParameterKey=MaxNumberOfNodes,ParameterValue="5" \
ParameterKey=EKSPrivateAccessEndpoint,ParameterValue="Enabled" \
ParameterKey=EKSPublicAccessEndpoint,ParameterValue="Enabled"
while [[ $(aws cloudformation describe-stacks --stack-name Amazon-EKS-Contrail-CNI --query "Stacks[].StackStatus" --output text) != "CREATE_COMPLETE" ]];
do
     echo "waiting for cloudformation stack Amazon-EKS-Contrail-CNI to complete. This can take up to 45 minutes"
     sleep 60
done
echo "All Done"


$ ./eks-ubuntu.sh
{
    "StackId": "arn:aws:cloudformation:us-east-1:805369193666:stack/Amazon-EKS-Contrail-CNI/3e6b2c80-1d25-11ed-9c50-0ae926948d21"
}
waiting for cloudformation stack Amazon-EKS-Contrail-CNI to complete. This can take up to 45 minutes

NOTE: The quickstart-amazon-eks repository (https://github.com/aws-quickstart/quickstart-amazon-eks) shipped inside contrail-as-the-cni-for-aws-eks has been modified for Contrail; the 5 patch files listed above carry these changes.

quickstart-amazon-eks provides a large number of CloudFormation EKS template files; the one used here is amazon-eks-master.template.yaml.
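
While the stack is building (up to 45 minutes), its progress and any failures can be inspected from the CLI with standard CloudFormation commands:

$ aws cloudformation describe-stacks --stack-name Amazon-EKS-Contrail-CNI \
    --query "Stacks[].StackStatus" --output text
# On failure, the most recent events usually name the offending resource.
$ aws cloudformation describe-stack-events --stack-name Amazon-EKS-Contrail-CNI \
    --query "StackEvents[:5].[ResourceStatus,LogicalResourceId,ResourceStatusReason]" \
    --output table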

  • CloudFormation Stack (figure)
  • EKS Cluster (Control Plane) (figure)
  • EC2 Worker Nodes (Data Plane) (figure)
  • VPC Network (figure)
  • VPC Subnet (figure)
  • VPC Route (figure)
  • VPC NAT Gateway (figure)
  • VPC Internet Gateway (figure)
  • VPC Network for EKS cluster (figure)
  • VPC Network for Worker Nodes (figure)
  • Worker Node ENIs (figure)

  10. Configure kubectl:
$ aws eks get-token --cluster-name EKS-TPTKAJ8Z
$ aws eks update-kubeconfig --region us-east-1 --name EKS-TPTKAJ8Z

# Test the connection.
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   43m
  11. From the Kubernetes CLI, verify your cluster parameters:
$ kubectl get nodes
NAME                           STATUS   ROLES    AGE   VERSION
ip-100-72-0-121.ec2.internal   Ready    <none>   34m   v1.14.8
ip-100-72-0-145.ec2.internal   Ready    <none>   35m   v1.14.8
ip-100-72-1-73.ec2.internal    Ready    <none>   34m   v1.14.8

$ kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP             NODE                           NOMINATED NODE   READINESS GATES
kube-system   aws-node-fttlp             1/1     Running   0          35m   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   aws-node-h5qgk             1/1     Running   0          35m   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   aws-node-xxspc             1/1     Running   0          35m   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   coredns-66cb55d4f4-jw754   1/1     Running   0          44m   100.72.0.248   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   coredns-66cb55d4f4-qxb92   1/1     Running   0          44m   100.72.0.52    ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   kube-proxy-7gb5m           1/1     Running   0          35m   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   kube-proxy-88shx           1/1     Running   0          35m   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   kube-proxy-kfb4w           1/1     Running   0          35m   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
  12. Upgrade the worker nodes to the latest EKS version:
$ kubectl apply -f upgrade-nodes.yaml

$ kubectl get nodes -A
NAME                           STATUS   ROLES    AGE   VERSION
ip-100-72-0-121.ec2.internal   Ready    <none>   46m   v1.16.15
ip-100-72-0-145.ec2.internal   Ready    <none>   46m   v1.16.15
ip-100-72-1-73.ec2.internal    Ready    <none>   46m   v1.16.15
  13. After confirming that the EKS version is updated on all nodes, delete the upgrade pods:
$ kubectl delete -f upgrade-nodes.yaml
  14. Apply the OS fixes for the EC2 worker nodes for Contrail Networking:
$ kubectl apply -f cni-patches.yaml
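
Before moving on, it is worth confirming that the patch daemonset rolled out on every node (the cni-patches name appears in the deploy output below):

$ kubectl -n kube-system get daemonset cni-patches
$ kubectl -n kube-system get pods -o wide | grep cni-patches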
  15. Deploy Contrail Networking as the CNI for EKS:
$ ./deploy-me.sh
The pod subnet assigned in eks is 10.0.1.0/24
The service subnet assigned in eks is 10.0.2.0/24
The fabric subnet assigned in eks is 10.0.3.0/24
the EKS cluster is EKS-TPTKAJ8Z
The EKS API is 7F1DBBFB9B553DCA211DBCF9FCCBD2CA.gr7.us-east-1.eks.amazonaws.com
EKS node pool, node 1 private ip 100.72.0.121
EKS node pool, node 2 private ip 100.72.0.145
EKS node pool, node 3 private ip 100.72.1.73
The Bastion public IP is
The contrail cluster ASN is 64513
building your cni configuration as file contrail-eks-out.yaml
replacing the AWS CNI with the Contrail SDN CNI
daemonset.apps "aws-node" deleted
node/ip-100-72-0-121.ec2.internal labeled
node/ip-100-72-0-145.ec2.internal labeled
node/ip-100-72-1-73.ec2.internal labeled
secret/contrail-registry created
configmap/cni-patches-config unchanged
daemonset.apps/cni-patches unchanged
configmap/env created
configmap/defaults-env created
configmap/configzookeeperenv created
configmap/nodemgr-config created
configmap/contrail-analyticsdb-config created
configmap/contrail-configdb-config created
configmap/rabbitmq-config created
configmap/kube-manager-config created
daemonset.apps/config-zookeeper created
daemonset.apps/contrail-analyticsdb created
daemonset.apps/contrail-configdb created
daemonset.apps/contrail-analytics created
daemonset.apps/contrail-analytics-snmp created
daemonset.apps/contrail-analytics-alarm created
daemonset.apps/contrail-controller-control created
daemonset.apps/contrail-controller-config created
daemonset.apps/contrail-controller-webui created
daemonset.apps/redis created
daemonset.apps/rabbitmq created
daemonset.apps/contrail-kube-manager created
daemonset.apps/contrail-agent created
clusterrole.rbac.authorization.k8s.io/contrail-kube-manager created
serviceaccount/contrail-kube-manager created
clusterrolebinding.rbac.authorization.k8s.io/contrail-kube-manager created
secret/contrail-kube-manager-token created
checking pods are up
waiting for pods to show up
...


$ kubectl get pods -A -o wide
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE     IP             NODE                           NOMINATED NODE   READINESS GATES
kube-system   cni-patches-fgxwl                   1/1     Running   0          19m     100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   cni-patches-fjdl4                   1/1     Running   0          19m     100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   cni-patches-s5vl8                   1/1     Running   0          19m     100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   config-zookeeper-gkrcx              1/1     Running   0          7m45s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   config-zookeeper-gs2sj              1/1     Running   0          7m46s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   config-zookeeper-xm5wx              1/1     Running   0          7m46s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-agent-ctclp                3/3     Running   2          6m33s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-agent-hn5qq                3/3     Running   2          6m33s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-agent-vk5v4                3/3     Running   2          6m32s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-analytics-alarm-fx4s6      4/4     Running   2          5m29s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-analytics-alarm-rj6mk      4/4     Running   1          5m29s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-analytics-alarm-sxcm2      4/4     Running   1          5m29s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-analytics-hvnwd            4/4     Running   2          5m45s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-analytics-nfxjq            4/4     Running   2          5m48s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-analytics-rn98q            4/4     Running   2          5m47s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-analytics-snmp-hhs8d       4/4     Running   2          5m9s    100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-analytics-snmp-x2z8l       4/4     Running   2          5m13s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-analytics-snmp-xdcdw       4/4     Running   2          5m13s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-analyticsdb-5ztrc          4/4     Running   1          4m59s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-analyticsdb-f8nw8          4/4     Running   1          4m58s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-analyticsdb-ngr68          4/4     Running   1          4m59s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-configdb-49qtm             3/3     Running   1          4m38s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-configdb-bgkxk             3/3     Running   1          4m42s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-configdb-jjqzz             3/3     Running   1          4m34s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-controller-config-65tnk    6/6     Running   1          4m30s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-controller-config-9gf52    6/6     Running   1          4m23s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-controller-config-jb7zf    6/6     Running   1          4m30s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-controller-control-92csc   5/5     Running   0          4m19s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-controller-control-bsh6n   5/5     Running   0          4m17s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-controller-control-qbr6x   5/5     Running   0          4m18s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-controller-webui-9jzhv     2/2     Running   4          4m5s    100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-controller-webui-ldhww     2/2     Running   4          4m6s    100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-controller-webui-nd2zb     2/2     Running   4          4m6s    100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   contrail-kube-manager-d22v7         1/1     Running   0          3m52s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   contrail-kube-manager-fcm9t         1/1     Running   0          3m53s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   contrail-kube-manager-tcwj7         1/1     Running   0          3m52s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   coredns-66cb55d4f4-jw754            1/1     Running   1          76m     100.72.0.248   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   coredns-66cb55d4f4-qxb92            1/1     Running   1          76m     100.72.0.52    ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   kube-proxy-7gb5m                    1/1     Running   1          67m     100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   kube-proxy-88shx                    1/1     Running   1          67m     100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   kube-proxy-kfb4w                    1/1     Running   1          67m     100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   rabbitmq-2x5k8                      1/1     Running   0          2m42s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   rabbitmq-9kl4m                      1/1     Running   0          2m42s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   rabbitmq-knlrm                      1/1     Running   0          2m41s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
kube-system   redis-d88zw                         1/1     Running   0          2m29s   100.72.0.121   ip-100-72-0-121.ec2.internal   <none>           <none>
kube-system   redis-x9v7c                         1/1     Running   0          2m24s   100.72.1.73    ip-100-72-1-73.ec2.internal    <none>           <none>
kube-system   redis-ztz45                         1/1     Running   0          2m30s   100.72.0.145   ip-100-72-0-145.ec2.internal   <none>           <none>
  16. Deploy the bastion setup to provide SSH access to the worker nodes:
$ ./setup-bastion.sh

$ kubectl get nodes -A -owide
NAME                           STATUS   ROLES   AGE   VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-100-72-0-121.ec2.internal   Ready    infra   89m   v1.16.15   100.72.0.121   <none>        Ubuntu 18.04.3 LTS   4.15.0-1054-aws   docker://17.3.2
ip-100-72-0-145.ec2.internal   Ready    infra   89m   v1.16.15   100.72.0.145   <none>        Ubuntu 18.04.3 LTS   4.15.0-1054-aws   docker://17.3.2
ip-100-72-1-73.ec2.internal    Ready    infra   89m   v1.16.15   100.72.1.73    <none>        Ubuntu 18.04.3 LTS   4.15.0-1054-aws   docker://17.3.2

$ ssh -i ContrailKey.pem ec2-user@{EKSBastion_Public_IPaddress}

###############################################################################
#     ___        ______     ___        _      _      ____  _             _    #
#    / \ \      / / ___|   / _ \ _   _(_) ___| | __ / ___|| |_ __ _ _ __| |_  #
#   / _ \ \ /\ / /\___ \  | | | | | | | |/ __| |/ / \___ \| __/ _` | '__| __| #
#  / ___ \ V  V /  ___) | | |_| | |_| | | (__|   <   ___) | || (_| | |  | |_  #
# /_/   \_\_/\_/  |____/   \__\_\\__,_|_|\___|_|\_\ |____/ \__\__,_|_|   \__| #
#-----------------------------------------------------------------------------#
#                      Amazon EKS Quick Start bastion host                    #
#    https://docs.aws.amazon.com/quickstart/latest/amazon-eks-architecture/   #
###############################################################################

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
No packages needed for security; 2 packages available
Run "sudo yum update" to apply all updates.

# From the bastion, SSH to the K8s worker nodes.
$ ssh ubuntu@100.72.0.121
$ ssh ubuntu@100.72.0.145
$ ssh ubuntu@100.72.1.73
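
For reference, the bastion public IP used in the ssh command above can also be retrieved from the CLI; a hedged sketch (the Name tag filter is an assumption about how the Quick Start tags the bastion instance):

$ aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=*Bastion*" "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].PublicIpAddress" --output text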
  17. Run the Contrail setup file to provide a base Contrail Networking configuration:
$ ./setup-contrail.sh

checking pods are up
setting variables
node1 name ip-100-72-0-121.ec2.internal
node2 name ip-100-72-0-145.ec2.internal
node3 name ip-100-72-1-73.ec2.internal
node1 ip 100.72.0.121
set global config
Updated.{"global-vrouter-config": {"href": "http://100.72.0.121:8082/global-vrouter-config/d5cca8bb-e594-4b0c-8a0c-874fa970ec6e", "uuid": "d5cca8bb-e594-4b0c-8a0c-874fa970ec6e"}}
GlobalVrouterConfig Exists Already!
Updated.{"global-vrouter-config": {"href": "http://100.72.0.121:8082/global-vrouter-config/d5cca8bb-e594-4b0c-8a0c-874fa970ec6e", "uuid": "d5cca8bb-e594-4b0c-8a0c-874fa970ec6e"}}
Add route target to the default NS
Traceback (most recent call last):
  File "/opt/contrail/utils/add_route_target.py", line 112, in <module>
    main()
  File "/opt/contrail/utils/add_route_target.py", line 108, in main
    MxProvisioner(args_str)
  File "/opt/contrail/utils/add_route_target.py", line 31, in __init__
    self._args.route_target_number)
  File "/opt/contrail/utils/provision_bgp.py", line 180, in add_route_target
    net_obj = vnc_lib.virtual_network_read(fq_name=rt_inst_fq_name[:-1])
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 58, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 704, in _object_read
    res_type, fq_name, fq_name_str, id, ifmap_id)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1080, in _read_args_to_id
    return (True, self.fq_name_to_id(res_type, fq_name))
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 58, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1361, in fq_name_to_id
    content = self._request_server(OP_POST, uri, data=json_body)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1094, in _request_server
    retry_after_authn=retry_after_authn, retry_count=retry_count)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1147, in _request
    % (op, url, data, content))
vnc_api.exceptions.NoIdError: Unknown id: Error: oper 1 url /fqname-to-id body {"fq_name": ["default-domain", "k8s-default", "k8s-default-pod-network"], "type": "virtual-network"} response Name ['default-domain', 'k8s-default', 'k8s-default-pod-network'] not found
command terminated with exit code 1
Traceback (most recent call last):
  File "/opt/contrail/utils/add_route_target.py", line 112, in <module>
    main()
  File "/opt/contrail/utils/add_route_target.py", line 108, in main
    MxProvisioner(args_str)
  File "/opt/contrail/utils/add_route_target.py", line 31, in __init__
    self._args.route_target_number)
  File "/opt/contrail/utils/provision_bgp.py", line 180, in add_route_target
    net_obj = vnc_lib.virtual_network_read(fq_name=rt_inst_fq_name[:-1])
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 58, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 704, in _object_read
    res_type, fq_name, fq_name_str, id, ifmap_id)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1080, in _read_args_to_id
    return (True, self.fq_name_to_id(res_type, fq_name))
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 58, in wrapper
    return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1361, in fq_name_to_id
    content = self._request_server(OP_POST, uri, data=json_body)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1094, in _request_server
    retry_after_authn=retry_after_authn, retry_count=retry_count)
  File "/usr/lib/python2.7/site-packages/vnc_api/vnc_api.py", line 1147, in _request
    % (op, url, data, content))
vnc_api.exceptions.NoIdError: Unknown id: Error: oper 1 url /fqname-to-id body {"fq_name": ["default-domain", "k8s-default", "k8s-default-service-network"], "type": "virtual-network"} response Name ['default-domain', 'k8s-default', 'k8s-default-service-network'] not found
command terminated with exit code 1
Provision BGP out to an example MX Gateway
Provision an example BGP Peer for Contrail Controller Federation
Perfroming provisioning of controller specific BGP parameters
Perfroming provisioning of controller specific BGP parameters
Perfroming provisioning of controller specific BGP parameters
  18. Check the Contrail status:
$ ./contrail-status.sh
 
**************************************
******node is 100.72.0.121
**************************************

###############################################################################
#     ___        ______     ___        _      _      ____  _             _    #
#    / \ \      / / ___|   / _ \ _   _(_) ___| | __ / ___|| |_ __ _ _ __| |_  #
#   / _ \ \ /\ / /\___ \  | | | | | | | |/ __| |/ / \___ \| __/ _` | '__| __| #
#  / ___ \ V  V /  ___) | | |_| | |_| | | (__|   <   ___) | || (_| | |  | |_  #
# /_/   \_\_/\_/  |____/   \__\_\\__,_|_|\___|_|\_\ |____/ \__\__,_|_|   \__| #
#-----------------------------------------------------------------------------#
#                      Amazon EKS Quick Start bastion host                    #
#    https://docs.aws.amazon.com/quickstart/latest/amazon-eks-architecture/   #
###############################################################################
Unable to find image 'hub.juniper.net/contrail/contrail-status:2008.121' locally
2008.121: Pulling from contrail/contrail-status
f34b00c7da20: Already exists
5a390a7d68be: Already exists
07ca884ff4ba: Already exists
0d7531696e74: Already exists
eda9dec1319f: Already exists
c52247bf208e: Already exists
a5dc1d3a1a1f: Already exists
0297580c16ad: Already exists
e341bea3e3e5: Pulling fs layer
12584a95f49f: Pulling fs layer
367eed12f241: Pulling fs layer
12584a95f49f: Download complete
367eed12f241: Verifying Checksum
367eed12f241: Download complete
e341bea3e3e5: Verifying Checksum
e341bea3e3e5: Download complete
e341bea3e3e5: Pull complete
12584a95f49f: Pull complete
367eed12f241: Pull complete
Digest: sha256:54ba0b280811a45f846d673addd38d4495eec0e7c3a7156e5c0cd556448138a7
Status: Downloaded newer image for hub.juniper.net/contrail/contrail-status:2008.121
Pod              Service         Original Name                          Original Version  State    Id            Status
                 redis           contrail-external-redis                2008-121          running  accadc98d704  Up 40 minutes
analytics        api             contrail-analytics-api                 2008-121          running  10245c7a6914  Up 44 minutes
analytics        collector       contrail-analytics-collector           2008-121          running  0dca4b93ab8b  Up 44 minutes
analytics        nodemgr         contrail-nodemgr                       2008-121          running  d7426442a261  Up 43 minutes
analytics        provisioner     contrail-provisioner                   2008-121          running  0fc9dd4d4899  Up 40 minutes
analytics-alarm  alarm-gen       contrail-analytics-alarm-gen           2008-121          running  b0bf2ccc85c6  Up 43 minutes
analytics-alarm  kafka           contrail-external-kafka                2008-121          running  243740b3dcc1  Up 43 minutes
analytics-alarm  nodemgr         contrail-nodemgr                       2008-121          running  ccc4dcb217a0  Up 43 minutes
analytics-alarm  provisioner     contrail-provisioner                   2008-121          running  c4b29a3cc543  Up 40 minutes
analytics-snmp   nodemgr         contrail-nodemgr                       2008-121          running  2b7f27f280ab  Up 43 minutes
analytics-snmp   provisioner     contrail-provisioner                   2008-121          running  6a228daa9611  Up 40 minutes
analytics-snmp   snmp-collector  contrail-analytics-snmp-collector      2008-121          running  99be46f9d53b  Up 43 minutes
analytics-snmp   topology        contrail-analytics-snmp-topology       2008-121          running  528635330a97  Up 43 minutes
config           api             contrail-controller-config-api         2008-121          running  10fe07047b64  Up 42 minutes
config           device-manager  contrail-controller-config-devicemgr   2008-121          running  9a9bdc71e62e  Up 42 minutes
config           nodemgr         contrail-nodemgr                       2008-121          running  426cf167a2dc  Up 42 minutes
config           provisioner     contrail-provisioner                   2008-121          running  1edf12a1bd2c  Up 41 minutes
config           schema          contrail-controller-config-schema      2008-121          running  8634b4f363ac  Up 42 minutes
config           svc-monitor     contrail-controller-config-svcmonitor  2008-121          running  11b45ee00139  Up 42 minutes
config-database  cassandra       contrail-external-cassandra            2008-121          running  b07d460fc690  Up 43 minutes
config-database  nodemgr         contrail-nodemgr                       2008-121          running  a9eea4b90de2  Up 43 minutes
config-database  provisioner     contrail-provisioner                   2008-121          running  c30f3a886459  Up 41 minutes
config-database  rabbitmq        contrail-external-rabbitmq             2008-121          running  ead17463fa02  Up 41 minutes
config-database  zookeeper       contrail-external-zookeeper            2008-121          running  509355220e84  Up 46 minutes
control          control         contrail-controller-control-control    2008-121          running  1771c57e1a13  Up 42 minutes
control          dns             contrail-controller-control-dns        2008-121          running  6fbd73a52097  Up 42 minutes
control          named           contrail-controller-control-named      2008-121          running  aaf3317caddf  Up 42 minutes
control          nodemgr         contrail-nodemgr                       2008-121          running  da30aefd6b5a  Up 42 minutes
control          provisioner     contrail-provisioner                   2008-121          running  c501f97b1d71  Up 42 minutes
database         cassandra       contrail-external-cassandra            2008-121          running  cfd08e88f324  Up 43 minutes
database         nodemgr         contrail-nodemgr                       2008-121          running  2250aaa51b1c  Up 43 minutes
database         provisioner     contrail-provisioner                   2008-121          running  3207f608f21b  Up 42 minutes
database         query-engine    contrail-analytics-query-engine        2008-121          running  0a706056f9f9  Up 43 minutes
kubernetes       kube-manager    contrail-kubernetes-kube-manager       2008-121          running  a64cdc9ba8c2  Up 42 minutes
vrouter          agent           contrail-vrouter-agent                 2008-121          running  cb69a41cc18c  Up 43 minutes
vrouter          nodemgr         contrail-nodemgr                       2008-121          running  a9bece59538b  Up 43 minutes
vrouter          provisioner     contrail-provisioner                   2008-121          running  f48315907788  Up 40 minutes
webui            job             contrail-controller-webui-job          2008-121          running  3495bd7d52dc  Up 40 minutes
webui            web             contrail-controller-webui-web          2008-121          running  48c286bd11b4  Up 42 minutes

vrouter kernel module is PRESENT
== Contrail control ==
control: active
nodemgr: active
named: active
dns: active

== Contrail analytics-alarm ==
nodemgr: active
kafka: active
alarm-gen: active

== Contrail kubernetes ==
kube-manager: active

== Contrail database ==
nodemgr: active
query-engine: active
cassandra: active

== Contrail analytics ==
nodemgr: active
api: active
collector: active

== Contrail config-database ==
nodemgr: active
zookeeper: active
rabbitmq: active
cassandra: active

== Contrail webui ==
web: active
job: active

== Contrail vrouter ==
nodemgr: active
agent: timeout

== Contrail analytics-snmp ==
snmp-collector: active
nodemgr: active
topology: active

== Contrail config ==
svc-monitor: backup
nodemgr: active
device-manager: backup
api: active
schema: backup

NOTE: A vRouter agent timeout might appear in the output. In most cases, the vRouter is working fine and this is a cosmetic issue.

  19. Set up Contrail user interface access:
$ ./setup-contrail-ui.sh

...
*************************************************************************
Done to see the contrail ui point chrome to https://{bastion-public-ip-address}:8143
*************************************************************************
  20. Open the web GUI at https://{bastion-public-ip-address}:8143. The default user is admin and the password is contrail123.

NOTE: You may get some BGP alarm messages upon login. These messages occur because sample BGP peering relationships are established with gateway devices and federated clusters. Delete the BGP peers in your environment if you want to clear the alarms.

  21. Modify the Auto Scaling groups so that you can stop instances that are not in use:
$ export SCALINGGROUPS=( $(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[].AutoScalingGroupName" --output text) )
$ aws autoscaling suspend-processes --auto-scaling-group-name ${SCALINGGROUPS[0]}
$ aws autoscaling suspend-processes --auto-scaling-group-name ${SCALINGGROUPS[1]}
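
The two commands above assume exactly two Auto Scaling groups; a hedged alternative that suspends every group returned by the query:

$ for g in "${SCALINGGROUPS[@]}"; do aws autoscaling suspend-processes --auto-scaling-group-name "$g"; done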
  22. If you plan on deleting stacks at a later time, you will have to reset this configuration and use the resume-processes option before deleting the primary stack:
$ export SCALINGGROUPS=( $(aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[].AutoScalingGroupName" --output text) )
$ aws autoscaling resume-processes --auto-scaling-group-name ${SCALINGGROUPS[0]}
$ aws autoscaling resume-processes --auto-scaling-group-name ${SCALINGGROUPS[1]}
  23. If you have a public network that you’d like to use for ingress via a gateway, perform the following configuration steps:

    1. Enter https://bastion-public-ip-address:8143 to connect to the web user interface.
    2. Navigate to Configure > Networks > k8s-default > networks (left side of page) > Add network (+)
    3. In the Add network box, enter the following parameters:
      • Name: k8s-public
      • Subnet: Select ipv4, then enter the IP address of your public service network.
      • Leave all other fields in subnet as default.
      • advanced: External = tick
      • advanced: Shared = tick
      • route target: Click +. Enter a route target for your public network. For example, 64512:1000.
      • Click Save.
  24. Deploy a test application on each node:

$ cd TestDaemonSet
$ ./create.sh
$ kubectl get pods -A -o wide
  25. Deploy a multitier test application:
$ cd ../TestApp
$ ./Create.sh
$ kubectl get deployments -n justlikenetflix
$ kubectl get pods -o wide -n justlikenetflix
$ kubectl get services -n justlikenetflix
$ kubectl get ingress -n justlikenetflix
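
Once the ingress has an address, the application can be probed end to end; a sketch (the jsonpath assumes a hostname-based load balancer address, which may differ in your environment):

$ ADDR=$(kubectl get ingress -n justlikenetflix \
    -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
$ curl -v "http://${ADDR}/"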