Background
A while back I bought three servers to learn Kubernetes. To save money I went with Tencent Cloud's student-discount machines, but since they were purchased under three separate accounts they cannot share a private network, so the cluster had to be deployed over the public internet. I ran into a lot of pitfalls along the way, but climbed out of them one by one, and I'm writing the process up here for anyone else who accidentally ended up with servers that can't use an internal network.
Server specs
Three machines with 4 cores and 8 GB RAM each. For a learning environment this is enough to run the k8s cluster itself plus a few Spring Boot projects; KubeSphere will feel a bit sluggish on top of it. If you're only studying, pay-as-you-go billing is the cheaper option.
A few links
Kubernetes: https://kubernetes.io/
KubeSphere: https://kubesphere.com.cn/
For a private-network deployment, see the Shang Silicon Valley (尚硅谷) tutorial instead.
Deployment steps
1. Set the hostname on each machine (run the matching line on each node)
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
2. Base environment (run on all machines)
# Put SELinux into permissive mode
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap, which kubelet requires
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Load br_netfilter now (modules-load.d only applies at boot) and let iptables see bridged traffic
sudo modprobe br_netfilter
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
3. Create a virtual NIC bound to the public IP (the key to a public-network deployment)
ifconfig eth0:1 <your-public-IP>
cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 << EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=<your-public-IP>
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF
sudo systemctl restart network
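Since this file has to be created on every node with that node's own public IP, it helps to template it from a variable. A minimal sketch; it writes into a temp directory so it runs anywhere (on the real nodes the target is /etc/sysconfig/network-scripts/ifcfg-eth0:1, and the IP below is just this post's example value):

```shell
# Generate ifcfg-eth0:1 from a variable so each node only changes PUBLIC_IP.
PUBLIC_IP="110.40.155.237"   # replace with this node's public IP
dir="$(mktemp -d)"           # use /etc/sysconfig/network-scripts on the real nodes
cat > "$dir/ifcfg-eth0:1" <<EOF
BOOTPROTO=static
DEVICE=eth0:1
IPADDR=${PUBLIC_IP}
PREFIX=32
TYPE=Ethernet
USERCTL=no
ONBOOT=yes
EOF
grep IPADDR "$dir/ifcfg-eth0:1"   # IPADDR=110.40.155.237
```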
4. Add hostname mappings on all machines (use each server's public IP)
echo "110.40.155.237 master" >> /etc/hosts
echo "43.142.47.211 slave1" >> /etc/hosts
echo "43.138.55.127 slave2" >> /etc/hosts
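Appending blindly to /etc/hosts duplicates entries if the step is ever re-run. A hedged idempotent variant, demonstrated on a temp file so it runs anywhere; point HOSTS_FILE at /etc/hosts on the real nodes:

```shell
# Add each mapping only if it is not already present.
HOSTS_FILE="$(mktemp)"   # use /etc/hosts on the real nodes
add_mapping() {
  grep -qF "$1" "$HOSTS_FILE" || echo "$1" >> "$HOSTS_FILE"
}
add_mapping "110.40.155.237 master"
add_mapping "43.142.47.211 slave1"
add_mapping "43.138.55.127 slave2"
add_mapping "110.40.155.237 master"   # re-running adds nothing
grep -c master "$HOSTS_FILE"          # prints 1
```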
5. Install kubelet, kubeadm, and kubectl on all machines
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
6. Pre-pull the kubeadm bootstrap images on all machines
sudo tee ./images.sh <<- 'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
7. Modify the kubelet startup arguments (run on every machine; important)
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Append --node-ip=<this node's public IP> to the ExecStart line:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=<public-IP>
# Then run: sudo systemctl daemon-reload (the change takes effect when kubelet restarts)
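Editing the drop-in by hand on three nodes is error-prone; the append can be scripted with sed, under the assumption that the ExecStart line matches the stock kubeadm drop-in. A sketch demonstrated on a temp copy (the real path is /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf, and the IP is this post's example):

```shell
PUBLIC_IP="110.40.155.237"   # this node's public IP (example value)
conf="$(mktemp)"             # stand-in for 10-kubeadm.conf
cat > "$conf" <<'EOF'
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
EOF
# Append --node-ip to the ExecStart line ("&" in sed is the whole matched line).
sed -i "s|^ExecStart=/usr/bin/kubelet .*|& --node-ip=${PUBLIC_IP}|" "$conf"
grep -o -- "--node-ip=[0-9.]*" "$conf"   # --node-ip=110.40.155.237
# On the real node, follow up with:
#   sudo systemctl daemon-reload && sudo systemctl restart kubelet
```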
8. Initialize the master node (master only)
8.1 Create the kubeadm-config.yaml file
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.9
apiServer:
  certSANs:
  - master          # replace with your hostname
  - 110.41.115.236  # replace with your public IP
  - 10.0.4.7        # replace with your private IP
  - 10.96.0.1
controlPlaneEndpoint: 110.41.115.236:6443  # replace with your public IP
imageRepository: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
networking:
  # Pod CIDR; match it to your network plugin: 192.168.0.0/16 for calico, 10.244.0.0/16 for flannel
  podSubnet: 10.244.0.0/16
  # Service CIDR
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
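The hostname, public IP, and private IP placeholders above have to stay mutually consistent; one way to avoid typos is to generate the file from variables. A sketch using this post's example values, writing to a temp file (use ./kubeadm-config.yaml on the real master; the KubeProxyConfiguration document can be appended unchanged):

```shell
# Template the ClusterConfiguration from per-cluster variables.
PUBLIC_IP="110.41.115.236"   # example value from this post
PRIVATE_IP="10.0.4.7"        # example value from this post
MASTER_HOSTNAME="master"
out="$(mktemp)"              # write to ./kubeadm-config.yaml on the real master
cat > "$out" <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.9
apiServer:
  certSANs:
  - ${MASTER_HOSTNAME}
  - ${PUBLIC_IP}
  - ${PRIVATE_IP}
  - 10.96.0.1
controlPlaneEndpoint: ${PUBLIC_IP}:6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
grep controlPlaneEndpoint "$out"   # controlPlaneEndpoint: 110.41.115.236:6443
```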
8.2 Run the initialization command
kubeadm init --config=kubeadm-config.yaml
If initialization fails, reset and clean up before retrying:
kubeadm reset
rm -rf /root/.kube/
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/dockershim
sudo rm -rf /var/run/kubernetes
sudo rm -rf /var/lib/cni
sudo rm -rf /var/lib/etcd
sudo rm -rf /etc/cni/net.d
8.3 After a successful init
8.3.1 What it prints
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 110.40.225.236:6443 --token 0tb7mi.q6q1cwt7mv11pxa0 \
    --discovery-token-ca-cert-hash sha256:b69907b021012f8a2968afb73872b782e1b753385d4ccedc71e94a79fbed3e80 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 133.40.222.236:6443 --token 0tb7mi.q6q1cwt7mv11pxa0 \
    --discovery-token-ca-cert-hash sha256:b69907b021012f8a2968afb73872b782e1b753385d4ccedc71e94a79fbed3e80
8.3.2 Step 1: set up kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
8.3.3 Step 2 (root only)
export KUBECONFIG=/etc/kubernetes/admin.conf
8.4 Modify the kube-apiserver parameters (master only)
vim /etc/kubernetes/manifests/kube-apiserver.yaml
Set --advertise-address to the public IP and add --bind-address=0.0.0.0 so the apiserver listens on all interfaces; since this is a static pod manifest, the kubelet restarts the apiserver automatically after you save. The full manifest after editing:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.20.8:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=123.40.456.236
    - --bind-address=0.0.0.0
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver:v1.20.9
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.0.20.8
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 10.0.20.8
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 10.0.20.8
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
8.5 Install the network plugin: flannel (I never got calico to work)
8.5.1 Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
8.5.2 Edit kube-flannel.yml as below (the additions are marked with comments)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  seLinux:
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --public-ip=$(PUBLIC_IP)  # added: advertise this node's public IP
        - --iface=eth0              # added: interface for inter-node traffic
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: PUBLIC_IP           # added: with hostNetwork, status.podIP is the
          valueFrom:                # node IP, i.e. the public IP set via --node-ip
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
8.5.3 Apply the network plugin
kubectl apply -f kube-flannel.yml
8.6 Join the worker nodes
8.6.1 The join command (from your own init output; the values below are examples)
kubeadm join 123.45.165.236:6443 --token 8z98g4.p7eafjdkcaqd279a \
    --discovery-token-ca-cert-hash sha256:0f6891168a726c265528355ca932ae74769bfef1e0f7d0f89b089a428eda9829
8.6.2 Check the result with these commands (on the master)
kubectl get pods -A
kubectl get nodes -o wide
journalctl -f -u kubelet
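The `kubectl get nodes` output can also be checked mechanically, e.g. listing any node whose STATUS is not Ready. A sketch over hypothetical sample output (on the cluster, pipe the real command into awk instead of the embedded string):

```shell
# List nodes whose STATUS column is not "Ready" (sample output embedded).
sample='NAME     STATUS     ROLES                  AGE   VERSION
master   Ready      control-plane,master   10m   v1.20.9
slave1   Ready      <none>                 5m    v1.20.9
slave2   NotReady   <none>                 1m    v1.20.9'
not_ready=$(echo "$sample" | awk 'NR>1 && $2 != "Ready" {print $1}')
echo "$not_ready"   # slave2
```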
8.6.3 Regenerate the join command (on the master)
kubeadm token create --print-join-command
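The --discovery-token-ca-cert-hash can also be recomputed by hand from /etc/kubernetes/pki/ca.crt (this openssl pipeline follows the procedure in the Kubernetes kubeadm docs). The sketch below generates a throwaway self-signed certificate so it runs anywhere; on the master, point it at the real ca.crt instead:

```shell
# Stand-in CA cert for illustration (use /etc/kubernetes/pki/ca.crt on the master).
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# sha256 of the DER-encoded public key, which is the format kubeadm expects.
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```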
9. Deploy the dashboard
9.1 Apply the manifest
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
9.2 Manifest contents (use this copy if the download fails, or fetch it from GitHub)
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster", "dashboard-metrics-scraper"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: kubernetesui/dashboard:v2.3.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
      - name: dashboard-metrics-scraper
        image: kubernetesui/metrics-scraper:v1.0.6
        ports:
        - containerPort: 8000
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      volumes:
      - name: tmp-volume
        emptyDir: {}
9.4 Expose an access port
Change the dashboard Service from ClusterIP to NodePort (kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard and set type: NodePort), then look up the assigned port:
kubectl get svc -A | grep kubernetes-dashboard
9.5 Create an access account (save the following as dash.yaml)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
9.6 Apply the file and generate the access token
kubectl apply -f dash.yaml
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
This prints a token like the one below; use it to log in to the dashboard at https://<public-IP>:<NodePort>.
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikk3cXItU0I0ZTV0bG9YcXFITWY5aklOOEtPcGZTNnJuaWlZcEtLLVpNb2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXBjc2I3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L_NRCIGzjH1qCqfgSt2jgwVmDQg