This task demonstrates the traffic mirroring/shadowing capabilities of Istio. Traffic mirroring is a powerful concept that allows feature teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service. The mirroring happens out of band of the critical request path for the primary service.
Before you begin
- Install Istio
- Start two versions of the httpbin service that have access logging enabled.
httpbin-v1:
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:8080", "httpbin:app"]
        ports:
        - containerPort: 8080
EOF
httpbin-v2:
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:8080", "httpbin:app"]
        ports:
        - containerPort: 8080
EOF
httpbin Kubernetes service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: httpbin
EOF
- Start the sleep service so we can use curl to provide load.
sleep service:
cat <<EOF | istioctl kube-inject -f - | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF
Mirroring
Let's set up a scenario to demonstrate the traffic-mirroring capabilities of Istio. We have two versions of our httpbin service. By default, Kubernetes will load balance across both versions of the service. We'll use Istio to force all traffic to the v1 version of the httpbin service.
Creating default routing policy
1. Let's create a default route rule to route all traffic to v1 of our httpbin service:
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: httpbin-default-v1
spec:
  destination:
    name: httpbin
  precedence: 5
  route:
  - labels:
      version: v1
EOF
Now all traffic should go to httpbin v1. Let's try sending in some traffic:
export SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers'
{
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8080",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "eca3d7ed8f2e6a0a",
    "X-B3-Traceid": "eca3d7ed8f2e6a0a",
    "X-Ot-Span-Context": "eca3d7ed8f2e6a0a;eca3d7ed8f2e6a0a;0000000000000000"
  }
}
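If you want to confirm the rule was accepted before testing, istioctl of this era can list route rules (a sketch; the exact subcommand shape depends on your istioctl version):

```shell
# List the currently configured route rules; httpbin-default-v1 should appear
istioctl get routerules
```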
If we check the logs for the v1 and v2 httpbin pods, we should see access log entries for only v1:
$ kubectl logs -f httpbin-v1-2113278084-98whj -c httpbin
127.0.0.1 - - [07/Feb/2018:00:07:39 +0000] "GET /headers HTTP/1.1" 200 349 "-" "curl/7.35.0"
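To double-check, we can also tail the v2 pod's log and confirm it shows no requests while the default rule is in place. Pod names differ per cluster, so the label-based lookup below is a sketch rather than a literal pod name:

```shell
# Look up the v2 pod dynamically by its labels instead of hardcoding the name
export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
# With only the default v1 rule applied, this log should contain no /headers entries
kubectl logs $V2_POD -c httpbin
```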
2. Create a route rule to mirror traffic to v2:
cat <<EOF | istioctl create -f -
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: mirror-traffic-to-httbin-v2
spec:
  destination:
    name: httpbin
  precedence: 11
  route:
  - labels:
      version: v1
    weight: 100
  - labels:
      version: v2
    weight: 0
  mirror:
    name: httpbin
    labels:
      version: v2
EOF
This route rule specifies that we route 100% of the traffic to v1 and 0% to v2. Explicitly specifying the v2 service is necessary, because that is what causes the envoy cluster definitions to be created in the background. In future versions, we'll work to improve this so we don't have to explicitly specify a 0% weighted route.
The last stanza specifies that we want to mirror traffic to the httpbin v2 service. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority header appended with -shadow. For example, cluster-1 becomes cluster-1-shadow. It's also important to realize that these requests are mirrored as "fire and forget" (in other words, the responses are discarded).
Now if we send in traffic:
kubectl exec -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8080/headers'
We should see access logging for both v1 and v2. The access logs created in v2 are the mirrored copies of the requests that actually went to v1.
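For example, we can fetch both pods' logs the same way as before, looking the pod names up by label since they differ per cluster (a sketch, not output from a live run):

```shell
# Resolve pod names by label so the commands work in any cluster
export V1_POD=$(kubectl get pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
export V2_POD=$(kubectl get pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
kubectl logs $V1_POD -c httpbin   # access log entry for the real request
kubectl logs $V2_POD -c httpbin   # matching entry for the mirrored, fire-and-forget copy
```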
Cleaning up
- Remove the rules:
istioctl delete routerule mirror-traffic-to-httbin-v2
istioctl delete routerule httpbin-default-v1
- Shut down the httpbin services and client:
kubectl delete deploy httpbin-v1 httpbin-v2 sleep
kubectl delete svc httpbin