Kubernetes Definitive Guide - Services Notes

Pods need a way to discover other pods in order to consume their services. But:

  1. Pods may be deleted or replaced at any time, so hard-coding pod addresses is not viable.
  2. Kubernetes assigns a pod its IP address only after the pod has been scheduled to a node, so clients can't know pod IPs ahead of time.
  3. Horizontal scaling means multiple pods may provide the same service at once, each with its own IP address. Clients shouldn't care how many pods are backing the service.

INTRODUCING SERVICES

Services are exactly this kind of tool: they abstract a group of pods behind a single, stable entry point. By creating a service for a set of pods, we expose one stable IP address for external and internal clients to use. Here's a diagram:

[figure: external and internal clients connecting to pods through a Service]

Creating a service

A service can be backed by multiple pods. So how do we define which pods belong to which service?

Through labels, of course!

[figure: a label selector determining which pods belong to a service]

The simplest way to create a service is with kubectl expose, but in practice you'll mostly write YAML manifests. Here's an example, kubia-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80                ❶
    targetPort: 8080        ❷
  selector:                 ❸
    app: kubia              ❸
  • ❶ The port this service will be available on
  • ❷ The container port the service will forward to
  • ❸ All pods with the app=kubia label will be part of this service.

Once this is created, all pods with the app=kubia label are grouped into one service; requests arriving on the service's port 80 are forwarded to port 8080 of one of those pods.
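
For comparison, a roughly equivalent kubectl expose one-liner, assuming the pods are managed by a ReplicationController named kubia (a sketch, not from the original notes):

$ kubectl expose rc kubia --name=kubia --port=80 --target-port=8080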

Let's list the services and take a look:

$ kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.111.240.1     <none>        443/TCP   30d
kubia        10.111.249.153   <none>        80/TCP    6m        ❶

You can see a cluster IP has been assigned, but note that this cluster IP is only reachable from inside the cluster.

You can access the service from within the cluster in the following ways:

  • The obvious way is to create a pod that will send the request to the service’s cluster IP and log the response. You can then examine the pod’s log to see what the service’s response was.
  • You can ssh into one of the Kubernetes nodes and use the curl command.
  • You can execute the curl command inside one of your existing pods through the kubectl exec command.

Let's hit the service with curl from inside a pod:

$ kubectl exec kubia-7nog1 -- curl -s http://10.111.249.153
You've hit kubia-gzwli

Why the double dash?

The double dash (--) marks the end of options for the kubectl command; everything after it is the command to run inside the pod. Without the double dash, the -s option would be interpreted as an option to kubectl exec itself, and you'd get this error:

$ kubectl exec kubia-7nog1 curl -s http://10.111.249.153
The connection to the server 10.111.249.153 was refused – did you
     specify the right host or port?

When you run the command above, the request travels the path shown in the figure below (quite a journey: from your local machine into the pod, from the pod's curl to the service, and from the service to a randomly selected pod):

[figure: the path of the request when curl-ing the service through kubectl exec]

Each curl may hit a different pod, because the service forwards each connection to a randomly selected backing pod.

So if you want all requests from the same client to be forwarded to the same pod, set the service's sessionAffinity property to ClientIP (the default is None). Here's an example:

apiVersion: v1
kind: Service
spec:
  sessionAffinity: ClientIP
  ...

Kubernetes supports only two types of service session affinity: None and ClientIP. Cookie-based session affinity isn't supported, because services operate at the TCP/UDP level and know nothing about HTTP cookies.

So far our service exposes only a single port, but services can also expose multiple ports. For example, suppose our pods listen for HTTP on port 8080 and for HTTPS on port 8443. We can then write:

apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - name: http              ❶
    port: 80                ❶
    targetPort: 8080        ❶
  - name: https             ❷
    port: 443               ❷
    targetPort: 8443        ❷
  selector:                 ❸
    app: kubia              ❸
  • ❶ Port 80 is mapped to the pods’ port 8080.
  • ❷ Port 443 is mapped to pods’ port 8443.
  • ❸ The label selector always applies to the whole service.

The label selector applies to the service as a whole; it can't be configured per port. If you want different ports to map to different sets of pods, you need to create two services.

Also, to make things easier to manage, you can give each pod port a name:

kind: Pod
spec:
  containers:
  - name: kubia
    ports:
    - name: http               ❶
      containerPort: 8080      ❶
    - name: https              ❷
      containerPort: 8443      ❷
  • ❶ Container’s port 8080 is called http
  • ❷ Port 8443 is called https.

You can then refer to those port names in the service spec:

apiVersion: v1
kind: Service
spec:
  ports:
  - name: http              ❶
    port: 80                ❶
    targetPort: http        ❶
  - name: https             ❷
    port: 443               ❷
    targetPort: https       ❷
  • ❶ Port 80 is mapped to the container’s port called http.
  • ❷ Port 443 is mapped to the container’s port, whose name is https.

Why name ports? Because if the pod's port number changes later, you don't have to update the service spec.

Discovering services

With the service created, we now have a stable IP for reaching the backing pods. But how does a client learn the service's IP and port in the first place?

When a pod is started, Kubernetes initializes a set of environment variables pointing to each service that exists at that moment, and injects them into the pod. Let's inspect them:

$ kubectl exec kubia-3inly env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kubia-3inly
KUBERNETES_SERVICE_HOST=10.111.240.1
KUBERNETES_SERVICE_PORT=443
...
KUBIA_SERVICE_HOST=10.111.249.153                           ❶
KUBIA_SERVICE_PORT=80                                       ❷
...
  • ❶ Here’s the cluster IP of the service.
  • ❷ And here’s the port the service is available on.

For example, if a frontend pod needs to call a backend database pod, you can expose the backend through a service named backend-database; the frontend pod can then look up the BACKEND_DATABASE_SERVICE_HOST and BACKEND_DATABASE_SERVICE_PORT environment variables to discover the service's IP and port.

Dashes in the service name are converted to underscores and all letters are uppercased when the service name is used as the prefix in the environment variable’s name.

Environment variables are one way to look up services, but you can also use the cluster's internal DNS server.

Remember when you listed the pods in the kube-system namespace? One of them was called kube-dns, and the kube-system namespace also includes a corresponding service with the same name.

This pod runs a DNS server, and every DNS query made by a process running in a pod in the cluster is handled by it.

Whether a pod uses the internal DNS server or not is configurable through the dnsPolicy property in each pod's spec.

Each service gets a DNS entry in the internal DNS server, so a client that knows the service's name can access it through its fully qualified domain name (FQDN).

Using the frontend/backend example again, the FQDN looks like this:

backend-database.default.svc.cluster.local
Here backend-database corresponds to the service name, default stands for the namespace the service is defined in, and svc.cluster.local is a configurable cluster domain suffix used in all cluster-local service names.

The client must still know the service's port number. If the service is using a standard port (for example, 80 for HTTP or 5432 for Postgres), that shouldn't be a problem. If not, the client can get the port number from the environment variable.
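
To combine the two mechanisms, a client could use the DNS name together with the port from the environment variable; a minimal sketch, assuming the hypothetical backend-database service from above exists:

$ curl http://backend-database:$BACKEND_DATABASE_SERVICE_PORT    # run from inside the frontend pod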

In practice, if the client pod is in the same namespace as the service, you can even omit svc.cluster.local and the namespace, and simply use backend-database. Let's try accessing our service through its FQDN:

First, get a shell inside a pod with kubectl exec:

$ kubectl exec -it kubia-3inly bash
root@kubia-3inly:/#

Then we can reach the service in three ways:

root@kubia-3inly:/# curl http://kubia.default.svc.cluster.local
You've hit kubia-5asi2

root@kubia-3inly:/# curl http://kubia.default
You've hit kubia-3inly

root@kubia-3inly:/# curl http://kubia
You've hit kubia-8awf3

You can see why the shorter names work by looking at the DNS search domains in the pod's /etc/resolv.conf:

root@kubia-3inly:/# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local ...

When a service seems unreachable, you might instinctively try to ping it:

root@kubia-3inly:/# ping kubia
PING kubia.default.svc.cluster.local (10.111.249.153): 56 data bytes
^C--- kubia.default.svc.cluster.local ping statistics ---
54 packets transmitted, 0 packets received, 100% packet loss

Hmm, curl-ing the service works, but pinging it doesn't. That's because the service's cluster IP is a virtual IP, and only has meaning when combined with the service port.

Introducing service endpoints

Services don't link to pods directly. There's a resource sitting between them: the Endpoints resource. You can see it in the output of kubectl describe:

$ kubectl describe svc kubia
Name:                kubia
Namespace:           default
Labels:              <none>
Selector:            app=kubia                                          ❶
Type:                ClusterIP
IP:                  10.111.249.153
Port:                <unset> 80/TCP
Endpoints:           10.108.1.4:8080,10.108.2.5:8080,10.108.2.6:8080    ❷
Session Affinity:    None
No events.
  • ❶ The service’s pod selector is used to create the list of endpoints.
  • ❷ The list of pod IPs and ports that represent the endpoints of this service

An Endpoints resource is simply a list of the IP address and port pairs exposed by a service. You can retrieve it with kubectl get:

$ kubectl get endpoints kubia
NAME    ENDPOINTS                                         AGE
kubia   10.108.1.4:8080,10.108.2.5:8080,10.108.2.6:8080   1h

Although the pod selector is defined in the service spec, it's not used directly when redirecting incoming connections. Instead, the selector is used to build a list of IPs and ports, which is then stored in the Endpoints resource. When a client connects to a service, the service proxy selects one of those IP and port pairs and redirects the incoming connection to the server listening at that location.

An example of manually creating Endpoints and Service resources (this is necessary when the service has no pod selector):

You have probably realized this already, but having the service's endpoints decoupled from the service allows them to be configured and updated manually.

If you create a service without a pod selector, Kubernetes won't even create the Endpoints resource (after all, without a selector, it can't know which pods to include in the service). It's up to you to create the Endpoints resource to specify the list of endpoints for the service.

To create a service with manually managed endpoints, you need to create both a Service and an Endpoints resource.

First, create the service:

apiVersion: v1
kind: Service
metadata:
  name: external-service          ❶
spec:                             ❷
  ports:
  - port: 80
  • ❶ The name of the service must match the name of the Endpoints object (see next listing).
  • ❷ This service has no selector defined.

Then create the Endpoints resource:

apiVersion: v1
kind: Endpoints
metadata:
  name: external-service      ❶
subsets:
  - addresses:
    - ip: 11.11.11.11         ❷
    - ip: 22.22.22.22         ❷
    ports:
    - port: 80                ❸
  • ❶ The name of the Endpoints object must match the name of the service (see previous listing).
  • ❷ The IPs of the endpoints that the service will forward connections to
  • ❸ The target port of the endpoints

The Endpoints object must have the same name as the service, and it holds the list of target IP addresses and ports the service forwards connections to. Once both resources exist, our manually managed service works like any other. A diagram:

[figure: pods consuming a service with two manually configured external endpoints]
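
A quick way to verify the manual service, as a sketch assuming a pod such as kubia-7nog1 exists (the connection should be forwarded to 11.11.11.11 or 22.22.22.22):

$ kubectl exec kubia-7nog1 -- curl -s http://external-service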

Creating an alias for an external service


Instead of exposing an external service by manually configuring the service's Endpoints, a simpler method allows you to refer to an external service by its fully qualified domain name (FQDN). To create a service that serves as an alias for an external service, you create a Service resource with the type field set to ExternalName. For example, let's imagine there's a public API available at api.somecompany.com. You can define a service that points to it as shown in the following listing.

apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName                         ❶
  externalName: someapi.somecompany.com      ❷
  ports:
  - port: 80
  • ❶ Service type is set to ExternalName
  • ❷ The fully qualified domain name of the actual service

After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name (or even external-service) instead of using the service’s actual FQDN. This hides the actual service name and its location from pods consuming the service, allowing you to modify the service definition and point it to a different service any time later, by only changing the externalName attribute or by changing the type back to ClusterIP and creating an Endpoints object for the service—either manually or by specifying a label selector on the service and having it created automatically.

ExternalName services are implemented solely at the DNS level—a simple CNAME DNS record is created for the service. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don’t even get a cluster IP.
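
To see that CNAME-level behavior yourself, you could resolve the service's name from inside a pod; a sketch, assuming a DNS tool such as nslookup is available in the container image:

$ kubectl exec kubia-7nog1 -- nslookup external-service    # should return a CNAME pointing at someapi.somecompany.com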

Exposing services to external clients

[figure: exposing a service to external clients]

There are several ways to do this:

Setting the service type to NodePort - each cluster node opens a port and forwards traffic received on that port to the service.

Setting the service type to LoadBalancer, an extension of the NodePort type - the service becomes reachable through a dedicated load balancer provisioned by the infrastructure, which forwards requests to the service.

Creating an Ingress resource, a radically different mechanism for exposing multiple services through a single IP address - it operates at the HTTP level.

Below is an example of each:

Using a NodePort service

apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort             ❶
  ports:
  - port: 80                 ❷
    targetPort: 8080         ❸
    nodePort: 30123          ❹
  selector:
    app: kubia
  • ❶ Set the service type to NodePort.
  • ❷ This is the port of the service’s internal cluster IP.
  • ❸ This is the target port of the backing pods.
  • ❹ The service will be accessible through port 30123 of each of your cluster nodes.

Let's look at the result:

$ kubectl get svc kubia-nodeport
NAME             CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubia-nodeport   10.111.254.223   <nodes>       80:30123/TCP   2m

Look at the EXTERNAL-IP column. It shows <nodes>, indicating the service is accessible through the IP address of any cluster node. The PORT(S) column shows both the internal port of the cluster IP (80) and the node port (30123). The service is accessible at the following addresses:

  • 10.111.254.223:80
  • <1st node's IP>:30123
  • <2nd node's IP>:30123, and so on.

Here's a diagram:

[figure: an external client connecting to a NodePort service through either node's port 30123]

You can get all the nodes' external IPs with the following JSONPath expression:

$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
130.211.97.55 130.211.99.206

What this command does:

  • Go through all the elements in the items attribute.
  • For each element, enter the status attribute.
  • Filter elements of the addresses attribute, taking only those that have the type attribute set to ExternalIP.
  • Finally, print the address attribute of the filtered elements.
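
As a variation, swapping ExternalIP for InternalIP in the filter should print the nodes' internal addresses instead (a sketch, not from the original notes):

$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'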

More documentation on JSONPath is available at http://kubernetes.io/docs/user-guide/jsonpath.

Once you know the node IPs, you can hit the service through the node port:

$ curl http://130.211.97.55:30123
You've hit kubia-ym8or
$ curl http://130.211.99.206:30123
You've hit kubia-xueq1

Exposing a service through an external load balancer

apiVersion: v1
kind: Service
metadata:
  name: kubia-loadbalancer
spec:
  type: LoadBalancer                ❶
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia

❶ This type of service obtains a load balancer from the infrastructure hosting the Kubernetes cluster.
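
After the service is created, the cloud infrastructure takes a little while to provision the load balancer; EXTERNAL-IP shows <pending> until then. Once it's populated, the service is reachable from outside through that IP. A sketch of what this looks like (the IP and pod name below are illustrative):

$ kubectl get svc kubia-loadbalancer
NAME                 CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
kubia-loadbalancer   10.111.241.153   130.211.53.173   80:32143/TCP   1m
$ curl http://130.211.53.173
You've hit kubia-xueq1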

Session affinity and web browsers
Because your service is now exposed externally, you may try accessing it with your web browser. You’ll see something that may strike you as odd—the browser will hit the exact same pod every time. Did the service’s session affinity change in the meantime? With kubectl describe, you can double-check that the service’s session affinity is still set to None, so why don’t different browser requests hit different pods, as is the case when using curl?
Let me explain what’s happening. The browser is using keep-alive connections and sends all its requests through a single connection, whereas curl opens a new connection every time. Services work at the connection level, so when a connection to a service is first opened, a random pod is selected and then all network packets belonging to that connection are all sent to that single pod. Even if session affinity is set to None, users will always hit the same pod (until the connection is closed).

[figure: an external client connecting to a LoadBalancer service]

EXPOSING SERVICES EXTERNALLY THROUGH AN INGRESS RESOURCE

Finally, the Ingress. An Ingress can expose multiple different services through a single IP address. A diagram:

[figure: multiple services exposed through a single Ingress]

Before we go into the features an Ingress object provides, let me emphasize that to make Ingress resources work, an Ingress controller needs to be running in the cluster. Different Kubernetes environments use different implementations of the controller, but several don’t provide a default controller at all.
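
On Minikube, for example, the controller ships as an addon that may need to be enabled first (a sketch for that environment):

$ minikube addons list             # check whether the ingress addon is enabled
$ minikube addons enable ingress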

Creating an Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: kubia.example.com               ❶
    http:
      paths:
      - path: /                           ❷
        backend:
          serviceName: kubia-nodeport     ❷
          servicePort: 80                 ❷
  • ❶ This Ingress maps the http://kubia.example.com domain name to your service.
  • ❷ All requests will be sent to port 80 of the kubia-nodeport service.

Accessing the service through the Ingress

Check whether it was created successfully:

$ kubectl get ingresses
NAME      HOSTS               ADDRESS          PORTS     AGE
kubia     kubia.example.com   192.168.99.100   80        29m
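
For kubia.example.com to actually resolve to the Ingress, you either configure your DNS, or, for a quick test, map the name to the address shown in the ADDRESS column by adding a line like this to /etc/hosts on the client machine:

192.168.99.100   kubia.example.com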

With everything in place, we can access the service through a browser or curl:

$ curl http://kubia.example.com
You've hit kubia-ke823

Figure 5.10 shows how the client connected to one of the pods through the Ingress controller. The client first performed a DNS lookup of kubia.example.com, and the DNS server (or the local operating system) returned the IP of the Ingress controller. The client then sent an HTTP request to the Ingress controller and specified kubia.example.com in the Host header. From that header, the controller determined which service the client was trying to access, looked up the pod IPs through the Endpoints object associated with the service, and forwarded the client's request to one of the pods.

[figure 5.10: accessing pods through the Ingress controller]
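
Since the routing decision is based on the Host header, you can also bypass DNS during testing and set the header yourself, using the address from the kubectl get ingresses output (a sketch, not from the original notes):

$ curl -H "Host: kubia.example.com" http://192.168.99.100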

Exposing multiple services through the same Ingress

...
  - host: kubia.example.com
    http:
      paths:
      - path: /kubia                ❶
        backend:                    ❶
          serviceName: kubia        ❶
          servicePort: 80           ❶
      - path: /bar                  ❷
        backend:                    ❷
          serviceName: bar          ❷
          servicePort: 80           ❷
  • ❶ Requests to http://kubia.example.com/kubia will be routed to the kubia service.
  • ❷ Requests to http://kubia.example.com/bar will be routed to the bar service.

Similarly, you can route to different services based on the host instead of the path:

spec:
  rules:
  - host: foo.example.com          ❶
    http:
      paths:
      - path: /
        backend:
          serviceName: foo         ❶
          servicePort: 80
  - host: bar.example.com          ❷
    http:
      paths:
      - path: /
        backend:
          serviceName: bar         ❷
          servicePort: 80
  • ❶ Requests for http://foo.example.com will be routed to service foo.
  • ❷ Requests for http://bar.example.com will be routed to service bar.

Finally, some tips for troubleshooting services.

When you’re unable to access your pods through the service, you should start by going through the following list:

  • First, make sure you’re connecting to the service’s cluster IP from within the cluster, not from the outside.
  • Don’t bother pinging the service IP to figure out if the service is accessible (remember, the service’s cluster IP is a virtual IP and pinging it will never work).
  • If you’ve defined a readiness probe, make sure it’s succeeding; otherwise the pod won’t be part of the service.
  • To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints.
  • If you’re trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn’t work, see if you can access it using its cluster IP instead of the FQDN.
  • Check whether you’re connecting to the port exposed by the service and not the target port.
  • Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
  • If you can’t even access your app through the pod’s IP, make sure your app isn’t only binding to localhost.
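
For the last two checks, a minimal sketch (the pod name and IP are the illustrative ones used earlier; -o wide adds each pod's IP to the listing):

$ kubectl get pods -o wide
$ kubectl exec kubia-7nog1 -- curl -s http://10.108.1.4:8080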