Kubernetes Learning 10: The Service Resource

I. Service and its supporting components

  1. On the Kubernetes platform, Pods have a lifecycle. To give clients a fixed access endpoint, we add a fixed middle layer between the clients and the serving Pods; that middle layer is called a Service. For its name resolution to really work, a Service depends heavily on an add-on deployed on top of k8s: the Kubernetes DNS service. Different k8s versions implement it differently; newer versions use CoreDNS by default, while versions before 1.11 used kube-dns. Because Service name resolution depends strongly on this DNS add-on, we must deploy CoreDNS or kube-dns as soon as the k8s cluster itself is up.
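A quick way to confirm the DNS add-on is in place (a minimal sketch, assuming the usual kubeadm layout where the add-on runs in kube-system and keeps the conventional k8s-app=kube-dns label, which CoreDNS retains for compatibility):

# list the cluster DNS pods and the Service they are exposed through
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl get svc -n kube-system kube-dns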

  2. For k8s to be able to offer network functionality to clients, it relies on third-party solutions. These plug in (at least in newer versions) through CNI, the standard container network interface, so any third-party solution that follows the plugin standard can be used. There are many such solutions, e.g. the flannel and canal we deployed earlier.

  3. In k8s there are three classes of network addresses: the node network, the pod network, and the cluster network (also called the Service network). The first two really exist, whether on physical hardware or in software emulation. The IPs of the cluster network, however, are called virtual IPs, because they are never actually configured on any interface; they appear only in Service rules.

  4. So what exactly is a Service? On every node we run a component called kube-proxy, which continuously watches the API server on the master for changes to Service resources, using the watch request method built into k8s. Whenever a Service resource changes, including being created, kube-proxy converts it into rules on the current node that implement the Service, that is, rules that schedule client requests to the specific backend Pods. Those rules may be iptables rules or ipvs rules, depending on how the Service is implemented.
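To see what kube-proxy has actually generated on a node, you can inspect the rules directly; a minimal sketch, assuming either iptables mode or ipvs mode (with ipvsadm installed):

# iptables mode: Service rules hang off the KUBE-SERVICES chain in the nat table
iptables -t nat -S KUBE-SERVICES | head
# ipvs mode: every ClusterIP shows up as a virtual server with the Pod IPs as real servers
ipvsadm -Ln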

  5. Service has three implementation models in k8s:

    a. userspace. A request from an internal client Pod to some service first reaches the iptables rules, i.e. the Service rules, in the kernel space of the current node. In this mode, once the request hits the Service it is first handed to kube-proxy, a user-space process listening on some socket on the local node, which does the actual handling and then passes the request back via the Service IP so it is finally proxied to one of the Pods associated with that Service. Notice the path: the client Pod sends to the Service, the Service hands the request back up to the kube-proxy listening on that port, and kube-proxy does the dispatching; because kube-proxy is a user-space process, this mode is called userspace. It is very inefficient, since each request goes into kernel space, back out to the user-space kube-proxy on the same host, which re-encapsulates and proxies the packet, and then back into kernel space where the iptables rules finish the dispatch.

    

    b. iptables. In the second mode the client requests the Service IP directly; the request packet is intercepted by the Service rules in the local kernel space and dispatched straight to a server Pod. Scheduling happens entirely in kernel space and is done directly by the iptables rules.

    c. ipvs. Once the client Pod's request reaches kernel space it is scheduled directly by ipvs rules to the relevant Pods within the pod network's address range.

  6. Whatever mode we configure the Service proxy to use when installing and configuring k8s determines which kind of rules it generates. Versions up to 1.10 used iptables; before that, up to 1.1, userspace was used; 1.11 defaults to ipvs where ipvs has been activated, falling back to iptables otherwise. If the Pods behind a Service change, for example the Service's label selector now matches one more Pod, that information is immediately reflected on the apiserver, because Pod information is stored in the apiserver's etcd; kube-proxy detects the change and immediately converts it into Service rules, so the conversion is dynamic and real-time. Likewise, if a Pod is deleted and not rebuilt, the resulting state is fed back to the apiserver's etcd, kube-proxy watches that change, and immediately rewrites the iptables rules.
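Where that mode is set depends on how the cluster was deployed; a minimal sketch, assuming a kubeadm-style cluster where kube-proxy reads its configuration from the kube-proxy ConfigMap in kube-system (the field name follows the KubeProxyConfiguration API):

kubectl edit configmap kube-proxy -n kube-system
# inside config.conf, set the proxy mode:
#   mode: "ipvs"        # or "iptables"; an empty string means the built-in default
# then recreate the kube-proxy pods so they pick up the change:
kubectl delete pod -n kube-system -l k8s-app=kube-proxy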

  7. There is a middle layer between a Service and its Pods: a Service does not point at Pods directly, it first points at an Endpoints object, which is itself a standard k8s object. Endpoints is essentially a list of Pod addresses plus ports, and it is the Endpoints object that is associated with the backend Pods. For everyday reasoning you can think of the Service as going straight to the Pods, but in fact we can even create Endpoints resources for a Service by hand.
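A hand-maintained Endpoints object is, for example, how an out-of-cluster backend can be put behind an in-cluster Service name. A minimal sketch, assuming a hypothetical external MySQL at 192.168.10.100; because the Service has no selector, Kubernetes will not manage its Endpoints:

apiVersion: v1
kind: Service
metadata:
  name: external-mysql        # hypothetical name
  namespace: default
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql        # must match the Service name
  namespace: default
subsets:
- addresses:
  - ip: 192.168.10.100        # assumed external address, outside the cluster
  ports:
  - port: 3306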

    

 

II. Creating a Service

  1. Create the Service resource from a manifest. We used to create Services with the expose command; now we create them from a manifest.

[root@k8smaster ~]# kubectl explain svc
KIND:     Service
VERSION:  v1 # core v1 API group

DESCRIPTION:
     Service is a named abstraction of software service (for example, mysql)
     consisting of local port (for example 3306) that the proxy listens on, and
     the selector that determines which pods will answer requests sent through
     the proxy.

FIELDS:
   apiVersion    <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind    <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata    <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec    <Object>
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status    <Object>
     Most recently observed status of the service. Populated by the system.
     Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

[root@k8smaster ~]# kubectl explain svc.spec
KIND:     Service
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the behavior of a service.
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

     ServiceSpec describes the attributes that a user creates on a service.

FIELDS:
   clusterIP    <string> # auto-assigned by default, but can also be specified manually
     clusterIP is the IP address of the service and is usually assigned randomly
     by the master. If an address is specified manually and is not in use by
     others, it will be allocated to the service; otherwise, creation of the
     service will fail. This field can not be changed through updates. Valid
     values are "None", empty string (""), or a valid IP address. "None" can be
     specified for headless services when proxying is not required. Only applies
     to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is
     ExternalName. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   externalIPs    <[]string>
     externalIPs is a list of IP addresses for which nodes in the cluster will
     also accept traffic for this service. These IPs are not managed by
     Kubernetes. The user is responsible for ensuring that traffic arrives at a
     node with this IP. A common example is external load-balancers that are not
     part of the Kubernetes system.

   externalName    <string>
     externalName is the external reference that kubedns or equivalent will
     return as a CNAME record for this service. No proxying will be involved.
     Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and
     requires Type to be ExternalName.

   externalTrafficPolicy    <string>
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.

   healthCheckNodePort    <integer>
     healthCheckNodePort specifies the healthcheck nodePort for the service. If
     not specified, HealthCheckNodePort is created by the service api backend
     with the allocated nodePort. Will use user-specified nodePort value if
     specified by the client. Only effects when Type is set to LoadBalancer and
     ExternalTrafficPolicy is set to Local.

   loadBalancerIP    <string>
     Only applies to Service Type: LoadBalancer LoadBalancer will get created
     with the IP specified in this field. This feature depends on whether the
     underlying cloud-provider supports specifying the loadBalancerIP when a
     load balancer is created. This field will be ignored if the cloud-provider
     does not support the feature.

   loadBalancerSourceRanges    <[]string>
     If specified and supported by the platform, this will restrict traffic
     through the cloud-provider load-balancer will be restricted to the
     specified client IPs. This field will be ignored if the cloud-provider does
     not support the feature." More info:
     https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/

   ports    <[]Object>  # which Service port we map to which backend container port
     The list of ports that are exposed by this service. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   publishNotReadyAddresses    <boolean>
     publishNotReadyAddresses, when set to true, indicates that DNS
     implementations must publish the notReadyAddresses of subsets for the
     Endpoints associated with the Service. The default value is false. The
     primary use case for setting this field is to use a StatefulSet's Headless
     Service to propagate SRV records for its Pods without respect to their
     readiness for purpose of peer discovery.

   selector    <map[string]string> # which Pod resources this Service associates with
     Route service traffic to pods with label keys and values matching this
     selector. If empty or not present, the service is assumed to have an
     external process managing its endpoints, which Kubernetes will not modify.
     Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if
     type is ExternalName. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/

   sessionAffinity    <string>
     Supports "ClientIP" and "None". Used to maintain session affinity. Enable
     client IP based session affinity. Must be ClientIP or None. Defaults to
     None. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

   sessionAffinityConfig    <Object>
     sessionAffinityConfig contains the configurations of session affinity.

   type    <string>
     type determines how the Service is exposed. Defaults to ClusterIP. Valid
     options are ExternalName, ClusterIP, NodePort, and LoadBalancer.
     "ExternalName" maps to the specified externalName. "ClusterIP" allocates a
     cluster-internal IP address for load-balancing to endpoints. Endpoints are
     determined by the selector or if that is not specified, by manual
     construction of an Endpoints object. If clusterIP is "None", no virtual IP
     is allocated and the endpoints are published as a set of endpoints rather
     than a stable IP. "NodePort" builds on ClusterIP and allocates a port on
     every node which routes to the clusterIP. "LoadBalancer" builds on NodePort
     and creates an external load-balancer (if supported in the current cloud)
     which routes to the clusterIP. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
[root@k8smaster ~]# kubectl explain svc.spec.ports
KIND:     Service
VERSION:  v1

RESOURCE: ports <[]Object>

DESCRIPTION:
     The list of ports that are exposed by this service. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

     ServicePort contains information on service's port.

FIELDS:
   name    <string> # the name of this port entry
     The name of this port within the service. This must be a DNS_LABEL. All
     ports within a ServiceSpec must have unique names. This maps to the 'Name'
     field in EndpointPort objects. Optional if only one ServicePort is defined
     on this service.

   nodePort    <integer> # the port on each node; only meaningful when type is NodePort (or LoadBalancer)
     The port on each node on which this service is exposed when type=NodePort
     or LoadBalancer. Usually assigned by the system. If specified, it will be
     allocated to the service if unused or else creation of the service will
     fail. Default is to auto-allocate a port if the ServiceType of this Service
     requires one. More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

   port    <integer> -required-  # the port on which this Service itself is exposed
     The port that will be exposed by this service.

   protocol    <string> # protocol, TCP by default
     The IP protocol for this port. Supports "TCP" and "UDP". Default is TCP.

   targetPort    <string> # the port on the container (Pod)
     Number or name of the port to access on the pods targeted by the service.
     Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If
     this is a string, it will be looked up as a named port in the target Pod's
     container ports. If this is not specified, the value of the 'port' field is
     used (an identity map). This field is ignored for services with
     clusterIP=None, and should be omitted or set equal to the 'port' field.
     More info:
     https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service

  2. The type attribute in kubectl explain svc.spec has four possible values:

    a. The default is ClusterIP, meaning the Service is given a cluster IP used only for communication inside the cluster. With this type only two ports matter:

      1) port, the port on the Service address

      2) targetPort, the port on the Pod IP

[root@k8smaster manifests]# cat redis-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: logstor
  clusterIP: 10.97.97.97 # pin a fixed ClusterIP
  type: ClusterIP
  ports:
  - port: 6379 # port on the Service
    targetPort: 6379 # port on the Pod IP

[root@k8smaster manifests]# kubectl apply -f redis-svc.yaml 
service/redis created

[root@k8smaster manifests]# kubectl get svc -o wide --show-labels
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE       SELECTOR                 LABELS
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP    11d       <none>                   component=apiserver,provider=kubernetes
redis        ClusterIP   10.97.97.97   <none>        6379/TCP   14s       app=redis,role=logstor   <none>

[root@k8smaster manifests]# kubectl describe svc redis
Name:              redis
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"redis","namespace":"default"},"spec":{"clusterIP":"10.97.97.97","ports":[{"por...
Selector:          app=redis,role=logstor
Type:              ClusterIP
IP:                10.97.97.97 # the cluster IP
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.244.1.45:6379  # the matched Pod IP
Session Affinity:  None
Events:            <none>

      Once the Service is created, as long as the cluster's DNS service exists we can resolve the service name directly. Name resolution works like this: every time a Service is created, resource records are automatically and dynamically added to the cluster DNS, and not just one, including SRV records for the service's ports, A records, and so on. Once added they can be resolved. The record format is SVC_NAME.NS_NAME.DOMAIN.LTD., and the cluster's default domain suffix is svc.cluster.local., so if we have not changed the suffix, every Service we create gets a name in this format; the record for the Service above, for example, is redis.default.svc.cluster.local
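A quick way to verify that record (a minimal sketch; 10.96.0.10 is the cluster DNS Service IP also used with dig later in this post, and busybox:1.28 is chosen only because later busybox builds ship a broken nslookup):

# query the cluster DNS directly from a node
dig -t A redis.default.svc.cluster.local. @10.96.0.10
# or resolve from a throwaway Pod that uses the cluster DNS
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup redis.default.svc.cluster.local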

    b. NodePort admits traffic from outside the cluster; the nodePort field is only usable with this type, otherwise it has no effect.

[root@k8smaster manifests]# kubectl get pods -o wide --show-labels
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE       LABELS
filebeat-ds-f5drs               1/1       Running   1          3d        10.244.1.44   k8snode1   app=filebeat,controller-revision-hash=2004607620,pod-template-generation=2,release=stable
filebeat-ds-n9hgz               1/1       Running   1          3d        10.244.2.34   k8snode2   app=filebeat,controller-revision-hash=2004607620,pod-template-generation=2,release=stable
liveness-httpget-pod            1/1       Running   2          6d        10.244.2.38   k8snode2   <none>
myapp-deploy-69b47bc96d-6x987   1/1       Running   1          4d        10.244.2.35   k8snode2   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-69b47bc96d-f2cjq   1/1       Running   1          4d        10.244.1.47   k8snode1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-69b47bc96d-tlq6v   1/1       Running   1          4d        10.244.1.48   k8snode1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-69b47bc96d-vx46z   1/1       Running   1          4d        10.244.1.46   k8snode1   app=myapp,pod-template-hash=2560367528,release=canary
myapp-deploy-69b47bc96d-vzdpt   1/1       Running   1          4d        10.244.2.36   k8snode2   app=myapp,pod-template-hash=2560367528,release=canary
nginx-deploy-5b595999-zgjgz     1/1       Running   0          3d        10.244.1.52   k8snode1   pod-template-hash=16151555,run=nginx-deploy
poststart-pod                   1/1       Running   30         6d        10.244.2.37   k8snode2   <none>
readiness-httpget-pod           1/1       Running   1          6d        10.244.2.39   k8snode2   <none>
redis-5b5d6fbbbd-kk782          1/1       Running   1          3d        10.244.1.45   k8snode1   app=redis,pod-template-hash=1618296668,role=logstor
[root@k8smaster manifests]# cat myapp.svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: 10.99.99.99 # pin a fixed ClusterIP
  type: NodePort
  ports:
  - port: 80 # port on the Service
    targetPort: 80 # port on the Pod IP
    nodePort: 30080 # port on the node; can be left out to let the system allocate one dynamically
  
[root@k8smaster manifests]# kubectl apply -f myapp.svc.yaml 
service/myapp configured
[root@k8smaster manifests]# kubectl get svc -o wide --show-labels
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE       SELECTOR                   LABELS
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        12d       <none>                     component=apiserver,provider=kubernetes
myapp        NodePort    10.99.99.99   <none>        80:30080/TCP   13m       app=myapp,release=canary   <none>
redis        ClusterIP   10.97.97.97   <none>        6379/TCP       22h       app=redis,role=logstor     <none>
[root@k8smaster manifests]# kubectl describe svc myapp
Name:                     myapp
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"clusterIP":"10.99.99.99","ports":[{"nod...
Selector:                 app=myapp,release=canary
Type:                     NodePort
IP:                       10.99.99.99
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.244.1.46:80,10.244.1.47:80,10.244.1.48:80 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

      Open another shell and curl any node of the cluster: you can see the Service also load-balances, and the traffic goes through several levels of translation: first the nodePort is translated to the Service port, then the Service forwards to the Pod port.

[root@k8smaster ~]# while true; do curl http://192.168.10.10:30080/hostname.html; sleep 5; done
myapp-deploy-69b47bc96d-6x987
myapp-deploy-69b47bc96d-6x987
myapp-deploy-69b47bc96d-vx46z
myapp-deploy-69b47bc96d-f2cjq
myapp-deploy-69b47bc96d-6x987
myapp-deploy-69b47bc96d-vzdpt
myapp-deploy-69b47bc96d-vzdpt
^C

    c. LoadBalancer (one-click use of Load Balancing as a Service, LBaaS): used when k8s is deployed on virtual machines running in a cloud environment and that cloud supports LB load balancers; it automatically triggers the creation of an external load balancer. Suppose, for example, that we buy four virtual hosts on Alibaba Cloud together with its LBaaS offering and deploy a k8s cluster on those four VPSes. The cluster can interact with the API of the underlying public-cloud IaaS layer; it has that capability built in, so when it calls the IaaS API it can request that an external load balancer be created. Say we have four nodes, one master and three workers, and the three workers all expose the same nodePort to the outside; k8s then automatically asks the underlying IaaS to build a load balancer in pure software and feeds it the services exposed on the node ports of those three nodes (note: node IPs), i.e. while creating the software load balancer through the IaaS API it can tell it which backend nodes exist. Later, when a user outside the cloud accesses the load balancer generated by the cloud's internal LBaaS, that load balancer schedules the request to the nodePort of one of the backend nodes, the nodePort forwards to the Service, and the Service load-balances inside the cluster to a Pod. So there are two levels of load balancing: the first spreads user requests across the nodes, and the second lets each node, through the Service, reverse-proxy to one of the many Pods inside the cluster.
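A minimal sketch of what such a Service could look like, assuming a cloud provider that actually implements type LoadBalancer (the myapp/canary labels match the Deployment used in this post; on bare metal the EXTERNAL-IP would simply stay pending):

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb              # hypothetical name
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  type: LoadBalancer          # builds on NodePort; the cloud provisions the external LB
  ports:
  - port: 80                  # port on the Service
    targetPort: 80            # port on the Pod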

      

    d. ExternalName: brings a service that lives outside the cluster into the cluster so that it can be used inside directly. Suppose we have a k8s cluster with three worker nodes, and some Pods on those nodes act as clients. When such a client Pod accesses a service, that service should normally be provided by other Pods, but it can happen that the service the Pod wants does not exist in the cluster, while something outside the cluster does provide it, for example on our local LAN outside the k8s cluster, or out on the Internet (a DNS service, say), and we want that service to be reachable from inside the cluster. Pods generally use private addresses, so even if we could route the request out of the local network, the external response would not find its way back, and communication would fail. ExternalName is exactly for this: we create a Service in the cluster whose endpoint is not a local port but an association with the external service. When a client Pod accesses the Service, the request goes out to the external service through the various levels of translation, including the node; the external service replies to the node IP, the node hands the reply to the Service, and the Service hands it to the client Pod. The Pod can therefore consume the external service just as if it were an internal one. For this type of Service the ClusterIP only matters for the client Pod's internal resolution; what really matters is externalName, which must genuinely be a name rather than an IP, and that name must be resolvable by our DNS service before it can be accessed, which is the basic restriction ExternalName brings with it (just be aware of it). The relevant field in svc.spec is shown below, followed by a small example sketch.

[root@k8smaster /]# kubectl explain svc.spec
(output identical to the svc.spec reference above; the field that matters here is externalName)

   externalName    <string> # only useful when type is ExternalName; the value becomes a CNAME record, and that name must ultimately be resolvable to A records by a real DNS server on the Internet (or by our local DNS) for the two sides to communicate
     externalName is the external reference that kubedns or equivalent will
     return as a CNAME record for this service. No proxying will be involved.
     Must be a valid RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and
     requires Type to be ExternalName.
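A minimal sketch of such a Service, assuming a hypothetical external host redis.example.com that the cluster DNS (or the upstream DNS it forwards to) can resolve; inside the cluster, redis-external.default.svc.cluster.local then resolves to a CNAME pointing at that host:

apiVersion: v1
kind: Service
metadata:
  name: redis-external              # hypothetical name
  namespace: default
spec:
  type: ExternalName
  externalName: redis.example.com   # assumed resolvable external FQDN; must be a name, not an IP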

      In addition, when a Service load-balances it also supports sessionAffinity (session affinity; it appears in the explain output above). The default value is None, so scheduling is effectively random, done by iptables; if we set it to ClientIP, requests coming from the same client IP are always scheduled to the same backend Pod.

[root@k8smaster manifests]# kubectl patch svc myapp -p '{"spec":{"sessionAffinity":"ClientIP"}}'
service/myapp patched

[root@k8smaster ~]# while true; do curl http://192.168.10.10:30080/hostname.html; sleep 3; done
myapp-deploy-67f6f6b4dc-tf2zm
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-tf2zm
myapp-deploy-67f6f6b4dc-tr4sn
myapp-deploy-67f6f6b4dc-lqpxm
myapp-deploy-67f6f6b4dc-lqpxm
# results after the patch was applied
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
myapp-deploy-67f6f6b4dc-g694w
^C
[root@k8smaster manifests]# kubectl describe svc myapp
Name:                     myapp
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"clusterIP":"10.99.99.99","ports":[{"nod...
Selector:                 app=myapp,release=canary
Type:                     NodePort
IP:                       10.99.99.99
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.244.1.53:80,10.244.1.54:80,10.244.1.55:80 + 2 more...
Session Affinity:         ClientIP # the patch we applied has taken effect
External Traffic Policy:  Cluster
Events:                   <none>

III. There is another kind of Service, the headless Service. So far, when a client Pod uses a Service it resolves the Service's name; every Service has a name, and the resolution result is its ClusterIP, normally a single address. But we can also cut out that middle layer: every Pod has a name of its own too, and we can have the Service name resolve directly to the backend Pod IPs. Such a Service is called a headless Service. To create one we still explicitly define clusterIP, but set its value to None.

[root@k8smaster manifests]# cat myapp-svc-headless.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  clusterIP: "None"
  ports:
  - port: 80 # port on the Service
    targetPort: 80 # port on the Pod IP


[root@k8smaster manifests]# kubectl apply -f myapp-svc-headless.yaml 
service/myapp-svc created
[root@k8smaster manifests]# kubectl get svc -o wide --show-labels
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE       SELECTOR                   LABELS
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16d       <none>                     component=apiserver,provider=kubernetes
myapp        NodePort    10.99.99.99   <none>        80:30080/TCP   3d        app=myapp,release=canary   <none>
myapp-svc    ClusterIP   None          <none>        80/TCP         16s       app=myapp,release=canary   <none>
redis        ClusterIP   10.97.97.97   <none>        6379/TCP       4d        app=redis,role=logstor     <none>

# first, look at the IPs of our Pods
[root@k8smaster manifests]# kubectl get pods -o wide
NAME                            READY     STATUS    RESTARTS   AGE       IP            NODE
filebeat-ds-f5drs               1/1       Running   1          7d        10.244.1.44   k8snode1
filebeat-ds-n9hgz               1/1       Running   1          7d        10.244.2.34   k8snode2
liveness-httpget-pod            1/1       Running   2          10d       10.244.2.38   k8snode2
myapp-deploy-67f6f6b4dc-g694w   1/1       Running   0          49m       10.244.1.55   k8snode1
myapp-deploy-67f6f6b4dc-k6rbp   1/1       Running   0          51m       10.244.1.54   k8snode1
myapp-deploy-67f6f6b4dc-lqpxm   1/1       Running   0          51m       10.244.2.40   k8snode2
myapp-deploy-67f6f6b4dc-tf2zm   1/1       Running   0          49m       10.244.2.41   k8snode2
myapp-deploy-67f6f6b4dc-tr4sn   1/1       Running   0          51m       10.244.1.53   k8snode1
nginx-deploy-5b595999-zgjgz     1/1       Running   0          7d        10.244.1.52   k8snode1
poststart-pod                   1/1       Running   54         10d       10.244.2.37   k8snode2
readiness-httpget-pod           1/1       Running   1          10d       10.244.2.39   k8snode2
redis-5b5d6fbbbd-kk782          1/1       Running   1          7d        10.244.1.45   k8snode1

# resolve with dig; myapp-svc.default.svc.cluster.local. is our svc name and 10.96.0.10 is the cluster DNS (CoreDNS) Service IP
[root@k8smaster ~]# dig -t A myapp-svc.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.9.4-RedHat-9.9.4-50.el7 <<>> -t A myapp-svc.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41276
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-svc.default.svc.cluster.local. IN    A

;; ANSWER SECTION: # five A records are resolved, one per matching Pod
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.53
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.54
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.1.55
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.2.40
myapp-svc.default.svc.cluster.local. 5 IN A    10.244.2.41

;; Query time: 75 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sat May 25 10:26:14 CST 2019
;; MSG SIZE  rcvd: 319

  But you can see Services have a limitation: once we define a Service, reaching the Pods behind it takes several levels of scheduling or proxying, so if we wanted to run an HTTPS service we would find that every myapp Pod has to be configured as an HTTPS host itself. In fact k8s has another way of bringing external traffic into the cluster, called Ingress. A Service is a layer-4 scheduler, whereas Ingress is a layer-7 scheduler: it uses a Pod running a layer-7 application to bring external traffic in, although it still cannot work without Services. Because Ingress schedules at layer 7, it has to rely on an application with layer-7 capabilities running in a Pod, such as nginx, haproxy, and so on.
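To make the contrast concrete, a minimal Ingress sketch. The networking.k8s.io/v1 API shown here exists only on newer clusters (the cluster in this post would still use extensions/v1beta1), an ingress controller such as ingress-nginx must already be deployed, and the host is hypothetical; the backend refers to the myapp Service created above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress             # hypothetical name
  namespace: default
spec:
  rules:
  - host: myapp.example.com       # assumed hostname pointed at the ingress controller
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp           # the Service defined earlier in this post
            port:
              number: 80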

Reposted from: https://www.cnblogs.com/Presley-lpc/p/10894879.html
