Kubernetes Installation Notes

Environment: CentOS 7.5
             Docker 1.13.1
             Kubernetes 1.5.2

Official site: https://kubernetes.io/
Kubernetes architecture and components:
The Master runs:
Etcd: the cluster's primary datastore, holding all resource objects and their state; all changes to etcd data go through the apiserver
Apiserver: the single entry point for cluster control. It wraps the create/delete/update/query operations on resource objects, serves as the communication hub for every component in the cluster, and provides the cluster's security mechanisms (authentication, authorization, etc.)
Scheduler: watches pod replica information through the apiserver's watch interface and uses its scheduling algorithm to pick the most suitable node for each pod
Controller manager: the core manager of the various resource controllers. Each resource type has its own controller, which keeps that resource in its "desired state". For example, kubectl get deployment shows a DESIRED count: when a pod fails, the deployment controller starts a new pod to restore the desired state, i.e. the replica count set by replicas
A Node runs:
Kubelet: handles creating, starting and stopping the pods the master assigns to this node, registers the node with the apiserver, monitors the node's resource usage, and reports it back to the master
Kube-proxy: the implementation of the Service abstraction; forwards requests arriving at a service to the matching pods
Docker engine: runs the containers inside the pods

Single-node installation:
yum install kubernetes etcd
[root@kbs Desktop]# yum list installed|grep kuber
kubernetes.x86_64                1.5.2-0.7.git269f928.el7       @extras
kubernetes-client.x86_64         1.5.2-0.7.git269f928.el7       @extras
kubernetes-master.x86_64         1.5.2-0.7.git269f928.el7       @extras
kubernetes-node.x86_64           1.5.2-0.7.git269f928.el7       @extras

[root@kbs Desktop]# yum list installed|grep docker
docker.x86_64                    2:1.13.1-68.gitdded712.el7.centos
docker-client.x86_64             2:1.13.1-68.gitdded712.el7.centos
docker-common.x86_64             2:1.13.1-68.gitdded712.el7.centos

1. Configuration files:
[root@kbs kubernetes]# ls /etc/kubernetes/
apiserver  config  controller-manager  kubelet  proxy  scheduler

[root@kbs etcd]# grep -iv -e '^#' -e '^$' etcd.conf 
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379"
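A quick sanity check, assuming etcd has been started (the endpoint is etcd v2's HTTP API on the listen-client-urls address configured above):

```shell
# Query etcd's version endpoint on the configured client URL;
# prints a small version JSON when etcd is up.
curl -s http://127.0.0.1:2379/version || echo "etcd not reachable on 127.0.0.1:2379"
```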

[root@kbs kubernetes]# grep -iv -e '^#' -e '^$' apiserver 
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
                         --- cluster IPs are internal to the cluster and not reachable from outside
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
                         --- ServiceAccount has been removed from the admission-control list here
KUBE_API_ARGS=""

[root@kbs kubernetes]# grep -iv -e '^#' -e '^$' config 
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"


[root@k8s kubernetes]# grep -iv -e '^#' -e '^$' kubelet 
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=daocloud.io/daocloud/google_containers_pause-amd64:3.0"
KUBELET_ARGS=""

The kubelet flag --pod-infra-container-image specifies the pod's infrastructure container; every pod starts one such base container, which you can see with docker ps.
For details see: http://dockone.io/question/1060
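A quick way to see them, assuming the docker CLI is available on the node (the grep pattern matches the pause image name used above):

```shell
# Each running pod contributes one extra pause/infra container to docker ps.
docker ps --format '{{.Image}}' | grep pause
```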

Start the services:
systemctl start etcd.service
systemctl start docker
systemctl start kube-apiserver.service  --- must be started before controller-manager and scheduler below
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl start kubelet.service
systemctl start kube-proxy.service 
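The sequence above can be scripted; a minimal sketch that keeps the ordering constraint (apiserver before the components that depend on it) and stops at the first failure:

```shell
# Services in dependency order: etcd and docker first, then kube-apiserver
# before the controllers that talk to it.
SERVICES="etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy"
for s in $SERVICES; do
  systemctl start "$s" || { echo "failed to start $s" >&2; break; }
done
```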

2. The kubectl command line:
kubectl --help
kubectl get --help

List resource objects:
kubectl  get  nodes|namespaces|deployments|pods|svc(service)|rs(replicasets)....  rs is short for replicasets
kubectl  get  pods  -l  app=nginx   -l is the short form of the label selector flag (--selector)
kubectl  get  pods  --selector="run=load-balancer-example" --output=wide
kubectl  describe  nodes|deployments|pods|svc....
Delete:
kubectl  delete svc|deployment|pod   <resource-name>
kubectl  delete  -f  <yaml-file>
kubectl  delete  pod  --all
Force delete:
kubectl delete pods <pod-name> --grace-period=0 --force
--- run a command inside a pod
kubectl  exec  -it  <pod-name>  /bin/bash
--- follow a pod's logs
kubectl  logs  -f  my-nginx1-558678731-1m00x   (the pod name)

[root@kbs kubernetes]# kubectl get pods -o wide
NAME                         READY     STATUS    AGE   IP           NODE
my-nginx2-4011639226-19zs4   1/1       Running   1h    172.17.0.3   127.0.0.1
my-nginx2-4011639226-r91cw   1/1       Running   1h    172.17.0.2   127.0.0.1

[root@kbs kubernetes]# kubectl describe svc/kubernetes 
Name:			kubernetes
Namespace:		default
Labels:			component=apiserver
			provider=kubernetes
Selector:		<none>
Type:			ClusterIP   ---- the type is one of ClusterIP, NodePort, LoadBalancer
IP:			10.254.0.1
Port:			https	443/TCP
Endpoints:		192.168.1.102:6443
Session Affinity:	ClientIP
No events.

If deleted pods keep reappearing, delete the deployment first: the deployment declares the cluster's desired state, so when a pod is deleted the deployment controller creates a new one to keep the pods and the replicaset at that desired state:
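In practice the fix looks like this (the deployment names are inferred from the pod names in the transcript below; use your own):

```shell
# Delete the owning Deployments rather than the pods; with the desired state
# gone, the ReplicaSets and pods are removed and not recreated.
kubectl delete deployment my-nginx test
kubectl get pods    # after the grace period: No resources found.
```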

[root@kbs ~]# kubectl delete pod --all
pod "my-nginx-236897103-twkj0" deleted
pod "test-2768805660-lfmgp" deleted

[root@kbs ~]# kubectl get pods
NAME                       READY     STATUS        RESTARTS   AGE
my-nginx-236897103-twkj0   0/1       Terminating   0          1m
test-2768805660-lfmgp      0/1       Terminating   0          1m
Run: [root@kbs ~]# systemctl status kube-apiserver.service

[root@kbs ~]# kubectl get pods
No resources found.

3. Configuring a pod cluster from the command line:

Create a deployment:
kubectl create deployment nginx --image nginx

kubectl run my-nginx1 --replicas=1 --labels="app=nginx1" --image=daocloud.io/library/nginx  --port=80
This command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has 1 Pod, each running the nginx application.
kubectl get deployments
kubectl get replicasets
kubectl get pods

[root@kbs ~]# kubectl describe deployment/my-nginx1
Name:			my-nginx1
Namespace:		default
CreationTimestamp:	Wed, 25 Jul 2018 15:00:23 +0800
Labels:			app=nginx1
Selector:		app=nginx1
Replicas:		1 updated | 1 total | 0 available | 1 unavailable
StrategyType:		RollingUpdate
MinReadySeconds:	0
RollingUpdateStrategy:	1 max unavailable, 1 max surge
Conditions:
  Type		Status	Reason
  ----		------	------
  Available 	True	MinimumReplicasAvailable
OldReplicaSets:	<none>
NewReplicaSet:	my-nginx1-558678731 (1/1 replicas created)
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	    Type		Reason			Message
  ---------	--------	-----	----				-------------	    --------	------			-------
  38s		    38s		       1	{deployment-controller }			Normal		ScalingReplicaSet	Scaled up replica set my-nginx1-558678731 to 1
  

kubectl get rs
kubectl describe rs/rs-name
[root@kbs ~]# kubectl describe rs/my-nginx1-558678731
Name:		my-nginx1-558678731
Namespace:	default
Image(s):	daocloud.io/library/nginx
Selector:	app=nginx1,pod-template-hash=558678731
Labels:		app=nginx1
		pod-template-hash=558678731
Replicas:	1 current / 1 desired
Pods Status:	1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen	LastSeen	Count	From				SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----				-------------	--------	------			-------
  3m		3m		1	{replicaset-controller }			Normal		SuccessfulCreate	Created pod: my-nginx1-558678731-1m00x



[root@kbs ~]# kubectl get pods/my-nginx1-558678731-1m00x -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP           NODE
my-nginx1-558678731-1m00x   1/1       Running   0          4m        172.17.0.2   127.0.0.1

[root@kbs ~]# kubectl describe pods/my-nginx1-558678731-1m00x
Name:		my-nginx1-558678731-1m00x
Namespace:	default
Node:		127.0.0.1/127.0.0.1
Start Time:	Wed, 25 Jul 2018 15:00:23 +0800
Labels:		app=nginx1
		pod-template-hash=558678731
Status:		Running
IP:		172.17.0.2
Controllers:	ReplicaSet/my-nginx1-558678731
Containers:
  my-nginx1:
    Container ID:		docker://32b467d7f1c1dd282a954afc433f7908e43e3912e3121f55e6490be1fe097f21
    Image:			daocloud.io/library/nginx
    Image ID:			docker-pullable://daocloud.io/library/nginx@sha256:9af87a8467adcc5f77321b1ad97f692c909cb1a3eb1ca466b1c0e2dcdf4fe74e
    Port:			80/TCP
    State:			Running
      Started:			Wed, 25 Jul 2018 15:01:08 +0800
    Ready:			True
    Restart Count:		0
    Volume Mounts:		<none>
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
No volumes.
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From			SubObjectPath			Type		Reason			Message
  ---------	--------	-----	----			-------------			--------	------			-------
  6m		6m		1	{default-scheduler }					Normal		Scheduled		Successfully assigned my-nginx1-558678731-1m00x to 127.0.0.1
  6m		6m		1	{kubelet 127.0.0.1}	spec.containers{my-nginx1}	Normal		Pulling			pulling image "daocloud.io/library/nginx"
  6m		5m		2	{kubelet 127.0.0.1}					Warning		MissingClusterDNS	kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
  5m		5m		1	{kubelet 127.0.0.1}	spec.containers{my-nginx1}	Normal		Pulled			Successfully pulled image "daocloud.io/library/nginx"
  5m		5m		1	{kubelet 127.0.0.1}	spec.containers{my-nginx1}	Normal		Created			Created container with docker id 32b467d7f1c1; Security:[seccomp=unconfined]
  5m		5m		1	{kubelet 127.0.0.1}	spec.containers{my-nginx1}	Normal		Started			Started container with docker id 32b467d7f1c1

Create a Service object for the deployment:
kubectl expose deployment  my-nginx1 --type=NodePort --name=my-service
kubectl get svc

[root@kbs ~]# kubectl describe svc/my-service
Name:			my-service
Namespace:		default
Labels:			app=nginx1
Selector:	        app=nginx1
Type:			NodePort
IP:			10.254.205.78
Port:			<unset>	80/TCP
NodePort:		<unset>	32019/TCP
Endpoints:		172.17.0.2:80
Session Affinity:	None
No events.

port is the port the Service exposes, targetPort is the Pod's port, and nodePort is the port the Service opens on each Node (externally reachable).
The forwarding chain is:
<NodeIP>:<nodePort> => <ServiceVIP>:<port> => <PodIP>:<targetPort>
If the service type is NodePort, every node listens on nodePort 32019 (it can be set explicitly or auto-allocated by k8s from the range 30000-32767).
Access it at http://<nodeip>:<nodeport>
curl http://127.0.0.1:32019
yes,nginx service1
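As an optional check from the node itself (what you see depends on kube-proxy's mode, so treat this as a sketch): kube-proxy holds the node port open with a real socket, and in iptables mode the NAT rules reference it too.

```shell
# Look for the example nodePort 32019 among listening sockets and NAT rules.
ss -lnt | grep 32019
iptables -t nat -S | grep 32019
```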

If the service type is LoadBalancer, an EXTERNAL-IP is allocated:
[root@kbs kubernetes]# kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1       <none>        443/TCP        4h
my-service   10.254.250.126   <nodes>       80:30080/TCP   2h
With an EXTERNAL-IP you no longer need to reach an individual node's port; access is http://<EXTERNAL-IP>:port

4. Configuring with yaml files:

The service.yaml file
--------------
apiVersion: v1
kind: Service
metadata:
   name: my-service
   labels:
      app: nginx2
spec:
   type: NodePort 
   selector:
      app: nginx2
   ports:
   - protocol: TCP
     port: 80
     targetPort: 80     # targetPort is the Pod's port
     nodePort: 30080    # nodePort is the port the Service opens on the Node
The deployment.yaml file
---------------
apiVersion: extensions/v1beta1 
kind: Deployment
metadata:
  name: my-nginx2
  labels:
     app: nginx2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx2
        image: daocloud.io/library/nginx
        ports:
        - containerPort: 80
        # you can also set cpu/memory resources, env vars, etc. for the pod's containers:
        resources:
           requests:
              memory: "64Mi"
              cpu: "250m"
           limits:
              memory: "128Mi"
              cpu: "1"
        env:                
        - name: GREETING
          value: "hello everyone"

As above, a yaml file describes an object:
Kind: one of the object types Pod/Service/Endpoints/Deployment/Statefulset/Configmap/Volume/Secret etc.
Metadata: common fields are name / namespace / labels / annotations etc.
Spec: declares replicas, selector, template, containers, etc.

kubectl  create  -f  deployment.yaml
kubectl  create  -f  service.yaml
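Once the pods from deployment.yaml are running, the env var and resource settings can be spot-checked inside a pod (the pod name below is the example one from this walkthrough; yours will differ):

```shell
# Confirm GREETING is set in the container and limits were applied to the pod.
kubectl exec my-nginx2-4011639226-19zs4 -- env | grep GREETING
kubectl describe pod my-nginx2-4011639226-19zs4 | grep -A 2 Limits
```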

[root@kbs Desktop]# kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP           NODE
my-nginx2-4011639226-19zs4   1/1       Running   0          1m        172.17.0.3   127.0.0.1
my-nginx2-4011639226-r91cw   1/1       Running   0          1m        172.17.0.2   127.0.0.1

[root@kbs Desktop]# kubectl get deployments
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-nginx2   2         2         2            1           10s

[root@kbs Desktop]# kubectl get namespace
NAME          STATUS    AGE
default       Active    2h
kube-system   Active    2h
Access it several times:
[root@kbs Desktop]# curl http://127.0.0.1:30080
yes,nginx service222
[root@kbs Desktop]# curl http://127.0.0.1:30080
yes,nginx service1
[root@kbs Desktop]# curl http://127.0.0.1:30080
yes,nginx service222
[root@kbs Desktop]# curl http://127.0.0.1:30080
yes,nginx service222
[root@kbs Desktop]# curl http://127.0.0.1:30080
yes,nginx service1

Follow the logs of the two pods in two windows; the requests are indeed load-balanced across different pods:
[root@kbs ~]# kubectl logs -f my-nginx2-4011639226-r91cw 
172.17.0.1 - - [25/Jul/2018:08:28:25 +0000] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0" "-"
172.17.0.1 - - [25/Jul/2018:08:28:29 +0000] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0" "-"
172.17.0.1 - - [25/Jul/2018:08:28:30 +0000] "GET / HTTP/1.1" 200 21 "-" "curl/7.29.0" "-"

[root@kbs ~]# kubectl logs -f my-nginx2-4011639226-19zs4
172.17.0.1 - - [25/Jul/2018:08:28:27 +0000] "GET / HTTP/1.1" 200 19 "-" "curl/7.29.0" "-"
172.17.0.1 - - [25/Jul/2018:08:28:31 +0000] "GET / HTTP/1.1" 200 19 "-" "curl/7.29.0" "-"

Flexible replica counts, version upgrades, etc.:
Edit deployment.yaml:
replicas: 3
Then run: kubectl  apply  -f  deployment.yaml

Or from the command line:
kubectl scale deployment/my-nginx2 --replicas=3
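Version upgrades follow the same pattern: point the deployment at a new image and let it roll the pods over. A sketch (the :1.13 tag is illustrative; set image and rollout are standard kubectl subcommands):

```shell
# Swap the container image; pods are replaced per the RollingUpdate strategy.
kubectl set image deployment/my-nginx2 nginx2=daocloud.io/library/nginx:1.13
kubectl rollout status deployment/my-nginx2
kubectl rollout undo deployment/my-nginx2   # roll back to the previous revision
```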

5. Filtering with labels:
In Kubernetes, the Controller Manager creates Pods, the Kubelet runs them, and the Scheduler assigns
Pods to specific Nodes: it watches the pod task list through the interface the API Server provides, picks up
pods awaiting scheduling, scores the Nodes against a series of predicate and priority policies, and sends each Pod to the highest-scoring Node.
If a pod sets nodeName it skips the Scheduler entirely and goes straight to that node; a nodeSelector instead restricts the Scheduler to nodes carrying the matching labels.

Set labels on a resource:
kubectl label node 127.0.0.1 env=test
kubectl get node 127.0.0.1 --show-labels

Specify nodeName or nodeSelector in the yaml file (they are pod-spec fields, siblings of containers):
spec:
  replicas: 1
  template:
    metadata:
    spec:
      nodeName: slave1     # schedule directly onto the node slave1
      # or
      nodeSelector:
        env: test          # schedule onto a node labeled env=test
      containers:
      ...
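For reference, a complete manifest with nodeSelector at the correct level of the pod spec (a sketch reusing the env=test label set above; the names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx-on-test
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      nodeSelector:
        env: test            # only nodes carrying the label env=test are eligible
      containers:
      - name: nginx
        image: daocloud.io/library/nginx
        ports:
        - containerPort: 80
```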

Labels are also what establish the relationships between objects: for example, the Services above select their Pods through labels.
