03 Installing Istio
Istio supports installing its control plane on different platforms, such as Kubernetes, Mesos, and virtual machines.
This course uses Kubernetes as the basis for showing how to install Istio in a cluster (istio-1.0.6 requires Kubernetes 1.11 or later).
You can set up an Istio environment locally or on a public cloud, or directly use a managed service on a public cloud platform that already integrates Istio.
3.1 Setting up an Istio environment locally
3.1.1 Kubernetes cluster environment
Many tools can stand up a local Kubernetes cluster, for example Minikube and kubeadm; this course uses kubeadm to install the cluster.
kubeadm is a tool that provides the kubeadm init and kubeadm join commands as a best-practice fast path for creating a Kubernetes cluster.
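As a rough sketch of that workflow (the pod network CIDR below is an illustrative placeholder, and the token and hash must be taken from the kubeadm init output):
# On the master node: initialize the control plane
kubeadm init --apiserver-advertise-address=172.38.22.128 --pod-network-cidr=10.244.0.0/16
# Copy the admin kubeconfig so kubectl works for the current user
mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config
# On each worker node: join the cluster with the token printed by kubeadm init
kubeadm join 172.38.22.128:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>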
Prepare the machines
Two Ubuntu virtual machines (the transcripts below show Ubuntu 22.04), with the following addresses:
# Check the network
ping 172.38.22.128
ping 172.38.22.129
# Check the VM specs; 2 CPUs and 2 GB of RAM are required
top
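If you prefer a non-interactive check instead of top (a small sketch using standard Linux tools):
# CPU count (should be at least 2)
nproc
# Memory (should be at least 2G)
free -h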
# Install Docker
root@ubuntu-node:~# apt install docker.io
root@ubuntu-node:~# docker version
Client:
Version: 24.0.5
API version: 1.43
Go version: go1.20.3
Git commit: 24.0.5-0ubuntu1~22.04.1
Built: Mon Aug 21 19:50:14 2023
OS/Arch: linux/amd64
Context: default
Server:
Engine:
Version: 24.0.5
API version: 1.43 (minimum version 1.12)
Go version: go1.20.3
Git commit: 24.0.5-0ubuntu1~22.04.1
Built: Mon Aug 21 19:50:14 2023
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.2
GitCommit:
runc:
Version: 1.1.7-0ubuntu1~22.04.2
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
root@ubuntu-node:~# docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1
root@ubuntu-node:~#
# Configure the hosts file
root@ubuntu-master:~# vim /etc/hosts
root@ubuntu-master:~# cat /etc/hosts
127.0.0.1 localhost
# 127.0.1.1 ubuntu
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.38.22.128 ubuntu-master
172.38.22.129 ubuntu-node
# Test network connectivity
root@ubuntu-master:~# ping ubuntu-node
PING ubuntu-node (172.38.22.129) 56(84) bytes of data.
64 bytes from ubuntu-node (172.38.22.129): icmp_seq=1 ttl=64 time=1.75 ms
64 bytes from ubuntu-node (172.38.22.129): icmp_seq=2 ttl=64 time=0.530 ms
64 bytes from ubuntu-node (172.38.22.129): icmp_seq=3 ttl=64 time=0.601 ms
root@ubuntu-node:~# ping ubuntu-master
PING ubuntu-master (172.38.22.128) 56(84) bytes of data.
64 bytes from ubuntu-master (172.38.22.128): icmp_seq=1 ttl=64 time=0.575 ms
64 bytes from ubuntu-master (172.38.22.128): icmp_seq=2 ttl=64 time=0.798 ms
64 bytes from ubuntu-master (172.38.22.128): icmp_seq=3 ttl=64 time=0.544 ms
--------------------------------------------------------------------------
Verify the Kubernetes installation
# Check the cluster: run kubectl get nodes on the master node
root@k8s-master-9:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-9 Ready control-plane 7h53m v1.28.8
k8s-node-13 Ready <none> 6h34m v1.28.8
root@k8s-master-9:~#
# Watch the worker node's status until it becomes Ready: kubectl get nodes -w
root@k8s-master-9:~# kubectl get nodes -w
NAME STATUS ROLES AGE VERSION
k8s-master-9 Ready control-plane 8h v1.28.8
k8s-node-13 Ready <none> 6h40m v1.28.8
Note: there are many ways to install a Kubernetes cluster; feel free to use whichever method you are familiar with. This only describes the cluster environment used in this course.
# Check the system pods: kubectl get pods -n kube-system
'-n' specifies the Kubernetes namespace that follows it
root@k8s-master-9:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-h2glg 1/1 Running 1 (9m46s ago) 7h52m
coredns-66f779496c-h8zr7 1/1 Running 1 (9m46s ago) 7h52m
etcd-k8s-master-9 1/1 Running 1 (9m46s ago) 7h52m
kube-apiserver-k8s-master-9 1/1 Running 1 (9m46s ago) 7h52m
kube-controller-manager-k8s-master-9 1/1 Running 1 (9m46s ago) 7h52m
kube-proxy-7vpxd 1/1 Running 1 (9m46s ago) 7h52m
kube-proxy-97555 1/1 Running 1 (8m48s ago) 6h33m
kube-scheduler-k8s-master-9 1/1 Running 1 (9m46s ago) 7h52m
root@k8s-master-9:~#
# Install the Istio operator CRD
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/istio-operator/crds# kubectl apply -f crd-operator.yaml
customresourcedefinition.apiextensions.k8s.io/istiooperators.install.istio.io created
Installing the Helm package
1. Download a pre-built Helm release from one of the addresses below, then add helm to your PATH:
https://github.com/helm/helm/releases
https://github.com/helm/helm/archive/refs/tags/v3.14.4.tar.gz
https://github.com/helm/helm/releases/latest
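For example, to fetch a specific release directly (a sketch assuming Helm v3.14.3 on linux/amd64; get.helm.sh is where the project publishes its release tarballs):
wget https://get.helm.sh/helm-v3.14.3-linux-amd64.tar.gz
tar -zxvf helm-v3.14.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version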
Installing Istio with the Helm package manager
# Helm website: https://helm.sh/
# Helm installation docs: https://helm.sh/zh/docs/intro/install/
# Steps for releasing a chart:
https://helm.sh/zh/docs/howto/chart_releaser_action/
# Installation steps:
1. Download the package helm-v3.14.3-linux-amd64.tar.gz
root@k8s-master-8:~# ls
calico-3.27.3.tar.gz cri-containerd-cni-1.7.14-linux-amd64.tar.gz helm-v3.14.3-linux-amd64.tar.gz ip_forward~ nginx-deploy.yaml snap
2. Extract it
root@k8s-master-8:~# tar -zxvf helm-v3.14.3-linux-amd64.tar.gz
linux-amd64/
linux-amd64/LICENSE
linux-amd64/README.md
linux-amd64/helm
root@k8s-master-8:~# ls
calico-3.27.3.tar.gz cri-containerd-cni-1.7.14-linux-amd64.tar.gz helm-v3.14.3-linux-amd64.tar.gz ip_forward~ linux-amd64 nginx-deploy.yaml snap
root@k8s-master-8:~# ls linux-amd64/
helm LICENSE README.md
3. Find the helm binary in the extracted directory and move it to the desired location (mv linux-amd64/helm /usr/local/bin/helm)
root@k8s-master-8:~# mv linux-amd64/helm /usr/local/bin/helm
root@k8s-master-8:~#
4. Once Helm is installed, you can add a chart repository. Look up available Helm chart repositories on Artifact Hub.
root@k8s-master-8:~# helm repo add bitnami https://charts.bitnami.com/bitnami
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
"bitnami" has been added to your repositories
root@k8s-master-8:~#
Install and uninstall (a concrete example follows this list)
a. Install
helm install <release-name> <chart> -f ./values.yaml --namespace <namespace>
b. List
helm list -n <namespace>
c. Uninstall
helm uninstall <release-name> -n <namespace>
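A concrete run of those three commands might look like this (a sketch: the release name my-nginx, the bitnami/nginx chart, and the web namespace are illustrative, not from the course environment):
# Install a release named my-nginx into the web namespace
helm install my-nginx bitnami/nginx --namespace web --create-namespace
# List the releases in that namespace
helm list -n web
# Uninstall the release
helm uninstall my-nginx -n web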
5. Once the repository is added, you can list the charts available to install:
root@k8s-master-8:~# helm search repo bitnami
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/airflow 18.0.4 2.9.0 Apache Airflow is a tool to express and execute...
bitnami/apache 11.0.2 2.4.59 Apache HTTP Server is an open-source HTTP serve...
bitnami/apisix 3.0.2 3.8.0 Apache APISIX is high-performance, real-time AP...
bitnami/appsmith 3.1.1 1.20.0 Appsmith is an open source platform for buildin...
bitnami/argo-cd 6.0.9 2.10.6 Argo CD is a continuous delivery tool for Kuber...
bitnami/argo-workflows 8.0.5 3.5.5 Argo Workflows is meant to orchestrate Kubernet...
bitnami/aspnet-core 6.0.2 8.0.4 ASP.NET Core is an open-source framework for we...
bitnami/cassandra 11.1.0 4.1.4 Apache Cassandra is an open source distributed ...
bitnami/cert-manager 1.1.1 1.14.4 cert-manager is a Kubernetes add-on to automate...
bitnami/clickhouse 6.0.2 24.3.2 ClickHouse is an open-source column-oriented OL...
bitnami/common 2.19.1 2.19.1 A Library Helm Chart for grouping common logic ...
bitnami/concourse 4.0.0 7.11.2 Concourse is an automation system written in Go...
bitnami/consul 11.1.0 1.18.1 HashiCorp Consul is a tool for discovering and ...
bitnami/contour 17.0.5 1.28.3 Contour is an open source Kubernetes ingress co...
bitnami/contour-operator 4.2.1 1.24.0 DEPRECATED The Contour Operator extends the Kub...
bitnami/dataplatform-bp2 12.0.5 1.0.1 DEPRECATED This Helm chart can be used for the ...
bitnami/deepspeed 2.0.2 0.14.0 DeepSpeed is deep learning software suite for e...
bitnami/discourse 13.0.1 3.2.1 Discourse is an open source discussion platform...
bitnami/dokuwiki 16.0.2 20240206.1.0 DokuWiki is a standards-compliant wiki optimize...
bitnami/drupal 18.0.2 10.2.5 Drupal is one of the most versatile open source...
bitnami/ejbca 14.0.1 8.2.0-1 EJBCA is an enterprise class PKI Certificate Au...
bitnami/elasticsearch 20.0.4 8.13.2 Elasticsearch is a distributed search and analy...
bitnami/etcd 10.0.3 3.5.13 etcd is a distributed key-value store designed ...
bitnami/external-dns 7.1.2 0.14.1 ExternalDNS is a Kubernetes addon that configur...
bitnami/flink 1.0.1 1.19.0 Apache Flink is a framework and distributed pro...
bitnami/fluent-bit 2.0.1 3.0.2 Fluent Bit is a Fast and Lightweight Log Proces...
bitnami/fluentd 6.1.1 1.16.5 Fluentd collects events from various data sourc...
bitnami/flux 2.1.0 1.2.5 Source Controller is a component of Flux. Flux ...
bitnami/geode 1.1.8 1.15.1 DEPRECATED Apache Geode is a data management pl...
bitnami/ghost 20.0.2 5.82.1 Ghost is an open source publishing platform des...
bitnami/gitea 2.0.3 1.21.10 Gitea is a lightweight code hosting solution. W...
bitnami/grafana 10.0.7 10.4.2 Grafana is an open source metric analytics and ...
bitnami/grafana-loki 3.2.0 2.9.7 Grafana Loki is a horizontally scalable, highly...
bitnami/grafana-mimir 1.0.2 2.12.0 Grafana Mimir is an open source, horizontally s...
bitnami/grafana-operator 4.0.3 5.8.1 Grafana Operator is a Kubernetes operator that ...
bitnami/grafana-tempo 3.0.3 2.4.1 Grafana Tempo is a distributed tracing system t...
bitnami/haproxy 1.0.4 2.9.7 HAProxy is a TCP proxy and a HTTP reverse proxy...
bitnami/haproxy-intel 0.2.11 2.7.1 DEPRECATED HAProxy for Intel is a high-performa...
bitnami/harbor 21.1.1 2.10.2 Harbor is an open source trusted cloud-native r...
bitnami/influxdb 6.0.6 2.7.6 InfluxDB(TM) is an open source time-series data...
bitnami/jaeger 2.0.1 1.56.0 Jaeger is a distributed tracing system. It is u...
bitnami/jasperreports 18.2.5 8.2.0 DEPRECATED JasperReports Server is a stand-alon...
bitnami/jenkins 13.0.0 2.440.2 Jenkins is an open source Continuous Integratio...
bitnami/joomla 19.0.1 5.0.3 Joomla! is an award winning open source CMS pla...
bitnami/jupyterhub 7.0.3 4.1.5 JupyterHub brings the power of notebooks to gro...
bitnami/kafka 28.0.4 3.7.0 Apache Kafka is a distributed streaming platfor...
bitnami/keycloak 21.0.0 24.0.2 Keycloak is a high performance Java-based ident...
bitnami/kiam 2.0.2 4.2.0 kiam is a proxy that captures AWS Metadata API ...
bitnami/kibana 11.0.4 8.13.2 Kibana is an open source, browser based analyti...
bitnami/kong 12.0.2 3.6.1 Kong is an open source Microservice API gateway...
bitnami/kube-prometheus 9.0.4 0.73.1 Prometheus Operator provides easy monitoring de...
bitnami/kube-state-metrics 4.0.3 2.12.0 kube-state-metrics is a simple service that lis...
bitnami/kubeapps 15.0.2 2.10.0 Kubeapps is a web-based UI for launching and ma...
bitnami/kuberay 1.0.0 1.0.0 KubeRay is a Kubernetes operator for deploying ...
bitnami/kubernetes-event-exporter 3.0.3 1.7.0 Kubernetes Event Exporter makes it easy to expo...
bitnami/logstash 6.0.3 8.13.2 Logstash is an open source data processing engi...
bitnami/magento 26.0.1 2.4.6 Magento is a powerful open source e-commerce pl...
bitnami/mariadb 18.0.1 11.3.2 MariaDB is an open source, community-developed ...
bitnami/mariadb-galera 13.0.0 11.3.2 MariaDB Galera is a multi-primary database clus...
bitnami/mastodon 5.0.0 4.2.8 Mastodon is self-hosted social network server b...
bitnami/matomo 7.0.4 5.0.3 Matomo, formerly known as Piwik, is a real time...
bitnami/mediawiki 20.0.2 1.41.1 MediaWiki is the free and open source wiki soft...
bitnami/memcached 7.0.3 1.6.26 Memcached is an high-performance, distributed m...
bitnami/metallb 5.0.3 0.14.3 MetalLB is a load-balancer implementation for b...
bitnami/metrics-server 7.0.3 0.7.1 Metrics Server aggregates resource usage data, ...
bitnami/milvus 7.0.0 2.3.12 Milvus is a cloud-native, open-source vector da...
bitnami/minio 14.1.7 2024.4.6 MinIO(R) is an object storage server, compatibl...
bitnami/mlflow 1.0.2 2.11.3 MLflow is an open-source platform designed to m...
bitnami/mongodb 15.1.4 7.0.8 MongoDB(R) is a relational open source NoSQL da...
bitnami/mongodb-sharded 8.0.5 7.0.8 MongoDB(R) is an open source NoSQL database tha...
bitnami/moodle 21.0.1 4.3.3 Moodle(TM) LMS is an open source online Learnin...
bitnami/multus-cni 2.0.2 4.0.2 Multus is a CNI plugin for Kubernetes clusters....
bitnami/mxnet 3.5.2 1.9.1 DEPRECATED Apache MXNet (Incubating) is a flexi...
bitnami/mysql 10.1.1 8.0.36 MySQL is a fast, reliable, scalable, and easy t...
bitnami/nats 8.0.4 2.10.14 NATS is an open source, lightweight and high-pe...
bitnami/nginx 16.0.3 1.25.4 NGINX Open Source is a web server that can be a...
bitnami/nginx-ingress-controller 11.1.0 1.10.0 NGINX Ingress Controller is an Ingress controll...
bitnami/nginx-intel 2.1.15 0.4.9 DEPRECATED NGINX Open Source for Intel is a lig...
bitnami/node 19.1.7 16.18.0 DEPRECATED Node.js is a runtime environment bui...
bitnami/node-exporter 4.0.3 1.7.0 Prometheus exporter for hardware and OS metrics...
bitnami/oauth2-proxy 5.0.2 7.6.0 A reverse proxy and static file server that pro...
bitnami/odoo 26.0.0 17.0.20240305 Odoo is an open source ERP and CRM platform, fo...
bitnami/opencart 18.0.1 4.0.2-3 OpenCart is free open source ecommerce platform...
bitnami/opensearch 1.0.2 2.13.0 OpenSearch is a scalable open-source solution f...
bitnami/osclass 18.2.6 8.2.0 DEPRECATED Osclass allows you to easily create ...
bitnami/owncloud 12.2.11 10.11.0 DEPRECATED ownCloud is an open source content c...
bitnami/parse 23.0.0 7.0.0 Parse is a platform that enables users to add a...
bitnami/phpbb 18.0.1 3.3.11 phpBB is a popular bulletin board that features...
bitnami/phpmyadmin 16.0.1 5.2.1 phpMyAdmin is a free software tool written in P...
bitnami/pinniped 2.0.4 0.29.0 Pinniped is an identity service provider for Ku...
bitnami/postgresql 15.2.5 16.2.0 PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql-ha 14.0.3 16.2.0 This PostgreSQL cluster solution includes the P...
bitnami/prestashop 21.0.1 8.1.5 PrestaShop is a powerful open source eCommerce ...
bitnami/prometheus 1.0.6 2.51.2 Prometheus is an open source monitoring and ale...
bitnami/pytorch 4.0.2 2.2.2 PyTorch is a deep learning platform that accele...
bitnami/rabbitmq 14.0.0 3.13.1 RabbitMQ is an open source general-purpose mess...
bitnami/rabbitmq-cluster-operator 4.2.4 2.8.0 The RabbitMQ Cluster Kubernetes Operator automa...
bitnami/redis 19.1.0 7.2.4 Redis(R) is an open source, advanced key-value ...
bitnami/redis-cluster 10.0.1 7.2.4 Redis(R) is an open source, scalable, distribut...
bitnami/redmine 28.0.1 5.1.2 Redmine is an open source management applicatio...
bitnami/schema-registry 18.0.3 7.6.1 Confluent Schema Registry provides a RESTful in...
bitnami/sealed-secrets 2.0.2 0.26.2 Sealed Secrets are "one-way" encrypted K8s Secr...
bitnami/seaweedfs 0.1.1 3.64.0 SeaweedFS is a simple and highly scalable distr...
bitnami/solr 9.1.1 9.5.0 Apache Solr is an extremely powerful, open sour...
bitnami/sonarqube 5.0.2 10.4.1 SonarQube(TM) is an open source quality managem...
bitnami/spark 9.0.1 3.5.1 Apache Spark is a high-performance engine for l...
bitnami/spring-cloud-dataflow 28.0.0 2.11.2 Spring Cloud Data Flow is a microservices-based...
bitnami/suitecrm 14.1.1 7.13.4 DEPRECATED SuiteCRM is a completely open source...
bitnami/supabase 4.0.0 0.23.11 Supabase is an open source Firebase alternative...
bitnami/tensorflow-resnet 4.0.3 2.16.1 TensorFlow ResNet is a client utility for use w...
bitnami/thanos 15.0.5 0.34.1 Thanos is a highly available metrics system tha...
bitnami/tomcat 11.0.1 10.1.20 Apache Tomcat is an open-source web server desi...
bitnami/vault 1.0.3 1.16.1 Vault is a tool for securely managing and acces...
bitnami/wavefront 4.4.3 1.13.0 DEPRECATED Wavefront is a high-performance stre...
bitnami/wavefront-adapter-for-istio 2.0.6 0.1.5 DEPRECATED Wavefront Adapter for Istio is an ad...
bitnami/wavefront-hpa-adapter 1.5.2 0.9.10 DEPRECATED Wavefront HPA Adapter for Kubernetes...
bitnami/wavefront-prometheus-storage-adapter 2.3.3 1.0.7 DEPRECATED Wavefront Storage Adapter is a Prome...
bitnami/whereabouts 1.0.2 0.6.3 Whereabouts is a CNI IPAM plugin for Kubernetes...
bitnami/wildfly 19.0.0 31.0.1 Wildfly is a lightweight, open source applicati...
bitnami/wordpress 22.1.7 6.5.2 WordPress is the world's most popular blogging ...
bitnami/wordpress-intel 2.1.31 6.1.1 DEPRECATED WordPress for Intel is the most popu...
bitnami/zookeeper 13.1.1 3.9.2 Apache ZooKeeper provides a reliable, centraliz...
root@k8s-master-8:~#
6. Install a chart
You can install a chart with the helm install command. Helm can find and install charts in several ways, but the simplest is to use the official bitnami charts.
# Make sure we have the latest list of charts
helm repo update
# Install a chart
helm install bitnami/mysql --generate-name
root@k8s-master-8:~# helm repo update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
root@k8s-master-8:~#
Install the chart
root@k8s-master-8:~# helm install bitnami/mysql --generate-name
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME: mysql-1713160432
LAST DEPLOYED: Mon Apr 15 13:53:54 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 10.1.1
APP VERSION: 8.0.36
** Please be patient while the chart is being deployed **
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace default
Services:
echo Primary: mysql-1713160432.default.svc.cluster.local:3306
Execute the following to get the administrator credentials:
echo Username: root
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql-1713160432 -o jsonpath="{.data.mysql-root-password}" | base64 -d)
To connect to your database:
1. Run a pod that you can use as a client:
kubectl run mysql-1713160432-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.36-debian-12-r10 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
2. To connect to primary service (read/write):
mysql -h mysql-1713160432.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- primary.resources
- secondary.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
root@k8s-master-8:~#
In the example above, the bitnami/mysql chart was released under the name mysql-1713160432.
You can get a quick overview of this chart with helm show chart bitnami/mysql, or everything about it with helm show all bitnami/mysql.
Every time you run helm install, a new release is created. The same chart can therefore be installed multiple times in one cluster, and each release can be managed and upgraded independently.
helm install is a powerful command with many capabilities.
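For instance, values can be overridden at install time, and the same chart can be installed twice under different release names (a sketch; the auth.rootPassword key is taken from the bitnami/mysql chart's values, which you can confirm with helm show values bitnami/mysql):
# Install with an explicit release name and a value override
helm install my-mysql bitnami/mysql --set auth.rootPassword=changeme
# Install the same chart again as an independent release
helm install my-mysql-2 bitnami/mysql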
# Show the basic information of this chart (the source of release mysql-1713160432)
root@k8s-master-8:~# helm show chart bitnami/mysql
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
annotations:
category: Database
images: |
- name: mysql
image: docker.io/bitnami/mysql:8.0.36-debian-12-r10
- name: mysqld-exporter
image: docker.io/bitnami/mysqld-exporter:0.15.1-debian-12-r10
- name: os-shell
image: docker.io/bitnami/os-shell:12-debian-12-r18
licenses: Apache-2.0
apiVersion: v2
appVersion: 8.0.36
dependencies:
- name: common
repository: oci://registry-1.docker.io/bitnamicharts
tags:
- bitnami-common
version: 2.x.x
description: MySQL is a fast, reliable, scalable, and easy to use open source relational
database system. Designed to handle mission-critical, heavy-load production applications.
home: https://bitnami.com
icon: https://bitnami.com/assets/stacks/mysql/img/mysql-stack-220x234.png
keywords:
- mysql
- database
- sql
- cluster
- high availability
maintainers:
- name: VMware, Inc.
url: https://github.com/bitnami/charts
name: mysql
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/mysql
version: 10.1.1
root@k8s-master-8:~#
# 获取关于该chart的所有信息
root@k8s-master-8:~# helm show all bitnami/mysql
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
annotations:
category: Database
images: |
- name: mysql
image: docker.io/bitnami/mysql:8.0.36-debian-12-r10
- name: mysqld-exporter
image: docker.io/bitnami/mysqld-exporter:0.15.1-debian-12-r10
- name: os-shell
image: docker.io/bitnami/os-shell:12-debian-12-r18
licenses: Apache-2.0
apiVersion: v2
appVersion: 8.0.36
dependencies:
- name: common
...
# Helm makes it easy to see which charts have been released:
# The helm list (or helm ls) command lists all deployed releases.
root@k8s-master-8:~# helm list
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mysql-1713160432 default 1 2024-04-15 13:53:54.858620904 +0800 CST deployed mysql-10.1.1 8.0.36
root@k8s-master-8:~#
# Uninstall a release
You can uninstall a release with the helm uninstall command
helm uninstall mysql-1713160432
# Check the current releases
root@k8s-master-8:~# helm ls
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mysql-1713160432 default 1 2024-04-15 13:53:54.858620904 +0800 CST deployed mysql-10.1.1 8.0.36
# Uninstall this release (mysql-1713160432)
root@k8s-master-8:~# helm uninstall mysql-1713160432
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
release "mysql-1713160432" uninstalled
# Listing again shows it has been uninstalled
root@k8s-master-8:~# helm ls
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
root@k8s-master-8:~#
This command uninstalls mysql-1713160432 from Kubernetes, deleting all resources associated with that release (Services, Deployments, Pods, and so on) and even the release history.
If you pass the --keep-history flag to helm uninstall, Helm keeps the release history, and you can still inspect the release with:
helm status mysql-1713160432
root@k8s-master-8:~# helm status mysql-1713160432
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
Error: release: not found
root@k8s-master-8:~#
Because --keep-history makes Helm keep tracking your releases (even after you uninstall them), you can audit the cluster's history and even roll a release back with helm rollback.
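A quick sketch of that flow, reusing the release name from above (revision numbers come from helm history):
# Uninstall but keep the release history
helm uninstall mysql-1713160432 --keep-history
# Inspect the recorded revisions
helm history mysql-1713160432
# Roll back to revision 1
helm rollback mysql-1713160432 1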
Viewing help
For more useful information about Helm commands, run helm help, or append the -h flag to any command:
root@k8s-master-8:~# helm help
root@k8s-master-8:~# helm get -h
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
This command consists of multiple subcommands which can be used to
get extended information about the release, including:
- The values used to generate the release
- The generated manifest file
- The notes provided by the chart of the release
- The hooks associated with the release
- The metadata of the release
Usage:
helm get [command]
Available Commands:
all download all information for a named release
hooks download all hooks for a named release
manifest download the manifest for a named release
metadata This command fetches metadata for a given release
notes download the notes for a named release
values download the values file for a named release
Flags:
-h, --help help for get
Global Flags:
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "/root/.config/helm/registry/config.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
Use "helm get [command] --help" for more information about a command.
root@k8s-master-8:~#
---------------------------------------------------------------------------------------
root@k8s-master-8:~# helm help
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
The Kubernetes package manager
Common actions for Helm:
- helm search: search for charts
- helm pull: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Environment variables:
| Name | Description |
|------------------------------------|------------------------------------------------------------------------------------------------------------|
| $HELM_CACHE_HOME | set an alternative location for storing cached files. |
| $HELM_CONFIG_HOME | set an alternative location for storing Helm configuration. |
| $HELM_DATA_HOME | set an alternative location for storing Helm data. |
| $HELM_DEBUG | indicate whether or not Helm is running in Debug mode |
| $HELM_DRIVER | set the backend storage driver. Values are: configmap, secret, memory, sql. |
| $HELM_DRIVER_SQL_CONNECTION_STRING | set the connection string the SQL storage driver should use. |
| $HELM_MAX_HISTORY | set the maximum number of helm release history. |
| $HELM_NAMESPACE | set the namespace used for the helm operations. |
| $HELM_NO_PLUGINS | disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins. |
| $HELM_PLUGINS | set the path to the plugins directory |
| $HELM_REGISTRY_CONFIG | set the path to the registry config file. |
| $HELM_REPOSITORY_CACHE | set the path to the repository cache directory |
| $HELM_REPOSITORY_CONFIG | set the path to the repositories file. |
| $KUBECONFIG | set an alternative Kubernetes configuration file (default "~/.kube/config") |
| $HELM_KUBEAPISERVER | set the Kubernetes API Server Endpoint for authentication |
| $HELM_KUBECAFILE | set the Kubernetes certificate authority file. |
| $HELM_KUBEASGROUPS | set the Groups to use for impersonation using a comma-separated list. |
| $HELM_KUBEASUSER | set the Username to impersonate for the operation. |
| $HELM_KUBECONTEXT | set the name of the kubeconfig context. |
| $HELM_KUBETOKEN | set the Bearer KubeToken used for authentication. |
| $HELM_KUBEINSECURE_SKIP_TLS_VERIFY | indicate if the Kubernetes API server's certificate validation should be skipped (insecure) |
| $HELM_KUBETLS_SERVER_NAME | set the server name used to validate the Kubernetes API server certificate |
| $HELM_BURST_LIMIT | set the default burst limit in the case the server contains many CRDs (default 100, -1 to disable) |
| $HELM_QPS | set the Queries Per Second in cases where a high number of calls exceed the option for higher burst values |
Helm stores cache, configuration, and data based on the following configuration order:
- If a HELM_*_HOME environment variable is set, it will be used
- Otherwise, on systems supporting the XDG base directory specification, the XDG variables will be used
- When no other location is set a default location will be used based on the operating system
By default, the default directories depend on the Operating System. The defaults are listed below:
| Operating System | Cache Path | Configuration Path | Data Path |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
Usage:
helm [command]
Available Commands:
completion generate autocompletion scripts for the specified shell
create create a new chart with the given name
dependency manage a chart's dependencies
env helm client environment information
get download extended information of a named release
help Help about any command
history fetch release history
install install a chart
lint examine a chart for possible issues
list list releases
package package a chart directory into a chart archive
plugin install, list, or uninstall Helm plugins
pull download a chart from a repository and (optionally) unpack it in local directory
push push a chart to remote
registry login to or logout from a registry
repo add, list, remove, update, and index chart repositories
rollback roll back a release to a previous revision
search search for a keyword in charts
show show information of a chart
status display the status of the named release
template locally render templates
test run tests for a release
uninstall uninstall a release
upgrade upgrade a release
verify verify that a chart at the given path has been signed and is valid
version print the client version information
Flags:
--burst-limit int client-side default throttling limit (default 100)
--debug enable verbose output
-h, --help help for helm
--kube-apiserver string the address and the port for the Kubernetes API server
--kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups.
--kube-as-user string username to impersonate for the operation
--kube-ca-file string the certificate authority file for the Kubernetes API server connection
--kube-context string name of the kubeconfig context to use
--kube-insecure-skip-tls-verify if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
--kube-tls-server-name string server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
--kube-token string bearer token used for authentication
--kubeconfig string path to the kubeconfig file
-n, --namespace string namespace scope for this request
--qps float32 queries per second used when communicating with the Kubernetes API, not including bursting
--registry-config string path to the registry config file (default "/root/.config/helm/registry/config.json")
--repository-cache string path to the file containing cached repository indexes (default "/root/.cache/helm/repository")
--repository-config string path to the file containing repository names and URLs (default "/root/.config/helm/repositories.yaml")
Use "helm [command] --help" for more information about a command.
root@k8s-master-8:~#
Installing the Istio environment with Helm; work from the directory where Istio was extracted
1. Create the istio-system namespace for the Istio components
kubectl create namespace istio-system
2. Check the downloaded packages
root@k8s-master-8:~# ls
calico-3.27.3.tar.gz go1.22.2.linux-amd64.tar.gz istio-1.0.6-linux.tar.gz operator-sdk-1.34.1.tar.gz
cri-containerd-cni-1.7.14-linux-amd64.tar.gz helm-v3.14.3-linux-amd64.tar.gz kubebuilder-3.14.1.tar.gz snap
root@k8s-master-8:~#
3. Create a directory to hold the software
root@k8s-master-8:~# mkdir -p /home/tools/
4. Extract the istio-1.0.6-linux.tar.gz package into the /home/tools/ directory
tar -zxf istio-1.0.6-linux.tar.gz -C /home/tools/
5. Enter the Istio installation directory: cd /home/tools/istio-1.0.6
root@k8s-master-8:/home/tools/istio-1.0.6# ll
total 48
drwxr-xr-x 6 root root 4096 Feb 9 2019 ./
drwxr-xr-x 3 root root 4096 Apr 20 09:23 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 bin/
drwxr-xr-x 6 root root 4096 Feb 9 2019 install/
-rw-r--r-- 1 root root 648 Feb 9 2019 istio.VERSION
-rw-r--r-- 1 root root 11343 Feb 9 2019 LICENSE
-rw-r--r-- 1 root root 5817 Feb 9 2019 README.md
drwxr-xr-x 12 root root 4096 Feb 9 2019 samples/
drwxr-xr-x 8 root root 4096 Feb 9 2019 tools/
root@k8s-master-8:/home/tools/istio-1.0.6#
6. Set the Istio environment variables for /home/tools/istio-1.0.6/bin
export ISTIO_HOME={parent_dir}/istio-1.0.6
export PATH=$PATH:$ISTIO_HOME/bin
root@k8s-master-8:/home/tools/istio-1.0.6/bin# vim /etc/profile
root@k8s-master-8:/home/tools/istio-1.0.6/bin# source /etc/profile
root@k8s-master-8:/home/tools/istio-1.0.6/bin# tail -2 /etc/profile
export ISTIO_HOME={parent_dir}/istio-1.0.6
export PATH=$PATH:$ISTIO_HOME/bin
root@k8s-master-8:/home/tools/istio-1.0.6/bin#
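To confirm the variables took effect and istioctl is on the PATH (a quick check):
echo $ISTIO_HOME
which istioctl
istioctl version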
7. Change into the extracted Istio directory and create the istio-system namespace for the Istio components
kubectl create namespace istio-system
8. Install the Istio base chart, which contains the cluster-wide resources used by the Istio control plane
helm install istio-base -n istio-system manifests/charts/base
9. Apply the CRDs: cd /home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates,
locate the 'crds.yaml' file, and first run kubectl apply -f crds.yaml
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# ll
total 64
drwxr-xr-x 2 root root 4096 Feb 9 2019 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1075 Feb 9 2019 _affinity.tpl
-rw-r--r-- 1 root root 5031 Feb 9 2019 configmap.yaml
-rw-r--r-- 1 root root 22109 Feb 9 2019 crds.yaml
-rw-r--r-- 1 root root 868 Feb 9 2019 _helpers.tpl
-rw-r--r-- 1 root root 919 Feb 9 2019 install-custom-resources.sh.tpl
-rw-r--r-- 1 root root 8928 Feb 9 2019 sidecar-injector-configmap.yaml
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates#
root@k8s-master-9:/home/tools/istio-1.21.1# pwd
/home/tools/istio-1.21.1
root@k8s-master-9:/home/tools/istio-1.21.1#
root@k8s-master-9:/home/tools/istio-1.21.1# kubectl create namespace istio-system
2. Install the Istio base chart, which contains the cluster-wide resources used by the Istio control plane
helm install istio-base -n istio-system manifests/charts/base
root@k8s-master-9:/home/tools/istio-1.21.1# helm install istio-base -n istio-system manifests/charts/base
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME: istio-base
LAST DEPLOYED: Wed Apr 17 14:15:22 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Istio base successfully installed!
To learn more about the release, try:
$ helm status istio-base
$ helm get all istio-base
root@k8s-master-9:/home/tools/istio-1.21.1#
3. Install Istio discovery, which deploys the istiod service
helm install -n istio-system istio-21 manifests/charts/istio-control/istio-discovery
root@k8s-master-9:/home/tools/istio-1.21.1# helm install -n istio-system istio-21 manifests/charts/istio-control/istio-discovery
4. Install the Istio ingress gateway
helm install -n istio-system istio-ingress manifests/charts/gateways/istio-ingress
root@k8s-master-9:/home/tools/istio-1.21.1# helm install -n istio-system istio-ingress manifests/charts/gateways/istio-ingress
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME: istio-ingress
LAST DEPLOYED: Wed Apr 17 14:30:54 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-master-9:/home/tools/istio-1.21.1#
5. Install the Istio egress gateway
helm install -n istio-system istio-egress manifests/charts/gateways/istio-egress
root@k8s-master-9:/home/tools/istio-1.21.1# helm install -n istio-system istio-egress manifests/charts/gateways/istio-egress
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /etc/kubernetes/admin.conf
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /etc/kubernetes/admin.conf
NAME: istio-egress
LAST DEPLOYED: Wed Apr 17 14:34:47 2024
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-master-9:/home/tools/istio-1.21.1#
6. Install the Istio CNI plugin
This plugin is optional; in some environments installing it can cause pod creation to fail
The install command is:
helm install istio-cni -n kube-system manifests/charts/istio-cni
Installing the addons
1. Install Prometheus
kubectl apply -f samples/addons/prometheus.yaml -n istio-system
root@k8s-master-9:/home/tools/istio-1.21.1# kubectl apply -f samples/addons/prometheus.yaml -n istio-system
serviceaccount/prometheus created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created
deployment.apps/prometheus created
root@k8s-master-9:/home/tools/istio-1.21.1#
2. Install Jaeger
kubectl apply -f samples/addons/jaeger.yaml -n istio-system
root@k8s-master-9:/home/tools/istio-1.21.1# kubectl apply -f samples/addons/jaeger.yaml -n istio-system
deployment.apps/jaeger unchanged
service/tracing unchanged
service/zipkin unchanged
service/jaeger-collector unchanged
root@k8s-master-9:/home/tools/istio-1.21.1#
3. Install Grafana
kubectl apply -f samples/addons/grafana.yaml -n istio-system
root@k8s-master-9:/home/tools/istio-1.21.1# kubectl apply -f samples/addons/grafana.yaml -n istio-system
serviceaccount/grafana unchanged
configmap/grafana unchanged
service/grafana unchanged
deployment.apps/grafana configured
configmap/istio-grafana-dashboards configured
configmap/istio-services-grafana-dashboards configured
root@k8s-master-9:/home/tools/istio-1.21.1#
4. Install Kiali
kubectl apply -f samples/addons/kiali.yaml -n istio-system
root@k8s-master-9:/home/tools/istio-1.21.1# kubectl apply -f samples/addons/kiali.yaml -n istio-system
serviceaccount/kiali created
configmap/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrolebinding.rbac.authorization.k8s.io/kiali created
role.rbac.authorization.k8s.io/kiali-controlplane created
rolebinding.rbac.authorization.k8s.io/kiali-controlplane created
service/kiali created
deployment.apps/kiali created
root@k8s-master-9:/home/tools/istio-1.21.1#
Accessing Kiali
# First expose the Kiali port with a port-forward
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001
Then open Kiali in a browser
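Equivalently, you can port-forward the kiali Service instead of looking up the pod name (a usage note; the service name and port match the kiali.yaml addon manifest):
kubectl -n istio-system port-forward svc/kiali 20001:20001
# then open http://localhost:20001 in a browser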
3.1.2 Installing Istio
Download the release from the Istio releases page https://github.com/istio/istio/releases/tag/1.0.6 and extract it (I use the fairly stable 1.0.6 release, copied to the master node; the example uses the Linux package istio-1.0.6-linux.tar.gz)
The latest Istio releases can be downloaded from: https://github.com/istio/istio/releases/
# Istio getting-started docs: https://istio.io/latest/docs/setup/getting-started/
# Here I use istio-1.21.1-linux.tar.gz
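Alternatively, Istio's documented download script fetches and unpacks a pinned release in one step (TARGET_ARCH is optional):
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.21.1 TARGET_ARCH=x86_64 sh -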
# Extract
root@k8s-master-9:/home/tools# tar -zxvf istio-1.21.1-linux-amd64.tar.gz
istio-1.21.1/
istio-1.21.1/LICENSE
istio-1.21.1/README.md
istio-1.21.1/bin/
root@k8s-master-9:/home/tools# ls
istio-1.21.1 istio-1.21.1-linux-amd64.tar.gz
root@k8s-master-9:/home/tools# ll
total 25100
drwxr-xr-x 3 root root 4096 Apr 14 05:49 ./
drwxr-xr-x 4 root root 4096 Apr 14 05:47 ../
drwxr-x--- 6 root root 4096 Apr 6 00:47 istio-1.21.1/
-rw-r--r-- 1 root root 25689584 Apr 14 05:46 istio-1.21.1-linux-amd64.tar.gz
# Enter the extracted directory
root@k8s-master-9:/home/tools# cd istio-1.21.1/
# List the directories and files
# 1. istio-1.21.1
root@k8s-master-9:/home/tools/istio-1.21.1# ll
total 48
drwxr-x--- 6 root root 4096 Apr 6 00:47 ./
drwxr-xr-x 3 root root 4096 Apr 14 05:49 ../
drwxr-x--- 2 root root 4096 Apr 6 00:47 bin/
-rw-r--r-- 1 root root 11357 Apr 6 00:47 LICENSE
drwxr-xr-x 5 root root 4096 Apr 6 00:47 manifests/
-rw-r----- 1 root root 956 Apr 6 00:47 manifest.yaml
-rw-r--r-- 1 root root 6615 Apr 6 00:47 README.md
drwxr-xr-x 25 root root 4096 Apr 6 00:47 samples/
drwxr-xr-x 3 root root 4096 Apr 6 00:47 tools/
root@k8s-master-9:/home/tools/istio-1.21.1#
# Locate the CRD YAML file
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds# pwd
/home/tools/istio-1.21.1/manifests/charts/base/crds
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds# ll
total 400
drwxr-xr-x 2 root root 4096 Apr 6 00:47 ./
drwxr-xr-x 5 root root 4096 Apr 6 00:47 ../
-rw-r--r-- 1 root root 399275 Apr 6 00:47 crd-all.gen.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds#
# Apply the CRD YAML file: kubectl apply -f crd-all.gen.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds# kubectl apply -f crd-all.gen.yaml
customresourcedefinition.apiextensions.k8s.io/wasmplugins.extensions.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/proxyconfigs.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/sidecars.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/workloadentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/workloadgroups.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/authorizationpolicies.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/peerauthentications.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/requestauthentications.security.istio.io created
customresourcedefinition.apiextensions.k8s.io/telemetries.telemetry.istio.io created
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds#
# View the created CRDs (CRDs are cluster-scoped, so the -n flag below is ignored)
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds# kubectl get crd -n istio-system
NAME CREATED AT
authorizationpolicies.security.istio.io 2024-04-13T22:32:23Z
destinationrules.networking.istio.io 2024-04-13T22:32:22Z
envoyfilters.networking.istio.io 2024-04-13T22:32:23Z
gateways.networking.istio.io 2024-04-13T22:32:23Z
peerauthentications.security.istio.io 2024-04-13T22:32:23Z
proxyconfigs.networking.istio.io 2024-04-13T22:32:23Z
requestauthentications.security.istio.io 2024-04-13T22:32:23Z
serviceentries.networking.istio.io 2024-04-13T22:32:23Z
sidecars.networking.istio.io 2024-04-13T22:32:23Z
telemetries.telemetry.istio.io 2024-04-13T22:32:24Z
virtualservices.networking.istio.io 2024-04-13T22:32:23Z
wasmplugins.extensions.istio.io 2024-04-13T22:32:22Z
workloadentries.networking.istio.io 2024-04-13T22:32:23Z
workloadgroups.networking.istio.io 2024-04-13T22:32:23Z
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds#
# Count how many CRDs were created (the total below includes the header line)
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds# kubectl get crd -n istio-system | wc -l
15
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/charts/base/crds#
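To count only the Istio CRDs and skip the header line (a small refinement; --no-headers is a standard kubectl flag and 'istio.io' matches the API groups listed above):
kubectl get crd --no-headers | grep -c 'istio.io'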
# Install the tree command
root@k8s-master-9:/home/tools/istio-1.21.1/bin# apt install tree -y
------------------------------------------------------------------------------
# 2. istio-1.0.6
root@k8s-master-10:/opt/istio-1.0.6# ll
total 48
drwxr-xr-x 6 root root 4096 Feb 9 2019 ./
drwxr-xr-x 6 root root 4096 Apr 9 14:36 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 bin/
drwxr-xr-x 6 root root 4096 Feb 9 2019 install/
-rw-r--r-- 1 root root 648 Feb 9 2019 istio.VERSION
-rw-r--r-- 1 root root 11343 Feb 9 2019 LICENSE
-rw-r--r-- 1 root root 5817 Feb 9 2019 README.md
drwxr-xr-x 12 root root 4096 Feb 9 2019 samples/
drwxr-xr-x 8 root root 4096 Feb 9 2019 tools/
root@k8s-master-10:/opt/istio-1.0.6#
# The CRD file: install/kubernetes/helm/istio/templates/
root@k8s-master-10:/opt/istio-1.0.6# cd install/kubernetes/helm/istio/templates/
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes/helm/istio/templates# pwd
/opt/istio-1.0.6/install/kubernetes/helm/istio/templates
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes/helm/istio/templates# ll
total 64
drwxr-xr-x 2 root root 4096 Feb 9 2019 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1075 Feb 9 2019 _affinity.tpl
-rw-r--r-- 1 root root 5031 Feb 9 2019 configmap.yaml
-rw-r--r-- 1 root root 22109 Feb 9 2019 crds.yaml
-rw-r--r-- 1 root root 868 Feb 9 2019 _helpers.tpl
-rw-r--r-- 1 root root 919 Feb 9 2019 install-custom-resources.sh.tpl
-rw-r--r-- 1 root root 8928 Feb 9 2019 sidecar-injector-configmap.yaml
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes/helm/istio/templates#
# Apply the CRD file: kubectl apply -f crds.yaml
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes/helm/istio/templates#
root@k8s-master-8:~/istio-1.0.6/install/kubernetes/helm/istio/templates# pwd
/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates
root@k8s-master-8:~/istio-1.0.6/install/kubernetes/helm/istio/templates# ll
total 64
drwxr-xr-x 2 root root 4096 Apr 14 15:14 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1075 Feb 9 2019 _affinity.tpl
-rw-r--r-- 1 root root 5031 Feb 9 2019 configmap.yaml
-rw-r--r-- 1 root root 22109 Apr 14 15:14 crds.yaml
-rw-r--r-- 1 root root 868 Feb 9 2019 _helpers.tpl
-rw-r--r-- 1 root root 919 Feb 9 2019 install-custom-resources.sh.tpl
-rw-r--r-- 1 root root 8928 Feb 9 2019 sidecar-injector-configmap.yaml
root@k8s-master-8:~/istio-1.0.6/install/kubernetes/helm/istio/templates#
# Before deploying Istio we also need to apply the crds.yaml file. CRDs were introduced earlier:
# a CRD extends Kubernetes with user-defined resource types. The crds.yaml file lives in
'/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates'
# First, run 'kubectl apply -f crds.yaml'
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# pwd
/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# ll
total 64
drwxr-xr-x 2 root root 4096 Apr 15 10:07 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1075 Feb 9 2019 _affinity.tpl
-rw-r--r-- 1 root root 5031 Feb 9 2019 configmap.yaml
-rw-r--r-- 1 root root 22109 Apr 15 10:07 crds.yaml
-rw-r--r-- 1 root root 868 Feb 9 2019 _helpers.tpl
-rw-r--r-- 1 root root 919 Feb 9 2019 install-custom-resources.sh.tpl
-rw-r--r-- 1 root root 8928 Feb 9 2019 sidecar-injector-configmap.yaml
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# kubectl apply -f crds.yaml
# List the installed CRDs
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# kubectl get crds
NAME CREATED AT
apiservers.operator.tigera.io 2024-04-14T22:57:25Z
bgpconfigurations.crd.projectcalico.org 2024-04-14T22:57:24Z
bgpfilters.crd.projectcalico.org 2024-04-14T22:57:24Z
bgppeers.crd.projectcalico.org 2024-04-14T22:57:24Z
blockaffinities.crd.projectcalico.org 2024-04-14T22:57:24Z
caliconodestatuses.crd.projectcalico.org 2024-04-14T22:57:24Z
clusterinformations.crd.projectcalico.org 2024-04-14T22:57:24Z
felixconfigurations.crd.projectcalico.org 2024-04-14T22:57:24Z
globalnetworkpolicies.crd.projectcalico.org 2024-04-14T22:57:24Z
globalnetworksets.crd.projectcalico.org 2024-04-14T22:57:24Z
hostendpoints.crd.projectcalico.org 2024-04-14T22:57:24Z
imagesets.operator.tigera.io 2024-04-14T22:57:25Z
installations.operator.tigera.io 2024-04-14T22:57:25Z
ipamblocks.crd.projectcalico.org 2024-04-14T22:57:24Z
ipamconfigs.crd.projectcalico.org 2024-04-14T22:57:24Z
ipamhandles.crd.projectcalico.org 2024-04-14T22:57:24Z
ippools.crd.projectcalico.org 2024-04-14T22:57:24Z
ipreservations.crd.projectcalico.org 2024-04-14T22:57:24Z
kubecontrollersconfigurations.crd.projectcalico.org 2024-04-14T22:57:24Z
networkpolicies.crd.projectcalico.org 2024-04-14T22:57:25Z
networksets.crd.projectcalico.org 2024-04-14T22:57:25Z
tigerastatuses.operator.tigera.io 2024-04-14T22:57:25Z
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates#
# Locate 'istio-demo.yaml', the quick-install manifest that ships with the release
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes# pwd
/opt/istio-1.0.6/install/kubernetes
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes# ll
total 876
drwxr-xr-x 5 root root 4096 Apr 9 15:37 ./
drwxr-xr-x 6 root root 4096 Feb 9 2019 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 addons/
drwxr-xr-x 3 root root 4096 Feb 9 2019 ansible/
drwxr-xr-x 4 root root 4096 Feb 9 2019 helm/
-rw-r--r-- 1 root root 1369 Feb 9 2019 istio-citadel-plugin-certs.yaml
-rw-r--r-- 1 root root 2274 Feb 9 2019 istio-citadel-standalone.yaml
-rw-r--r-- 1 root root 1809 Feb 9 2019 istio-citadel-with-health-check.yaml
-rw-r--r-- 1 root root 423721 Feb 9 2019 istio-demo-auth.yaml
-rw-r--r-- 1 root root 422195 Apr 9 15:37 istio-demo.yaml
-rw-r--r-- 1 root root 1610 Feb 9 2019 mesh-expansion.yaml
-rw-r--r-- 1 root root 102 Feb 9 2019 namespace.yaml
-rw-r--r-- 1 root root 379 Feb 9 2019 README.md
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes#
# Run the Istio install command: kubectl apply -f istio-demo.yaml
root@k8s-master-10:/opt/istio-1.0.6/install/kubernetes# kubectl apply -f istio-demo.yaml
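After applying the manifest you can watch the control-plane pods come up (a usage note; press Ctrl-C to stop watching):
kubectl get pods -n istio-system -w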
1. Extract: tar -zxf istio-1.0.6-linux.tar.gz
root@k8s-master-8:~# mkdir -p /home/tools/
root@k8s-master-8:~# cd /home/tools/
# Put the 'istio-1.0.6-linux.tar.gz' package into the '/home/tools' directory
root@k8s-master-8:/home/tools# ls
istio-1.0.6-linux.tar.gz
root@k8s-master-8:/home/tools# pwd
/home/tools
# Extract
root@k8s-master-8:/home/tools# tar -zxf istio-1.0.6-linux.tar.gz
root@k8s-master-8:/home/tools#
root@k8s-master-8:/home/tools# ls
istio-1.0.6 istio-1.0.6-linux.tar.gz
2. Enter the istio directory: cd istio-1.0.6
root@k8s-master-8:/home/tools# cd istio-1.0.6
root@k8s-master-8:/home/tools/istio-1.0.6# ll
total 48
drwxr-xr-x 6 root root 4096 Feb 9 2019 ./
drwxr-xr-x 3 root root 4096 Apr 14 15:59 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 bin/
drwxr-xr-x 6 root root 4096 Feb 9 2019 install/
-rw-r--r-- 1 root root 648 Feb 9 2019 istio.VERSION
-rw-r--r-- 1 root root 11343 Feb 9 2019 LICENSE
-rw-r--r-- 1 root root 5817 Feb 9 2019 README.md
drwxr-xr-x 12 root root 4096 Feb 9 2019 samples/
drwxr-xr-x 8 root root 4096 Feb 9 2019 tools/
root@k8s-master-8:/home/tools/istio-1.0.6#
# What each directory contains
1. bin: contains 'istioctl', a client tool for interacting with the Istio service mesh from the command line
root@k8s-master-8:/home/tools/istio-1.0.6# cd bin/
root@k8s-master-8:/home/tools/istio-1.0.6/bin# ls
istioctl
root@k8s-master-8:/home/tools/istio-1.0.6/bin#
2. install: contains the Istio installation scripts and files for the Consul and Kubernetes platforms
root@k8s-master-8:/home/tools/istio-1.0.6# cd install/
root@k8s-master-8:/home/tools/istio-1.0.6/install# ll
total 28
drwxr-xr-x 6 root root 4096 Feb 9 2019 ./
drwxr-xr-x 6 root root 4096 Feb 9 2019 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 consul/
drwxr-xr-x 3 root root 4096 Feb 9 2019 gcp/
drwxr-xr-x 5 root root 4096 Feb 9 2019 kubernetes/ # installation scripts and files
-rw-r--r-- 1 root root 1487 Feb 9 2019 README.md
drwxr-xr-x 2 root root 4096 Feb 9 2019 tools/
root@k8s-master-8:/home/tools/istio-1.0.6/install#
3. istio.VERSION # this configuration file mainly contains version information and environment variables
root@k8s-master-8:/home/tools/istio-1.0.6# cat istio.VERSION
# DO NOT EDIT THIS FILE MANUALLY instead use
# install/updateVersion.sh (see install/README.md)
export CITADEL_HUB="docker.io/istio"
export CITADEL_TAG="1.0.6"
export MIXER_HUB="docker.io/istio"
export MIXER_TAG="1.0.6"
export PILOT_HUB="docker.io/istio"
export PILOT_TAG="1.0.6"
export PROXY_HUB="docker.io/istio"
export PROXY_TAG="1.0.6"
export PROXY_DEBUG=""
export ISTIO_NAMESPACE="istio-system"
export PILOT_DEBIAN_URL="https://storage.googleapis.com/istio-release/releases/1.0.6/deb"
export FORTIO_HUB="docker.io/istio"
export FORTIO_TAG="latest_release"
export HYPERKUBE_HUB="quay.io/coreos/hyperkube"
export HYPERKUBE_TAG="v1.7.6_coreos.0"
root@k8s-master-8:/home/tools/istio-1.0.6#
4. samples # contains the sample applications used in the official documentation, such as bookinfo and helloworld; we will use these later to demonstrate features.
root@k8s-master-8:/home/tools/istio-1.0.6# cd samples
root@k8s-master-8:/home/tools/istio-1.0.6/samples# ll
total 56
drwxr-xr-x 12 root root 4096 Feb 9 2019 ./
drwxr-xr-x 6 root root 4096 Feb 9 2019 ../
drwxr-xr-x 6 root root 4096 Feb 9 2019 bookinfo/
drwxr-xr-x 2 root root 4096 Feb 9 2019 certs/
-rw-r--r-- 1 root root 3194 Feb 9 2019 CONFIG-MIGRATION.md
drwxr-xr-x 2 root root 4096 Feb 9 2019 health-check/
drwxr-xr-x 2 root root 4096 Feb 9 2019 helloworld/
drwxr-xr-x 5 root root 4096 Feb 9 2019 httpbin/
drwxr-xr-x 2 root root 4096 Feb 9 2019 https/
drwxr-xr-x 2 root root 4096 Feb 9 2019 kubernetes-blog/
drwxr-xr-x 2 root root 4096 Feb 9 2019 rawvm/
-rw-r--r-- 1 root root 185 Feb 9 2019 README.md
drwxr-xr-x 2 root root 4096 Feb 9 2019 sleep/
drwxr-xr-x 2 root root 4096 Feb 9 2019 websockets/
root@k8s-master-8:/home/tools/istio-1.0.6/samples#
5. tools # mainly holds scripts and tools for testing on a local machine
root@k8s-master-8:/home/tools/istio-1.0.6# cd tools/
root@k8s-master-8:/home/tools/istio-1.0.6/tools# ll
total 308
drwxr-xr-x 8 root root 4096 Feb 9 2019 ./
drwxr-xr-x 6 root root 4096 Feb 9 2019 ../
drwxr-xr-x 2 root root 4096 Feb 9 2019 adsload/
-rw-r--r-- 1 root root 1022 Feb 9 2019 cache_buster.yaml
-rw-r--r-- 1 root root 1632 Feb 9 2019 convert_perf_results.py
drwxr-xr-x 2 root root 4096 Feb 9 2019 deb/
-rwxr-xr-x 1 root root 7837 Feb 9 2019 dump_kubernetes.sh*
drwxr-xr-x 2 root root 4096 Feb 9 2019 githubContrib/
drwxr-xr-x 2 root root 4096 Feb 9 2019 hyperistio/
-rw-r--r-- 1 root root 11425 Feb 9 2019 istio-docker.mk
drwxr-xr-x 2 root root 4096 Feb 9 2019 license/
-rw-r--r-- 1 root root 700 Feb 9 2019 perf_istio_rules.yaml
-rw-r--r-- 1 root root 1938 Feb 9 2019 perf_k8svcs.yaml
-rw-r--r-- 1 root root 190640 Feb 9 2019 perf_setup.svg
-rw-r--r-- 1 root root 14585 Feb 9 2019 README.md
-rw-r--r-- 1 root root 1931 Feb 9 2019 rules.yml
-rwxr-xr-x 1 root root 2987 Feb 9 2019 run_canonical_perf_tests.sh*
-rw-r--r-- 1 root root 17082 Feb 9 2019 setup_perf_cluster.sh
-rw-r--r-- 1 root root 761 Feb 9 2019 setup_run
-rwxr-xr-x 1 root root 270 Feb 9 2019 update_all*
drwxr-xr-x 2 root root 4096 Feb 9 2019 vagrant/
root@k8s-master-8:/home/tools/istio-1.0.6/tools#
Istio installation directory layout
| File/Directory | Description |
|---|---|
| bin | The istioctl client tool, used to interact with the Istio API from the command line |
| install | Istio installation scripts and files for the Consul and Kubernetes platforms; on Kubernetes they are split into raw YAML resource files and Helm charts |
| istio.VERSION | A configuration file containing version information and environment variables |
| samples | The sample applications used in the official documentation, such as bookinfo and helloworld; they help demonstrate Istio's features later in the course |
| tools | Scripts and tools for testing on a local machine |
Installing Istio
# Apply the CRD YAML file; a CRD extends Kubernetes with user-defined resource types
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# pwd
/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# ll
total 64
drwxr-xr-x 2 root root 4096 Feb 9 2019 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1075 Feb 9 2019 _affinity.tpl
-rw-r--r-- 1 root root 5031 Feb 9 2019 configmap.yaml
-rw-r--r-- 1 root root 22109 Feb 9 2019 crds.yaml
-rw-r--r-- 1 root root 868 Feb 9 2019 _helpers.tpl
-rw-r--r-- 1 root root 919 Feb 9 2019 install-custom-resources.sh.tpl
-rw-r--r-- 1 root root 8928 Feb 9 2019 sidecar-injector-configmap.yaml
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates#
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/templates# kubectl apply -f crds.yaml
# Install Istio
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes# vim istio-demo.yaml
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes# kubectl apply -f istio-demo.yaml
namespace/istio-system created
configmap/istio-galley-configuration created
configmap/istio-grafana-custom-resources created
configmap/istio-grafana-configuration-dashboards created
configmap/istio-grafana created
configmap/istio-statsd-prom-bridge created
configmap/prometheus created
configmap/istio-security-custom-resources created
configmap/istio created
configmap/istio-sidecar-injector created
serviceaccount/istio-galley-service-account created
serviceaccount/istio-egressgateway-service-account created
serviceaccount/istio-ingressgateway-service-account created
serviceaccount/istio-grafana-post-install-account created
job.batch/istio-grafana-post-install created
serviceaccount/istio-mixer-service-account created
serviceaccount/istio-pilot-service-account created
serviceaccount/prometheus created
serviceaccount/istio-cleanup-secrets-service-account created
job.batch/istio-cleanup-secrets created
serviceaccount/istio-security-post-install-account created
job.batch/istio-security-post-install created
serviceaccount/istio-citadel-service-account created
serviceaccount/istio-sidecar-injector-service-account created
service/istio-galley created
service/istio-egressgateway created
service/istio-ingressgateway created
service/grafana created
service/istio-policy created
service/istio-telemetry created
service/istio-pilot created
service/prometheus created
service/istio-citadel created
service/servicegraph created
service/istio-sidecar-injector created
service/jaeger-query created
service/jaeger-collector created
service/jaeger-agent created
service/zipkin created
service/tracing created
# Check the pods in the istio-system namespace
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes# kubectl get pods -n istio-system
3.1.2.2 Reviewing Kubernetes components and their usage
A review of the Kubernetes components this course relies on.
Deployment
Once a Kubernetes cluster is running, you can deploy containerized applications on it.
Deployment configuration
A Deployment tells Kubernetes how to create and update instances of your application.
Once a Deployment is created, the Kubernetes control plane schedules the application instances onto nodes in the cluster.
Create the nginx_deployment.yaml file
apiVersion: apps/v1          # API version
kind: Deployment             # the resource type is Deployment
metadata:                    # metadata maps to a set of key/value fields
  name: nginx-deployment     # resource name
  labels:                    # labels attached to the Deployment
    app: nginx               # one label with key=app, value=nginx
spec:                        # effectively the ReplicaSet configuration
  replicas: 3                # 3 replicas, i.e. 3 pods
  selector:                  # selects pods carrying the matching label
    matchLabels:             # must match the pod template's labels below
      app: nginx
  template:                  # the pod template
    metadata:
      labels:                # labels attached to the new pods
        app: nginx
    spec:
      containers:            # container definitions
      - name: nginx          # container name
        image: nginx:1.7.9   # container image
        ports:
        - containerPort: 80  # container port
Run the command to create the resource
kubectl apply -f nginx_deployment.yaml
root@k8s-master-9:~# vim nginx_deployment.yaml
root@k8s-master-9:~# kubectl apply -f nginx_deployment.yaml
deployment.apps/nginx-deployment created
# The resource was created, and with it the pods — in k8s, a Deployment manages its pods.
root@k8s-master-9:~#
Check the pods
kubectl get pods
root@k8s-master-9:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 3m43s
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 3m43s
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 3m43s
root@k8s-master-9:~#
# Show detailed pod information
root@k8s-master-9:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 4m13s 10.88.0.2 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 4m13s 10.88.0.4 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 4m13s 10.88.0.3 k8s-node-13 <none> <none>
root@k8s-master-9:~#
Check the Deployment
kubectl get deployment
root@k8s-master-9:~# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 7m32s
# Detailed view of the Deployment
root@k8s-master-9:~# kubectl get deployment -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deployment 3/3 3 3 7m52s nginx nginx:1.7.9 app=nginx
root@k8s-master-9:~#
Labels and Selectors
Change 'nginx' under 'selector' to 'nginx-1' and re-apply the manifest:
root@k8s-master-9:~# vim nginx_deployment.yaml
root@k8s-master-9:~# cat nginx_deployment.yaml
apiVersion: apps/v1          # API version
kind: Deployment             # the resource type is Deployment
metadata:                    # metadata maps to a set of key/value fields
  name: nginx-deployment     # resource name
  labels:                    # labels attached to the Deployment
    app: nginx               # one label with key=app, value=nginx
spec:                        # effectively the ReplicaSet configuration
  replicas: 3                # 3 replicas, i.e. 3 pods
  selector:                  # selects pods carrying the matching label
    matchLabels:             # now looks for key=app, value=nginx-1
      app: nginx-1
  template:                  # the pod template
    metadata:
      labels:                # labels attached to the new pods
        app: nginx
    spec:
      containers:            # container definitions
      - name: nginx          # container name
        image: nginx:1.7.9   # container image
        ports:
        - containerPort: 80  # container port
root@k8s-master-9:~#
# Re-apply the manifest
root@k8s-master-9:~# kubectl apply -f nginx_deployment.yaml
The Deployment "nginx-deployment" is invalid:
* spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`
* spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx-1"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
# Show the labels on the pods
kubectl get pods --show-labels
root@k8s-master-9:~# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 19m app=nginx,pod-template-hash=9d6cbcc65
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 19m app=nginx,pod-template-hash=9d6cbcc65
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 19m app=nginx,pod-template-hash=9d6cbcc65
root@k8s-master-9:~#
Namespace
Namespaces isolate resources — Pods, Services, Deployments, and so on. Pass a namespace with the '-n' flag; if omitted, the default namespace 'default' is used.
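For a quick illustration (using the built-in kube-system namespace), '-n' can be passed per command, or the namespace can be made the default for the current kubectl context — both commands below are standard kubectl:
# Query one namespace explicitly:
kubectl get pods -n kube-system
# Or switch the context's default namespace, so '-n' can be omitted afterwards:
kubectl config set-context --current --namespace=kube-system
kubectl get pods   # now implicitly targets kube-system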
# List all namespaces: kubectl get namespace
kubectl get ns
# Inspect the kube-system namespace: kubectl get pods -n kube-system
root@k8s-master-9:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-h2glg 1/1 Running 2 (45m ago) 2d19h
coredns-66f779496c-h8zr7 1/1 Running 2 (45m ago) 2d19h
etcd-k8s-master-9 1/1 Running 2 (45m ago) 2d19h
kube-apiserver-k8s-master-9 1/1 Running 2 (45m ago) 2d19h
kube-controller-manager-k8s-master-9 1/1 Running 2 (45m ago) 2d19h
kube-proxy-7vpxd 1/1 Running 2 (45m ago) 2d19h
kube-proxy-97555 1/1 Running 2 (44m ago) 2d18h
kube-scheduler-k8s-master-9 1/1 Running 2 (45m ago) 2d19h
root@k8s-master-9:~#
# 1. Create your own namespace: vim my-namespace.yaml
my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: myns
root@k8s-master-9:~# vim my-namespace.yaml
root@k8s-master-9:~# cat my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: myns
root@k8s-master-9:~#
# 2. Apply it: kubectl apply -f my-namespace.yaml
root@k8s-master-9:~# vim my-namespace.yaml
root@k8s-master-9:~# kubectl apply -f my-namespace.yaml
namespace/myns created # the namespace was created successfully
# 3. List the namespaces: kubectl get ns
root@k8s-master-9:~# kubectl get ns
NAME STATUS AGE
default Active 2d20h
kube-node-lease Active 2d20h
kube-public Active 2d20h
kube-system Active 2d20h
myns Active 4m25s
root@k8s-master-9:~#
# 4. Delete a namespace
Method 1: delete via the resource file: kubectl delete -f my-namespace.yaml
root@k8s-master-9:~# kubectl delete -f my-namespace.yaml
namespace "myns" deleted
root@k8s-master-9:~# kubectl get ns
NAME STATUS AGE
default Active 2d20h
kube-node-lease Active 2d20h
kube-public Active 2d20h
kube-system Active 2d20h
root@k8s-master-9:~#
The namespace is gone; the manifest file itself is untouched:
root@k8s-master-9:~# cat my-namespace.yaml
apiVersion: v1   # API version for a Namespace
kind: Namespace  # the resource type
metadata:
  name: myns     # the namespace name
root@k8s-master-9:~#
Method 2: delete the namespace by name:
kubectl delete namespaces <namespace-name>
kubectl delete namespaces myns
root@k8s-master-9:~# kubectl get namespace
NAME STATUS AGE
default Active 2d20h
kube-node-lease Active 2d20h
kube-public Active 2d20h
kube-system Active 2d20h
myns Active 5m52s
root@k8s-master-9:~# kubectl delete namespaces myns
namespace "myns" deleted
root@k8s-master-9:~# kubectl get namespace
NAME STATUS AGE
default Active 2d20h
kube-node-lease Active 2d20h
kube-public Active 2d20h
kube-system Active 2d20h
root@k8s-master-9:~#
# List the resources in the kube-system namespace: kubectl get pods -n kube-system
root@k8s-master-9:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-h2glg 1/1 Running 2 (73m ago) 2d20h
coredns-66f779496c-h8zr7 1/1 Running 2 (73m ago) 2d20h
etcd-k8s-master-9 1/1 Running 2 (73m ago) 2d20h
kube-apiserver-k8s-master-9 1/1 Running 2 (73m ago) 2d20h
kube-controller-manager-k8s-master-9 1/1 Running 2 (73m ago) 2d20h
kube-proxy-7vpxd 1/1 Running 2 (73m ago) 2d20h
kube-proxy-97555 1/1 Running 2 (72m ago) 2d19h
kube-scheduler-k8s-master-9 1/1 Running 2 (73m ago) 2d20h
# The same list with details: kubectl get pods -n kube-system -o wide
root@k8s-master-9:~# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66f779496c-h2glg 1/1 Running 2 (73m ago) 2d20h 10.88.0.6 k8s-master-9 <none> <none>
coredns-66f779496c-h8zr7 1/1 Running 2 (73m ago) 2d20h 10.88.0.7 k8s-master-9 <none> <none>
etcd-k8s-master-9 1/1 Running 2 (73m ago) 2d20h 192.168.222.133 k8s-master-9 <none> <none>
kube-apiserver-k8s-master-9 1/1 Running 2 (73m ago) 2d20h 192.168.222.133 k8s-master-9 <none> <none>
kube-controller-manager-k8s-master-9 1/1 Running 2 (73m ago) 2d20h 192.168.222.133 k8s-master-9 <none> <none>
kube-proxy-7vpxd 1/1 Running 2 (73m ago) 2d20h 192.168.222.133 k8s-master-9 <none> <none>
kube-proxy-97555 1/1 Running 2 (72m ago) 2d19h 192.168.222.134 k8s-node-13 <none> <none>
kube-scheduler-k8s-master-9 1/1 Running 2 (73m ago) 2d20h 192.168.222.133 k8s-master-9 <none> <none>
root@k8s-master-9:~#
Note:
Deleting a namespace automatically deletes every resource that belongs to it.
The default and kube-system namespaces cannot be deleted.
Service
Access from inside the cluster (ClusterIP)
# whoami releases: https://github.com/traefik/whoami/releases
https://github.com/traefik/whoami/releases/download/v1.10.1/whoami_v1.10.1_linux_amd64.tar.gz
https://github.com/traefik/whoami/archive/refs/tags/v1.10.1.tar.gz
1. Download
wget https://github.com/traefik/whoami/releases/download/v1.10.1/whoami_v1.10.1_linux_amd64.tar.gz
root@k8s-master-8:~# wget https://github.com/traefik/whoami/releases/download/v1.10.1/whoami_v1.10.1_linux_amd64.tar.gz
--2024-04-16 20:32:32-- https://github.com/traefik/whoami/releases/download/v1.10.1/whoami_v1.10.1_linux_amd64.tar.gz
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/42946496/d58f3a94-333b-4fe4-9455-38694f0ca3ab?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240416%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240416T123232Z&X-Amz-Expires=300&X-Amz-Signature=89a54b4b0f7713861cc032ad0e975c41f112c3bb66d39180aa080ce29f95612f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=42946496&response-content-disposition=attachment%3B%20filename%3Dwhoami_v1.10.1_linux_amd64.tar.gz&response-content-type=application%2Foctet-stream [following]
--2024-04-16 20:32:32-- https://objects.githubusercontent.com/github-production-release-asset-2e65be/42946496/d58f3a94-333b-4fe4-9455-38694f0ca3ab?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240416%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240416T123232Z&X-Amz-Expires=300&X-Amz-Signature=89a54b4b0f7713861cc032ad0e975c41f112c3bb66d39180aa080ce29f95612f&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=42946496&response-content-disposition=attachment%3B%20filename%3Dwhoami_v1.10.1_linux_amd64.tar.gz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2331082 (2.2M) [application/octet-stream]
Saving to: ‘whoami_v1.10.1_linux_amd64.tar.gz’
whoami_v1.10.1_linux_amd64.tar.gz 100%[=============================================================================>] 2.22M 55.3KB/s in 21s
2024-04-16 20:33:07 (108 KB/s) - ‘whoami_v1.10.1_linux_amd64.tar.gz’ saved [2331082/2331082]
root@k8s-master-8:~# ls
calico-3.27.3.tar.gz helm-v3.14.3-linux-amd64.tar.gz kubebuilder-3.14.1.tar.gz projects whoami_v1.10.1_linux_amd64.tar.gz
cri-containerd-cni-1.7.14-linux-amd64.tar.gz ip_forward~ linux-amd64 snap
go1.22.2.linux-amd64.tar.gz kubebuilder-3.14.1 nginx-deploy.yaml test.go
2. Extract
root@k8s-master-8:~# tar -zxf whoami_v1.10.1_linux_amd64.tar.gz
root@k8s-master-8:~# ls
calico-3.27.3.tar.gz ip_forward~ linux-amd64 test.go
cri-containerd-cni-1.7.14-linux-amd64.tar.gz kubebuilder-3.14.1 nginx-deploy.yaml whoami
go1.22.2.linux-amd64.tar.gz kubebuilder-3.14.1.tar.gz projects whoami_v1.10.1_linux_amd64.tar.gz
helm-v3.14.3-linux-amd64.tar.gz LICENSE snap
3. Move the binary
root@k8s-master-8:~# mv whoami /usr/local/bin/
root@k8s-master-8:~# ls /usr/local/bin/
containerd containerd-shim-runc-v1 containerd-stress critest ctr kubebuilder
containerd-shim containerd-shim-runc-v2 crictl ctd-decoder helm whoami
root@k8s-master-8:~#
Pods can already reach one another inside the cluster, but a Pod is not stable: a Deployment may scale its Pods up or down at any time, and every new Pod gets a new IP. What we want is a fixed IP through which the cluster can always reach the application — the Service IP described earlier in the architecture overview.
Kubernetes solves this with the Service: a Service keeps a stable IP and internally load-balances across the Pods that carry the matching label.
A Service has two common types:
1. ClusterIP — access from within the cluster only.
2. NodePort — exposes a port on every node, so external requests can reach the cluster.
Pods sharing the same label are grouped behind one Service, and the Service load-balances across them.
Create the whoami-deployment.yaml file (whoami is a small demo web service used here as the example)
apiVersion: apps/v1            # API version
kind: Deployment               # the resource type is Deployment
metadata:                      # metadata maps to a set of key/value fields
  name: whoami-deployment      # resource name
  labels:                      # labels attached to the new Pods
    app: whoami                # key=app, value=whoami
spec:                          # describes the desired state of the object
  replicas: 3                  # 3 replicas, i.e. 3 pods
  selector:                    # selects pods carrying the matching label
    matchLabels:               # must match the pod template's labels
      app: whoami
  template:                    # the pod template
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami           # container name; the image follows
        image: jwilder/whoami  # container image
        ports:
        - containerPort: 8000  # container port
Apply the manifest
# Create the resources: kubectl apply -f whoami-deployment.yaml
root@k8s-master-9:~# kubectl apply -f whoami-deployment.yaml
deployment.apps/whoami-deployment created # created successfully
root@k8s-master-9:~#
# Show the details: kubectl get pods -o wide
root@k8s-master-9:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 3h8m 10.88.0.2 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 3h8m 10.88.0.4 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 3h8m 10.88.0.3 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-d4xjh 1/1 Running 0 2m22s 10.88.0.7 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-jlsrp 1/1 Running 0 2m22s 10.88.0.5 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-t6vqd 1/1 Running 0 2m22s 10.88.0.6 k8s-node-13 <none> <none>
root@k8s-master-9:~# curl 10.88.0.6:8000
# Delete one pod (the Deployment will recreate it): kubectl delete pod whoami-deployment-65c8f4bc9f-d4xjh
root@k8s-master-9:~# kubectl delete pod whoami-deployment-65c8f4bc9f-d4xjh
pod "whoami-deployment-65c8f4bc9f-d4xjh" deleted
root@k8s-master-9:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 3h18m 10.88.0.2 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 3h18m 10.88.0.4 k8s-node-13 <none> <none>
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 3h18m 10.88.0.3 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-jlsrp 1/1 Running 0 11m 10.88.0.5 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-m6s4f 1/1 Running 0 34s 10.88.0.8 k8s-node-13 <none> <none>
whoami-deployment-65c8f4bc9f-t6vqd 1/1 Running 0 11m 10.88.0.6 k8s-node-13 <none> <none>
root@k8s-master-9:~#
Check the Services
# List the services: kubectl get svc
root@k8s-master-9:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
root@k8s-master-9:~# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
root@k8s-master-9:~#
Create a Service for the Deployment (the default type is ClusterIP)
kubectl expose deployment whoami-deployment
root@k8s-master-9:~# kubectl expose deployment whoami-deployment
service/whoami-deployment exposed
root@k8s-master-9:~#
# List the services again — a second one has appeared
root@k8s-master-9:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d22h
whoami-deployment ClusterIP 10.106.210.5 <none> 8000/TCP 2m53s
root@k8s-master-9:~# curl 10.106.210.5:8000
root@k8s-master-9:~# curl 10.106.210.5:8000
# Inspect what the service is doing: kubectl describe svc whoami-deployment
root@k8s-master-9:~# kubectl describe svc whoami-deployment
Name: whoami-deployment
Namespace: default
Labels: app=whoami
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.210.5
IPs: 10.106.210.5
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.88.0.5:8000,10.88.0.6:8000,10.88.0.8:8000
Session Affinity: None
Events: <none>
root@k8s-master-9:~#
# Scale up from 3 replicas to 5
kubectl scale deployment whoami-deployment --replicas=5
root@k8s-master-9:~# kubectl scale deployment whoami-deployment --replicas=5
deployment.apps/whoami-deployment scaled
root@k8s-master-9:~#
# Check the pods
kubectl get pods
root@k8s-master-9:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-9d6cbcc65-5q4b9 1/1 Running 0 3h40m
nginx-deployment-9d6cbcc65-qf5km 1/1 Running 0 3h40m
nginx-deployment-9d6cbcc65-vb2tb 1/1 Running 0 3h40m
whoami-deployment-65c8f4bc9f-ccrq7 1/1 Running 0 82s
whoami-deployment-65c8f4bc9f-jlsrp 1/1 Running 0 34m
whoami-deployment-65c8f4bc9f-m6s4f 1/1 Running 0 22m
whoami-deployment-65c8f4bc9f-t6vqd 1/1 Running 0 34m
whoami-deployment-65c8f4bc9f-tgrq5 1/1 Running 0 82s
root@k8s-master-9:~#
# Describe the service again — two more endpoints have appeared
root@k8s-master-9:~# kubectl describe svc whoami-deployment
Name: whoami-deployment
Namespace: default
Labels: app=whoami
Annotations: <none>
Selector: app=whoami
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.210.5
IPs: 10.106.210.5
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.88.0.10:8000,10.88.0.5:8000,10.88.0.6:8000 + 2 more...
Session Affinity: None
Events: <none>
root@k8s-master-9:~#
First delete the old service
kubectl delete svc whoami-deployment
root@k8s-master-9:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d
whoami-deployment ClusterIP 10.106.210.5 <none> 8000/TCP 86m
root@k8s-master-9:~# kubectl delete svc whoami-deployment
service "whoami-deployment" deleted
root@k8s-master-9:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d
root@k8s-master-9:~#
To cope with Pod IPs changing, we use a Service.
Summary: a Service exists precisely to mask the instability of Pod IPs, and what we explored above is one Service type — ClusterIP. Every Service also gets a stable DNS name, as shown in the sketch below.
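A minimal sketch of DNS-based access, of the form <service>.<namespace>.svc.<cluster-domain> (assuming the default cluster domain cluster.local, a working CoreDNS, and that the busybox image can be pulled):
# Run a throwaway pod and hit the Service by DNS name instead of by IP:
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://whoami-deployment.default.svc.cluster.local:8000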
External access to the Pods (NodePort), i.e. service-nodeport
NodePort is another Service type.
Put simply: the outside world can already reach the nodes' physical IPs, so the same port — say 32008 — is opened on every node in the cluster.
Steps
1. Delete the previous service:
kubectl delete svc whoami-deployment
2. Create the NodePort service:
kubectl expose deployment whoami-deployment --type=NodePort
root@k8s-master-8:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-f7f5c78c5-z8rk9 1/1 Running 1 (58m ago) 21h 10.244.139.12 k8s-node-14 <none> <none>
root@k8s-master-8:~# kubectl apply -f whoami-deployment.yaml
deployment.apps/whoami-deployment created
root@k8s-master-8:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-f7f5c78c5-z8rk9 1/1 Running 1 (58m ago) 21h 10.244.139.12 k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-dtrhb 0/1 ContainerCreating 0 5s <none> k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-wvqxt 0/1 ContainerCreating 0 5s <none> k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-znlgq 0/1 ContainerCreating 0 5s <none> k8s-node-14 <none> <none>
root@k8s-master-8:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d7h
nginx-service NodePort 10.100.104.95 <none> 80:30080/TCP 35h
# Create the NodePort service
root@k8s-master-8:~# kubectl expose deployment whoami-deployment --type=NodePort
service/whoami-deployment exposed
root@k8s-master-8:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d7h
nginx-service NodePort 10.100.104.95 <none> 80:30080/TCP 36h
whoami-deployment NodePort 10.96.198.98 <none> 8000:30320/TCP 21s
root@k8s-master-8:~#
# Check the port on the node: netstat -anp | grep 30320
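Note that depending on the kube-proxy mode and version, a NodePort may not show up as a LISTEN socket, so curl against a node IP is the more reliable check. A sketch (the node IP below is k8s-node-14 from this cluster; substitute your own):
netstat -anp | grep 30320           # may print nothing in iptables mode
curl http://192.168.222.136:30320   # should return the whoami response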
Summary: Service
A Service comes in two flavors, ClusterIP and NodePort:
1. ClusterIP handles access inside the cluster, with built-in load balancing.
2. NodePort lets external requests into the cluster, but it occupies a port on every node, which is hard to secure.
So next we review the Ingress component.
Ingress
We saw that a NodePort Service satisfies the need for external access to Pods, but it ties up ports on every node, so it is not a good long-term approach.
Cleaning up (a recap of the commands used so far):
# 1. Create the resources
kubectl apply -f whoami-deployment.yaml
# 2. Delete the pods
kubectl delete -f whoami-deployment.yaml
# 3. Create the service
kubectl expose deployment whoami-deployment --type=NodePort
# Delete the service
kubectl delete svc whoami-deployment
# Scale from 3 replicas to 5
kubectl scale deployment whoami-deployment --replicas=5
# Delete the deployment and its pods
root@k8s-master-8:~# kubectl delete -f whoami-deployment.yaml
deployment.apps "whoami-deployment" deleted
root@k8s-master-8:~#
# Delete the service named 'whoami-deployment'
root@k8s-master-8:~# kubectl delete svc whoami-deployment
service "whoami-deployment" deleted
root@k8s-master-8:~#
So how do we still give the outside world access to the cluster?
We will use an Ingress to reach the whoami pods. We again need Deployment and Service resources, plus the Ingress itself, defined in two files:
1. whoami-ingress.yaml
2. whoami-service.yaml
Create the whoami-service.yaml file (it defines both the pods and the service):
vim whoami-service.yaml
apiVersion: apps/v1            # API version
kind: Deployment               # the resource type is Deployment
metadata:                      # metadata maps to a set of key/value fields
  name: whoami-deployment      # resource name
  labels:                      # labels attached to the new Pods
    app: whoami                # key=app, value=whoami
spec:                          # describes the desired state of the object
  replicas: 3                  # 3 replicas, i.e. 3 pods
  selector:                    # selects pods carrying the matching label
    matchLabels:               # must match the pod template's labels
      app: whoami
  template:                    # the pod template
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami           # container name; the image follows
        image: jwilder/whoami
        ports:
        - containerPort: 8000  # container port
---
apiVersion: v1
kind: Service
metadata:
name: whoami-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8000
selector:
app: whoami
# No type is specified here, so the Service gets the default, ClusterIP
Apply the file — it creates both the whoami-deployment and the whoami-service resources
# Create the two resources
root@k8s-master-8:~# kubectl apply -f whoami-service.yaml
deployment.apps/whoami-deployment created # the whoami-deployment resource
service/whoami-service created # the whoami-service resource
root@k8s-master-8:~#
Create the Ingress resource
vim whoami-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:                       # routing rules
  - host: whoami.qy.com        # the domain name to match
    http:
      paths:
      - path: /                # path rule; / matches everything
        backend:
          serviceName: whoami-service
          # forward matching requests to the whoami-service created above
          servicePort: 80      # the service port
Apply it
# Create the Ingress resource
root@k8s-master-8:~# kubectl apply -f whoami-ingress.yaml
error: resource mapping not found for name: "whoami-ingress" namespace: "" from "whoami-ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
ensure CRDs are installed first
root@k8s-master-8:~# kubectl apply -f whoami-ingress.yaml
ingress.extensions/whoami-ingress created # the whoami-ingress resource was created
# List the Ingress resources
root@k8s-master-8:~# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
whoami-ingress whoami.qy.com 80 15s
# Describe the Ingress
root@k8s-master-8:~# kubectl describe ingress whoami-ingress
The domain maps to one service, and the service fronts the three pod endpoints -> whoami-service:80(192.168.190.112:8000,192.168.190.82:8000,192.168.190.81:8000)
In other words, requests still flow through whoami-service, and the service's load balancing distributes them across the pods.
Next, add an entry to the Windows hosts file so the name resolves.
Opening the hosts file, we see:
192.168.187.137 whoami.qy.com
The domain is now bound to the IP.
Flow summary:
The browser sends the request to the ingress; the ingress matches its configured domain and forwarding rules and hands the request to whoami-service; the service knows every pod IP, so its load balancer finally delivers the request to a pod's service.
Ingress-based forwarding is more flexible and does not occupy node ports, so it is the recommended way to route external requests into the cluster.
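On current Kubernetes versions the extensions/v1beta1 Ingress API no longer exists (that is exactly the 'resource mapping not found' error shown above), so the manifest must be rewritten against networking.k8s.io/v1. A sketch of the equivalent v1 manifest, assuming an ingress controller is installed and the whoami-service from earlier exists:
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  # add ingressClassName here if your controller requires it
  rules:
  - host: whoami.qy.com
    http:
      paths:
      - path: /
        pathType: Prefix       # required in the v1 API
        backend:
          service:
            name: whoami-service
            port:
              number: 80
EOF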
3.1.2.3 A first taste of Istio
Docker deploys workloads as containers and Kubernetes deploys them as pods — so where does Istio's sidecar show up?
A guess: besides the business container, might a pod hold one extra container acting as the sidecar?
Verifying the guess
1. Prepare a manifest, first-istio.yaml: vim first-istio.yaml
apiVersion: apps/v1        # API version
kind: Deployment           # the resource type is Deployment
metadata:
  name: first-istio
spec:
  selector:
    matchLabels:
      app: first-istio
  replicas: 1
  template:
    metadata:
      labels:
        app: first-istio
    spec:
      containers:
      - name: first-istio  # container name; the image follows
        image: registry.cn-hangzhou.aliyuncs.com/sixupiaofei/spring-docker-demo:1.0
        ports:
        - containerPort: 8080  # container port
---
apiVersion: v1
kind: Service              # the resource type is Service
metadata:
  name: first-istio        # resource name
spec:
  ports:
  - port: 80               # expose port 80
    protocol: TCP          # TCP protocol
    targetPort: 8080       # redirect to container port 8080
  selector:
    app: first-istio       # select the matching pods by label
  type: ClusterIP          # Service type ClusterIP
Apply the manifest
# It creates two resources: a Deployment and a Service
root@k8s-master-8:~# kubectl apply -f first-istio.yaml
deployment.apps/first-istio created # the Deployment
service/first-istio created # the Service
root@k8s-master-8:~#
# Check the pods
root@k8s-master-8:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
first-istio-dcf55c475-v5rtt 1/1 Running 0 4m
nginx-deployment-f7f5c78c5-z8rk9 1/1 Running 2 (3h45m ago) 35h
whoami-deployment-65c8f4bc9f-mmwxv 1/1 Running 0 168m
whoami-deployment-65c8f4bc9f-nlgln 1/1 Running 0 168m
whoami-deployment-65c8f4bc9f-rb255 1/1 Running 0 168m
# Pod details
root@k8s-master-8:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
first-istio-dcf55c475-v5rtt 1/1 Running 0 4m14s 10.244.139.25 k8s-node-14 <none> <none>
nginx-deployment-f7f5c78c5-z8rk9 1/1 Running 2 (3h45m ago) 35h 10.244.139.18 k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-mmwxv 1/1 Running 0 168m 10.244.139.22 k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-nlgln 1/1 Running 0 168m 10.244.139.23 k8s-node-14 <none> <none>
whoami-deployment-65c8f4bc9f-rb255 1/1 Running 0 168m 10.244.139.24 k8s-node-14 <none> <none>
root@k8s-master-8:~#
# Inspect how the pod was started: kubectl describe pod first-istio-dcf55c475-v5rtt
root@k8s-master-8:~# kubectl describe pod first-istio-dcf55c475-v5rtt
Name: first-istio-dcf55c475-v5rtt
Namespace: default
Priority: 0
Service Account: default
Node: k8s-node-14/192.168.222.136
Start Time: Wed, 17 Apr 2024 10:35:49 +0800
Labels: app=first-istio
pod-template-hash=dcf55c475
Annotations: cni.projectcalico.org/containerID: d6593eaba4007372c7263dbaef82f0915729e9c63a9dc7451417593e92994264
cni.projectcalico.org/podIP: 10.244.139.25/32
cni.projectcalico.org/podIPs: 10.244.139.25/32
Status: Running
IP: 10.244.139.25
IPs:
IP: 10.244.139.25
Controlled By: ReplicaSet/first-istio-dcf55c475
Containers:
first-istio:
Container ID: containerd://86cf87910acc8d284ff52eb80c0662b32a32f1f5089af25e79cd34fcb133e779
Image: registry.cn-hangzhou.aliyuncs.com/sixupiaofei/spring-docker-demo:1.0
# How do we get an extra sidecar container into the pod?
There are two ways to inject the sidecar:
1. Manual injection
2. Automatic injection
3.1.2.4 Manual injection
Let's walk through manual injection first.
# 1. Delete the earlier resources first: kubectl delete -f first-istio.yaml
root@k8s-master-8:~# kubectl delete -f first-istio.yaml
deployment.apps "first-istio" deleted
service "first-istio" deleted
root@k8s-master-8:~#
# Manual injection syntax: istioctl kube-inject -f first-istio.yaml | kubectl apply -f -
# ('-f -' makes kubectl read the injected YAML from stdin; passing the file name again would bypass the injection)
root@k8s-master-9:~# istioctl kube-inject -f first-istio.yaml | kubectl apply -f -
deployment.apps/first-istio created
service/first-istio created
# Check the created pods
root@k8s-master-9:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
first-istio-dcf55c475-v29mv 1/1 Running 0 12m
nginx-deployment-9d6cbcc65-b6q99 1/1 Running 1 (40m ago) 15h
nginx-deployment-9d6cbcc65-h9k2d 1/1 Running 1 (40m ago) 15h
nginx-deployment-9d6cbcc65-pcpvz 1/1 Running 1 (40m ago) 15h
whoami-deployment-65c8f4bc9f-kj7xl 1/1 Running 1 (40m ago) 15h
whoami-deployment-65c8f4bc9f-njn98 1/1 Running 1 (40m ago) 15h
whoami-deployment-65c8f4bc9f-p88cx 1/1 Running 1 (40m ago) 15h
whoami-deployment-65c8f4bc9f-p8czx 1/1 Running 1 (40m ago) 15h
whoami-deployment-65c8f4bc9f-zmdjd 1/1 Running 1 (40m ago) 15h
root@k8s-master-9:~#
# Create the istio-system namespace
root@k8s-master-9:~# kubectl create namespace istio-system
namespace/istio-system created
# List all namespaces: kubectl get namespace (or kubectl get ns)
Summary:
The YAML that actually reaches the cluster is no longer our original file: istioctl kube-inject adds a proxy image (prepared in advance) to it. Istio injects the sidecar by rewriting the YAML.
Delete the injected resources (both commands shown earlier were missing 'kubectl'; the correct pipeline is):
istioctl kube-inject -f first-istio.yaml | kubectl delete -f -
Question:
Do we really have to type that long pipeline every time we want a sidecar — is there a more convenient way?
Remove the manually injected resources
root@k8s-master-9:~# istioctl kube-inject -f first-istio.yaml | kubectl delete -f -
# Query the resources in a given namespace
root@k8s-master-9:~# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6f68dfd8f4-hfv8z 1/1 Running 0 17h
istio-egressgateway-9d7d57984-rq8pw 0/1 ContainerCreating 0 17h
istio-ingressgateway-5ff4fb69fc-s4rff 0/1 ContainerCreating 0 17h
jaeger-7d7d59b9d-mmpnx 1/1 Running 0 17h
kiali-588bc98cd-x5x6v 1/1 Running 0 17h
prometheus-7545dd48db-9dkvm 2/2 Running 1 (3h13m ago) 17h
root@k8s-master-9:~# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-h2glg 1/1 Running 5 (3h19m ago) 4d11h
coredns-66f779496c-h8zr7 1/1 Running 5 (3h19m ago) 4d11h
etcd-k8s-master-9 1/1 Running 5 (3h19m ago) 4d11h
kube-apiserver-k8s-master-9 1/1 Running 5 (3h19m ago) 4d11h
kube-controller-manager-k8s-master-9 1/1 Running 6 (3h19m ago) 4d11h
kube-proxy-7vpxd 1/1 Running 5 (3h19m ago) 4d11h
kube-proxy-97555 1/1 Running 5 (3h19m ago) 4d9h
kube-scheduler-k8s-master-9 1/1 Running 6 (3h19m ago) 4d11h
root@k8s-master-9:~#
3.1.2.5 Automatic sidecar injection
Automatic injection is tied to a namespace: create a namespace and enable auto-injection on it, and every resource later deployed into that namespace automatically gets a sidecar.
Create the namespace
kubectl create namespace my-istio-ns
root@k8s-master-9:~# kubectl create namespace my-istio-ns
namespace/my-istio-ns created
root@k8s-master-9:~#
Enable auto-injection on the namespace
kubectl label namespace my-istio-ns istio-injection=enabled
root@k8s-master-9:~# kubectl label namespace my-istio-ns istio-injection=enabled
namespace/my-istio-ns labeled
root@k8s-master-9:~#
Now just deploy resources into that namespace
# 1. Create the resources in the namespace
kubectl apply -f first-istio.yaml -n my-istio-ns
root@k8s-master-9:~# kubectl apply -f first-istio.yaml -n my-istio-ns
deployment.apps/first-istio created
service/first-istio created
root@k8s-master-9:~#
# 2. Check that the resources exist in the my-istio-ns namespace
kubectl get pods -n my-istio-ns
root@k8s-master-9:~# kubectl get pods -n my-istio-ns
NAME READY STATUS RESTARTS AGE
first-istio-dcf55c475-g5ktg 2/2 Running 0 3m33s
root@k8s-master-9:~#
# 3. Describe the pod
kubectl describe pod first-istio-dcf55c475-g5ktg -n my-istio-ns
root@k8s-master-9:~# kubectl describe pod first-istio-dcf55c475-g5ktg -n my-istio-ns
Name: first-istio-dcf55c475-g5ktg
Namespace: my-istio-ns
Priority: 0
Service Account: default
Node: k8s-node-13/192.168.222.134
Start Time: Thu, 18 Apr 2024 08:20:01 +0800
Labels: app=first-istio
pod-template-hash=dcf55c475
Annotations: <none>
Status: Running
IP: 10.88.0.45
IPs:
IP: 10.88.0.45
IP: 2001:4860:4860::2d
Controlled By: ReplicaSet/first-istio-dcf55c475
Containers:
first-istio:
Container ID: containerd://914e7534128a08a0bf4a5bbdd4566d1d9265dbb9665aa693ef2f44b975f71f3c
Image: registry.cn-hangzhou.aliyuncs.com/sixupiaofei/spring-docker-demo:1.0
Image ID: registry.cn-hangzhou.aliyuncs.com/sixupiaofei/spring-docker-demo@sha256:4e939e85c553d14351fec840ca54f179bfc76467cdb9e368b6e47d440c48d9f1
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Check whether resources exist in the istio-system namespace
root@k8s-master-9:~# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6f68dfd8f4-hfv8z 1/1 Running 0 17h
istio-egressgateway-9d7d57984-rq8pw 0/1 ContainerCreating 0 17h
istio-ingressgateway-5ff4fb69fc-s4rff 0/1 ContainerCreating 0 17h
jaeger-7d7d59b9d-mmpnx 1/1 Running 0 17h
kiali-588bc98cd-x5x6v 1/1 Running 0 17h
prometheus-7545dd48db-9dkvm 2/2 Running 1 (3h39m ago) 17h
root@k8s-master-9:~#
# 4. List the Services in the namespace
kubectl get svc -n my-istio-ns
root@k8s-master-9:~# kubectl get svc -n my-istio-ns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
first-istio ClusterIP 10.102.6.86 <none> 80/TCP 18m
root@k8s-master-9:~#
# 5. Delete the resources in the namespace
root@k8s-master-9:~# kubectl delete -f first-istio.yaml -n my-istio-ns
deployment.apps "first-istio" deleted
service "first-istio" deleted
root@k8s-master-9:~#
# Check the namespace again
root@k8s-master-9:~# kubectl get svc -n my-istio-ns
No resources found in my-istio-ns namespace.
root@k8s-master-9:~#
Summary:
Manual and automatic injection work on the same principle: a proxy container — the sidecar — is appended to the pod definition in the YAML. Automatic injection is the recommended way to get sidecars.
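A quick way to confirm the injection (a sketch using the pod name from the session above — yours will differ): list the container names inside the pod; an istio-proxy entry alongside the business container confirms the sidecar.
kubectl get pod first-istio-dcf55c475-g5ktg -n my-istio-ns \
  -o jsonpath='{.spec.containers[*].name}'
# expected output: first-istio istio-proxy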
Istio's monitoring features
Prometheus and Grafana
- Prometheus stores the services' monitoring data, reported by the Istio Mixer component.
- Grafana, an open-source data visualization tool, displays the metrics Prometheus collects.
Istio deploys Grafana and Prometheus for us out of the box.
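If no Ingress is set up yet, port-forwarding is a quick way to reach both dashboards (a sketch, assuming the prometheus and grafana Services in istio-system shown later in this section):
kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
kubectl -n istio-system port-forward svc/grafana 3000:3000 &
# then browse http://localhost:9090 (Prometheus) and http://localhost:3000 (Grafana)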
1. List the components Istio ships with
kubectl get pods -n istio-system
root@k8s-master-9:~# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-6f68dfd8f4-hfv8z 1/1 Running 1 (14m ago) 42h
istio-egressgateway-9d7d57984-rq8pw 0/1 ContainerCreating 0 42h
istio-ingressgateway-5ff4fb69fc-s4rff 0/1 ContainerCreating 0 43h
jaeger-7d7d59b9d-mmpnx 1/1 Running 1 (14m ago) 42h
kiali-588bc98cd-x5x6v 1/1 Running 1 (14m ago) 42h
prometheus-7545dd48db-9dkvm 2/2 Running 3 (14m ago) 42h
root@k8s-master-9:~#
root@k8s-master-9:/home/tools/istio-1.21.1/manifests/examples/customresource# kubectl apply -f istio_v1alpha1_istiooperator_cr.yaml
istiooperator.install.istio.io/example-istiocontrolplane created
1. Look at the Istio installation file istio-demo.yaml
vim /home/tools/istio-1.0.6/install/kubernetes/istio-demo.yaml
2. Monitoring: vim prometheus-ingress.yaml
root@k8s-master-9:~# vim prometheus-ingress.yaml
root@k8s-master-9:~# cat prometheus-ingress.yaml
apiVersion: networking.k8s.io/v1
# apiVersion: extensions/v1beta1
kind: Ingress
# kind: prometheus-ingress
metadata:
name: prometheus-ingress
namespace: istio-system
spec:
rules:
- host: prometheus.istio.qy.com
http:
paths:
- path: /
pathType: Prefix
backend:
# serviceName: prometheus
# servicePort: 9090
service:
name: prometheus-service
port:
number: 9090
root@k8s-master-9:~#
root@k8s-master-9:~# kubectl apply -f prometheus-ingress.yaml
ingress.networking.k8s.io/prometheus-ingress created
root@k8s-master-9:~#
3. Visualization: vim grafana-ingress.yaml
root@k8s-master-9:~# vim grafana-ingress.yaml
root@k8s-master-9:~# cat grafana-ingress.yaml
apiVersion: networking.k8s.io/v1
# apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana-ingress
namespace: istio-system
spec:
rules:
- host: grafana.istio.qy.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana-service
port:
number: 3000
root@k8s-master-9:~#
root@k8s-master-9:~# kubectl apply -f grafana-ingress.yaml
ingress.networking.k8s.io/grafana-ingress created
root@k8s-master-9:~#
The instructor's prometheus-ingress.yaml (written against the older API):
apiVersion: extension/v1beta1
kind: Ingress
metadata:
name: prometheus-ingress
namespace: istio-system
spec:
rules:
- host: prometheus.istio.qy.com
http:
paths:
- path: /
backend:
serviceName: prometheus
servicePort:9090
Applying it raised three problems
1. API version
Query which version the cluster actually serves for this resource:
[root@master ingress-controller]# kubectl explain ingress
KIND: Ingress
VERSION: networking.k8s.io/v1 // put this value into the manifest
# on my own machine
root@k8s-master-9:~# kubectl explain ingress
GROUP: networking.k8s.io
KIND: Ingress
VERSION: v1
2. Missing path type
Add 'pathType: Prefix' under path: /
3. Service port format
Replace 'serviceName: prometheus
servicePort: 9090' with:
service:
  name: prometheus-service
  port:
    number: 9090
With these three fixes, the manifest applies cleanly.
Check the ingresses
root@k8s-master-9:~# kubectl get ingress -n istio-system
NAME CLASS HOSTS ADDRESS PORTS AGE
grafana-ingress <none> grafana.istio.qy.com 80 23m
prometheus-ingress <none> prometheus.istio.qy.com 80 28m
root@k8s-master-9:~#
Check the service details
kubectl get svc -o wide -n istio-system
root@k8s-master-9:~# kubectl get svc -o wide -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
grafana ClusterIP 10.100.241.8 <none> 3000/TCP 4d app.kubernetes.io/instance=grafana,app.kubernetes.io/name=grafana
istio-egressgateway ClusterIP 10.96.64.255 <none> 80/TCP,443/TCP 4d app=istio-egressgateway,istio=egressgateway
istio-ingressgateway LoadBalancer 10.110.128.96 <pending> 15021:32436/TCP,80:30284/TCP,443:32098/TCP 4d app=istio-ingressgateway,istio=ingressgateway
jaeger-collector ClusterIP 10.107.122.233 <none> 14268/TCP,14250/TCP,9411/TCP,4317/TCP,4318/TCP 4d app=jaeger
kiali ClusterIP 10.97.20.237 <none> 20001/TCP 3d23h app.kubernetes.io/instance=kiali,app.kubernetes.io/name=kiali
prometheus ClusterIP 10.104.226.30 <none> 9090/TCP 4d app.kubernetes.io/component=server,app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=prometheus
tracing ClusterIP 10.109.232.69 <none> 80/TCP,16685/TCP 4d app=jaeger
zipkin ClusterIP 10.109.108.103 <none> 9411/TCP 4d app=jaeger
root@k8s-master-9:~#
Visit Prometheus (at the host configured above, prometheus.istio.qy.com)
Visit Grafana (grafana.istio.qy.com)
Project case study: Bookinfo
https://istio.io/latest/docs/tasks/traffic-management/
Understanding Bookinfo
This is the official Istio sample application. Its microservices are written in different languages and have no dependency on Istio, yet together they form a representative service mesh: multiple services, multiple languages, and a 'reviews' service with several versions.
The architecture consists of four microservices:
1. ProductPage
Written in Python.
Role:
a. Receives external requests
b. Fans the request out to the Reviews and Details services
2. Reviews
Written in Java.
Serves the book reviews, and in turn calls the Ratings service.
3. Ratings
Written in JavaScript (Node.js); serves the rating information attached to book reviews.
4. Details
Written in Ruby; serves the book details.
Auto-injecting the sidecar into the microservices
# List the namespaces
kubectl get ns
root@k8s-master-9:~# kubectl get ns
NAME STATUS AGE
default Active 7d18h
istio-system Active 4d3h
kube-node-lease Active 7d18h
kube-public Active 7d18h
kube-system Active 7d18h
my-istio-ns Active 3d7h
root@k8s-master-9:~#
# Create the namespace
kubectl create namespace bookinfo-ns
root@k8s-master-9:~# kubectl create namespace bookinfo-ns
namespace/bookinfo-ns created
root@k8s-master-9:~#
# Label the namespace to enable auto-injection: every resource deployed into it will get a sidecar
kubectl label namespace bookinfo-ns istio-injection=enabled
kubectl label namespace istio-system istio-injection=enabled
# Check which namespaces have auto-injection enabled
kubectl get namespace -L istio-injection
root@k8s-master-9:~# kubectl label namespace bookinfo-ns istio-injection=enabled
namespace/bookinfo-ns labeled
root@k8s-master-9:~#
root@k8s-master-8:/home/tools/istio-1.0.6/install/kubernetes/helm/istio/charts/sidecarInjectorWebhook/templates#
kubectl label namespace istio-system istio-injection=enabled --overwrite
namespace/istio-system labeled
Launching Bookinfo
# Go to the sample's installation directory
root@k8s-master-9:~# cd /home/tools/istio-1.21.1/samples/bookinfo/platform/kube/
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# ll
total 112
drwxr-xr-x 2 root root 4096 Apr 6 00:47 ./
drwxr-xr-x 3 root root 4096 Apr 6 00:47 ../
-rw-r--r-- 1 root root 914 Apr 6 00:47 bookinfo-certificate.yaml
-rw-r--r-- 1 root root 1387 Apr 6 00:47 bookinfo-db.yaml
-rw-r--r-- 1 root root 1530 Apr 6 00:47 bookinfo-details-dualstack.yaml
-rw-r--r-- 1 root root 1358 Apr 6 00:47 bookinfo-details-v2.yaml
-rw-r--r-- 1 root root 1468 Apr 6 00:47 bookinfo-details.yaml
-rw-r--r-- 1 root root 8047 Apr 6 00:47 bookinfo-dualstack.yaml
-rw-r--r-- 1 root root 1743 Apr 6 00:47 bookinfo-ingress.yaml
-rw-r--r-- 1 root root 2009 Apr 6 00:47 bookinfo-mysql.yaml
-rw-r--r-- 1 root root 8342 Apr 6 00:47 bookinfo-psa.yaml
-rw-r--r-- 1 root root 1050 Apr 6 00:47 bookinfo-ratings-discovery-dualstack.yaml
-rw-r--r-- 1 root root 988 Apr 6 00:47 bookinfo-ratings-discovery.yaml
-rw-r--r-- 1 root root 1530 Apr 6 00:47 bookinfo-ratings-dualstack.yaml
-rw-r--r-- 1 root root 1545 Apr 6 00:47 bookinfo-ratings-v2-mysql-vm.yaml
-rw-r--r-- 1 root root 1774 Apr 6 00:47 bookinfo-ratings-v2-mysql.yaml
-rw-r--r-- 1 root root 1876 Apr 6 00:47 bookinfo-ratings-v2.yaml
-rw-r--r-- 1 root root 1468 Apr 6 00:47 bookinfo-ratings.yaml
-rw-r--r-- 1 root root 1592 Apr 6 00:47 bookinfo-reviews-v2.yaml
-rw-r--r-- 1 root root 920 Apr 6 00:47 bookinfo-versions.yaml
-rw-r--r-- 1 root root 7799 Apr 6 00:47 bookinfo.yaml
-rwxr-xr-x 1 root root 2517 Apr 6 00:47 cleanup.sh*
-rw-r--r-- 1 root root 1026 Apr 6 00:47 productpage-nodeport.yaml
-rw-r--r-- 1 root root 137 Apr 6 00:47 README.md
-----------------------------------------------------------
root@k8s-master-8:~# cd /home/tools/istio-1.0.6/samples/bookinfo/platform/kube
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube# ll
total 100
drwxr-xr-x 3 root root 4096 Feb 9 2019 ./
drwxr-xr-x 4 root root 4096 Feb 9 2019 ../
-rw-r--r-- 1 root root 1412 Feb 9 2019 bookinfo-add-serviceaccount.yaml
-rw-r--r-- 1 root root 913 Feb 9 2019 bookinfo-certificate.yaml
-rw-r--r-- 1 root root 1128 Feb 9 2019 bookinfo-db.yaml
-rw-r--r-- 1 root root 1254 Feb 9 2019 bookinfo-details-v2.yaml
-rw-r--r-- 1 root root 1343 Feb 9 2019 bookinfo-details.yaml
-rw-r--r-- 1 root root 1368 Feb 9 2019 bookinfo-ingress.yaml
-rw-r--r-- 1 root root 1660 Feb 9 2019 bookinfo-mysql.yaml
-rw-r--r-- 1 root root 972 Feb 9 2019 bookinfo-ratings-discovery.yaml
-rw-r--r-- 1 root root 1423 Feb 9 2019 bookinfo-ratings-v2-mysql-vm.yaml
-rw-r--r-- 1 root root 1658 Feb 9 2019 bookinfo-ratings-v2-mysql.yaml
-rw-r--r-- 1 root root 1651 Feb 9 2019 bookinfo-ratings-v2.yaml
-rw-r--r-- 1 root root 1343 Feb 9 2019 bookinfo-ratings.yaml
-rw-r--r-- 1 root root 1186 Feb 9 2019 bookinfo-reviews-v2.yaml
-rw-r--r-- 1 root root 4359 Feb 9 2019 bookinfo.yaml
-rwxr-xr-x 1 root root 1596 Feb 9 2019 cleanup.sh*
-rw-r--r-- 1 root root 515 Feb 9 2019 istio-rbac-details-reviews.yaml
-rw-r--r-- 1 root root 958 Feb 9 2019 istio-rbac-enable.yaml
-rw-r--r-- 1 root root 556 Feb 9 2019 istio-rbac-namespace.yaml
-rw-r--r-- 1 root root 430 Feb 9 2019 istio-rbac-productpage.yaml
-rw-r--r-- 1 root root 450 Feb 9 2019 istio-rbac-ratings.yaml
drwxr-xr-x 2 root root 4096 Feb 9 2019 rbac/
-rw-r--r-- 1 root root 137 Feb 9 2019 README.md
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube#
# Check which images bookinfo.yaml references
cat bookinfo.yaml | grep image:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# cat bookinfo.yaml | grep image:
image: docker.io/istio/examples-bookinfo-details-v1:1.18.0
image: docker.io/istio/examples-bookinfo-ratings-v1:1.18.0
image: docker.io/istio/examples-bookinfo-reviews-v1:1.18.0
image: docker.io/istio/examples-bookinfo-reviews-v2:1.18.0
image: docker.io/istio/examples-bookinfo-reviews-v3:1.18.0
image: docker.io/istio/examples-bookinfo-productpage-v1:1.18.0
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
-----------------------------------
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube# cat bookinfo.yaml | grep image: # the images Bookinfo needs
image: istio/examples-bookinfo-details-v1:1.8.0
image: istio/examples-bookinfo-ratings-v1:1.8.0
image: istio/examples-bookinfo-reviews-v1:1.8.0
image: istio/examples-bookinfo-reviews-v2:1.8.0
image: istio/examples-bookinfo-reviews-v3:1.8.0
image: istio/examples-bookinfo-productpage-v1:1.8.0
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube#
# Deploy the application by applying bookinfo.yaml
kubectl apply -f bookinfo.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl apply -f bookinfo.yaml -n bookinfo-ns
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
# Check the pods just created in the bookinfo-ns namespace
kubectl get pods -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get pods -n bookinfo-ns
NAME READY STATUS RESTARTS AGE
details-v1-698d88b-26hwj 1/1 Running 0 4m
productpage-v1-675fc69cf-qv49k 1/1 Running 0 4m
ratings-v1-6484c4d9bb-fc9pf 1/1 Running 0 4m
reviews-v1-5b5d6494f4-mkszt 1/1 Running 0 4m
reviews-v2-5b667bcbf8-4lx6g 1/1 Running 0 4m
reviews-v3-5b9bd44f4-bhhlt 1/1 Running 0 4m
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
# Check which namespaces have auto-injection enabled
kubectl get namespace -L istio-injection
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
bookinfo-ns Active 32m enabled
default Active 7d19h
istio-system Active 4d4h
kube-node-lease Active 7d19h
kube-public Active 7d19h
kube-system Active 7d19h
my-istio-ns Active 3d8h enabled
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5
kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep policy:
policy: enabled
kubectl label namespace istio-system istio-injection=disabled --overwrite
Check which API group/version serves a given Kubernetes object
kubectl api-resources | grep deployment
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube# kubectl api-resources | grep deployment
deployments deploy apps/v1 true Deployment
root@k8s-master-8:/home/tools/istio-1.0.6/samples/bookinfo/platform/kube#
Check the running services
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get pods -n bookinfo-ns
NAME READY STATUS RESTARTS AGE
details-v1-698d88b-26hwj 1/1 Running 0 3h33m
productpage-v1-675fc69cf-qv49k 1/1 Running 0 3h33m
ratings-v1-6484c4d9bb-fc9pf 1/1 Running 0 3h33m
reviews-v1-5b5d6494f4-mkszt 1/1 Running 0 3h33m
reviews-v2-5b667bcbf8-4lx6g 1/1 Running 0 3h33m
reviews-v3-5b9bd44f4-bhhlt 1/1 Running 0 3h33m
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
Describe one of the pods
kubectl describe pods reviews-v3-5b9bd44f4-bhhlt -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl describe pods reviews-v3-5b9bd44f4-bhhlt -n bookinfo-ns
Name: reviews-v3-5b9bd44f4-bhhlt
Namespace: bookinfo-ns
Priority: 0
Service Account: bookinfo-reviews
Node: k8s-node-13/192.168.222.134
Start Time: Sun, 21 Apr 2024 16:28:30 +0800
Check the Services
kubectl get svc -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get svc -n bookinfo-ns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.110.20.58 <none> 9080/TCP 4h28m
productpage ClusterIP 10.108.159.224 <none> 9080/TCP 4h28m
ratings ClusterIP 10.103.171.95 <none> 9080/TCP 4h28m
reviews ClusterIP 10.100.72.248 <none> 9080/TCP 4h28m
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
All of the services are of type ClusterIP, i.e. reachable only from inside the cluster.
Verify that the Bookinfo application is running
Send a request to the application from inside one of the pods — here, via the ratings pod:
# To verify Bookinfo is up, curl productpage from inside the ratings container:
kubectl exec -it $(kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}') -c ratings -n bookinfo-ns -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title> — this output means the application is up
# Get the pod names
kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}'
kubectl get pod -l app=details -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}'
1.app=ratings
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}'
ratings-v1-6484c4d9bb-fc9pf
2.app=details
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get pod -l app=details -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}'
details-v1-698d88b-26hwj
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl get pods -n bookinfo-ns -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
details-v1-698d88b-26hwj 1/1 Running 0 4h57m 10.88.0.72 k8s-node-13 <none> <none>
productpage-v1-675fc69cf-qv49k 1/1 Running 0 4h57m 10.88.0.77 k8s-node-13 <none> <none>
ratings-v1-6484c4d9bb-fc9pf 1/1 Running 0 4h57m 10.88.0.75 k8s-node-13 <none> <none>
reviews-v1-5b5d6494f4-mkszt 1/1 Running 0 4h57m 10.88.0.74 k8s-node-13 <none> <none>
reviews-v2-5b667bcbf8-4lx6g 1/1 Running 0 4h57m 10.88.0.73 k8s-node-13 <none> <none>
reviews-v3-5b9bd44f4-bhhlt 1/1 Running 0 4h57m 10.88.0.76 k8s-node-13 <none> <none>
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube# kubectl exec -it $(kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}') -c ratings -n bookinfo-ns -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
command terminated with exit code 6
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/platform/kube#
Command breakdown
kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}'
: prints the name of the running ratings pod
kubectl exec -it $(kubectl get pod -l app=ratings -n bookinfo-ns -o jsonpath='{.items[0].metadata.name}') -c ratings -n bookinfo-ns -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
: execs into the ratings container, sends an HTTP request to productpage, and greps the title tag out of the response
(The 'command terminated with exit code 6' seen above is curl's could-not-resolve-host error: the ratings pod failed to resolve the productpage service name, which usually points at a cluster DNS problem.)
Access via Ingress: create productpage-ingress.yaml
# The instructor's version (older API)
root@k8s-master-9:~# vim productpage-ingress.yaml
root@k8s-master-9:~# cat productpage-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: productpage-ingress
spec:
rules:
- host: productpage.istio.longchi.xyz
http:
paths:
- path: /
backend:
serviceName: productpage
servicePort: 9080
root@k8s-master-9:~#
What this defines:
an Ingress resource named 'productpage-ingress' with the domain 'productpage.istio.longchi.xyz', bound to service port 9080.
# My version (networking.k8s.io/v1): vim productpage-ingress.yaml
root@k8s-master-9:~# cat productpage-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: productpage-ingress
namespace: bookinfo-ns
spec:
rules:
- host: productpage.istio.longchi.xyz
http:
paths:
- path: /
pathType: Prefix
backend:
service:
          name: productpage   # the Service created by bookinfo.yaml is named 'productpage' (see kubectl get svc above)
port:
number: 9080
root@k8s-master-9:~#
# See how the pods in bookinfo-ns are distributed across the nodes
kubectl get pods -o wide -n bookinfo-ns
root@k8s-master-9:~# kubectl get pods -o wide -n bookinfo-ns
NAME READY STATUS RESTARTS AGE IP NODE
details-v1-698d88b-26hwj 1/1 Running 1 (96m ago) 17h 10.88.0.81 k8s-node-13
productpage-v1-675fc69cf-qv49k 1/1 Running 1 (96m ago) 17h 10.88.0.92 k8s-node-13
ratings-v1-6484c4d9bb-fc9pf 1/1 Running 1 (96m ago) 17h 10.88.0.91 k8s-node-13
reviews-v1-5b5d6494f4-mkszt 1/1 Running 1 (96m ago) 17h 10.88.0.84 k8s-node-13
reviews-v2-5b667bcbf8-4lx6g 1/1 Running 1 (96m ago) 17h 10.88.0.95 k8s-node-13
reviews-v3-5b9bd44f4-bhhlt 1/1 Running 1 (96m ago) 17h 10.88.0.89 k8s-node-13
root@k8s-master-9:~#
# Apply it — be sure to include the namespace
kubectl apply -f productpage-ingress.yaml -n bookinfo-ns
root@k8s-master-9:~# kubectl apply -f productpage-ingress.yaml -n bookinfo-ns
ingress.networking.k8s.io/productpage-ingress created
root@k8s-master-9:~#
# Check the Ingress
kubectl get ingress -n bookinfo-ns
root@k8s-master-9:~# kubectl get ingress -n bookinfo-ns
NAME CLASS HOSTS ADDRESS PORTS AGE
productpage-ingress <none> productpage.istio.longchi.xyz 80 7m32s
root@k8s-master-9:~#
Find which node the productpage pod landed on:
kubectl get pods -o wide -n bookinfo-ns
The pod is on the k8s-node-13 machine.
Configure the hosts file:
192.168.222.134 productpage.istio.longchi.xyz
Apply the manifest:
kubectl apply -f productpage-ingress.yaml -n bookinfo-ns
Browse to productpage.istio.longchi.xyz — the page loads.
Accessing through Istio's ingressgateway
# Gateway access: first determine the gateway's IP and port
# Step 1: with Bookinfo running, go to the networking samples directory
root@k8s-master-9:~# cd /home/tools/istio-1.21.1/samples/bookinfo/networking/
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# ls
bookinfo-gateway.yaml fault-injection-details-v1.yaml virtual-service-ratings-test-abort.yaml virtual-service-reviews-test-v2.yaml
certmanager-gateway.yaml virtual-service-all-v1.yaml virtual-service-ratings-test-delay.yaml virtual-service-reviews-v2-v3.yaml
destination-rule-all-mtls.yaml virtual-service-details-v2.yaml virtual-service-reviews-50-v3.yaml virtual-service-reviews-v3.yaml
destination-rule-all.yaml virtual-service-ratings-db.yaml virtual-service-reviews-80-20.yaml
destination-rule-reviews.yaml virtual-service-ratings-mysql-vm.yaml virtual-service-reviews-90-10.yaml
egress-rule-google-apis.yaml virtual-service-ratings-mysql.yaml virtual-service-reviews-jason-v2-v3.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
Apply the gateway definition: kubectl apply -f bookinfo-gateway.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f bookinfo-gateway.yaml -n bookinfo-ns
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# Check the gateway — it has started normally
kubectl get gateway -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl get gateway -n bookinfo-ns
NAME AGE
bookinfo-gateway 2m5s
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
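For reference, this is roughly what bookinfo-gateway.yaml defines (abridged; exact fields vary between Istio versions): a Gateway bound to the default ingressgateway, plus a VirtualService that routes /productpage to the productpage service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080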
# Configure environment variables for the gateway
Determining the ingress IP and port
Bookinfo is up and running, but it must be reachable from outside the Kubernetes cluster — from a browser, say. The Istio Gateway is what makes that possible.
1. Define the ingress gateway for the application:
the file bookinfo-gateway.yaml lives in /home/tools/istio-1.21.1/samples/bookinfo/networking
kubectl apply -f bookinfo-gateway.yaml -n bookinfo-ns
2. Check the gateway:
kubectl get gateway -n bookinfo-ns
With the gateway in place, we configure a few environment variables.
Setting the gateway IP environment variable
1. Set the gateway IP:
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
# store the ingressgateway host IP in an environment variable
# (note the jsonpath key is '.items[0]', not '.item[0]' — with the typo, the variable ends up empty)
Breaking the command down:
kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'
# fetches the host IP of the Istio ingressgateway pod
Set the IP:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}')
Verify it:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl get po -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].status.hostIP}'
192.168.222.134
So 192.168.222.134 is the ingressgateway host IP.
2. Set the gateway port:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
# store the ingressgateway NodePort in an environment variable
Breaking the command down:
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
Set the port:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
Verify it:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
30284
# 30284 is the NodePort of the istio-ingressgateway service
Set the gateway address
# combine the IP and port into one gateway address
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
The gateway address is now set.
Check the port variable:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# env | grep INGRESS_PORT
INGRESS_PORT=30284
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
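With both variables set, the application should now be reachable through the ingress gateway from the command line (a sketch; the expected output is the page title seen earlier):
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
# <title>Simple Bookstore App</title>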
Summary:
Gateway vs. Ingress:
an Ingress requires a domain name,
while a Gateway needs no domain — an IP and a port are enough.
We can now reach the services either through the gateway or through an ingress.
So can we also manage Bookinfo's traffic? Let's see.
Traffic management
Unlocking custom routing for Bookinfo
# version-based traffic control
All traffic flows through the gateway, so we must first apply the gateway's routing configuration file. This file matters — it is what makes custom gateway routing rules possible — and it is 'destination-rule-all.yaml'.
Step 1: apply the 'destination-rule-all.yaml' routing configuration
kubectl apply -f destination-rule-all.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f destination-rule-all.yaml -n bookinfo-ns
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# Check the routing rules
kubectl get DestinationRule -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl get DestinationRule -n bookinfo-ns
NAME HOST AGE
details details 3m7s
productpage productpage 3m7s
ratings ratings 3m7s
reviews reviews 3m7s
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
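For reference, the reviews entry in destination-rule-all.yaml looks roughly like this (abridged): it names one subset per version label, and the VirtualServices below then route to these subsets.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3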
Each version of a microservice is addressed as its own subset.
Let's test version-based control: route all traffic to 'reviews v3' simply by applying 'virtual-service-reviews-v3.yaml':
kubectl apply -f virtual-service-reviews-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-reviews-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
Version-based traffic control is now in place.
Visiting the service now always returns the configured v3 version.
删除基于版本控制流量的规则
kubectl delete -f virtual-service-reviews-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl delete -f virtual-service-reviews-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io "reviews" deleted
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时基于版本控制流量规则已经删除
此时在访问服务可以看到版本切换效果
查看该文件 'virtual-service-reviews-v3.yaml' 具体内容
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat virtual-service-reviews-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v3
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
基于权重的流量版本控制
istio 给我们提供了一个基于权重来控制流量的 yaml 文件,找到该文件 'virtual-service-reviews-50-v3.yaml'
查看该文件 'virtual-service-reviews-50-v3.yaml' 具体内容
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# vim virtual-service-reviews-50-v3.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat virtual-service-reviews-50-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
执行基于权重这个文件
kubectl apply -f virtual-service-reviews-50-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-reviews-50-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时所有流量都会在 v1 和 v3 版本之间切换
删除基于权重的流量版本控制
kubectl delete -f virtual-service-reviews-50-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl delete -f virtual-service-reviews-50-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io "reviews" deleted
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时已经删除基于权重的流量版本控制
此时再访问服务 三个版本都会出现,即此时又恢复到正常的流量切换
基于用户来控制流量版本
此我希望A1用户进来是 v1 版本,B1用户 进来的时候是 v3 版本,此时该如何设置?
在 bookinfo 里面官方也为我们准备了 一个 基于 用户来控制流量版本 的配置文件 'virtual-service-reviews-jason-v2-v3.yaml'
查看该配置文件 'virtual-service-reviews-jason-v2-v3.yaml' 的具体内容
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat virtual-service-reviews-jason-v2-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v3
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# 通过在头部增加一个 'jason' 来界定流量走向,当头部包含 'jason'的时候,就走 v2 版本,否则就是 v3 版本
下面我们也来执行一下这个配置文件
kubectl apply -f virtual-service-reviews-jason-v2-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-reviews-jason-v2-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时规则创建成功
测试,刷新一下页面,头部没有 'jason',此时所有的流量都转发给 v3 版本
删除基于用户来控制流量版本
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl delete -f virtual-service-reviews-jason-v2-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io "reviews" deleted
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时又恢复到正常情况下
故障注入
下面演示 bookinfo 弹性
执行 'test.yaml' 配置文件
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# vim test.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat test.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- fault:
delay:
percentage:
value: 50.0
fixedDelay: 2s
route:
- destination:
host: reviews
subset: v3
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f test.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews created # 表示设置成功
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# 测试
删除故障注入
kubectl delete -f test.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl delete -f test.yaml -n bookinfo-ns
virtualservice.networking.istio.io "reviews" deleted
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# 此时故障注入已经删除,页面访问正常
流量的迁移 完成以下步骤,就可以完成版本的迁移
场景:希望将流量从一个版本的微服务逐渐迁移到另外一个版本上面,在 istio 里面,我们可以通过配置一系列规则来实现这个目标。这些规则可以将一定的百分比流量路由到另外一个版本上去。我们可以把50%的流量发送到 v1 版本,然后再将 另外的 50% 流量发送到 v3 版本。最后等 v3 版本稳定后,再把100%的流量发送到 v3 版本,这样我们就可以完成版本的迁移。
第一步:
配置文件 'virtual-service-all-v1.yaml' 是让所有的流量都到 v1 版本 我们来执行一下这个脚本
kubectl apply -f virtual-service-all-v1.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-all-v1.yaml -n bookinfo-ns
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# 配置文件 'virtual-service-all-v1.yaml' 是让所有的流量都到 v1 版本
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat virtual-service-all-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: productpage
spec:
hosts:
- productpage
http:
- route:
- destination:
host: productpage
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ratings
spec:
hosts:
- ratings
http:
- route:
- destination:
host: ratings
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: details
spec:
hosts:
- details
http:
- route:
- destination:
host: details
subset: v1
---
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
第二步:
# 流量会在 v1 版本 和 v3 版本之间切换
# 将 v1 版本50% 的流量迁移到 v3 版本配置文件 'virtual-service-reviews-50-v3.yaml' 脚本,执行命令
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-reviews-50-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews configured
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# ls
bookinfo-gateway.yaml fault-injection-details-v1.yaml virtual-service-ratings-mysql.yaml virtual-service-reviews-jason-v2-v3.yaml
certmanager-gateway.yaml test.yaml virtual-service-ratings-test-abort.yaml virtual-service-reviews-test-v2.yaml
destination-rule-all-mtls.yaml virtual-service-all-v1.yaml virtual-service-ratings-test-delay.yaml virtual-service-reviews-v2-v3.yaml
destination-rule-all.yaml virtual-service-details-v2.yaml virtual-service-reviews-50-v3.yaml virtual-service-reviews-v3.yaml
destination-rule-reviews.yaml virtual-service-ratings-db.yaml virtual-service-reviews-80-20.yaml
egress-rule-google-apis.yaml virtual-service-ratings-mysql-vm.yaml virtual-service-reviews-90-10.yaml
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
# 将 v1 版本 50% 的流量迁移到 v3 版本 脚本内容如下:
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# cat virtual-service-reviews-50-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
第三步:
# v3 版本稳定后,将流量从 v1 版本 全部 切换到 v3 版本 配置文件 'virtual-service-reviews-v3.yaml'脚本,执行命令,执行完这一步,就可以完成版本的迁移。
kubectl apply -f virtual-service-reviews-v3.yaml -n bookinfo-ns
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking# kubectl apply -f virtual-service-reviews-v3.yaml -n bookinfo-ns
virtualservice.networking.istio.io/reviews configured
root@k8s-master-9:/home/tools/istio-1.21.1/samples/bookinfo/networking#
此时已经完成流量迁移
体验 Istio 的 Observe(观察) 监控 bookinfo Mixer Dashboard
首先我们要来执行日志采集,下面我们来采集指标,自动为 istio 生成应用信息 'metrics-crd.yaml'脚本文件
1.创建一个新的 YAML 文件,用来保存 Istio 将自动生成和收集的新度量标准和日志流量配置
root@k8s-master-9:~# vim metrics-crd.yaml
root@k8s-master-9:~# cat metrics-crd.yaml
# Configuration for metrics instances
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
name: doublerequestcount
namespace: istio-system
spec:
value: "2" # count each request twice
dimensions:
reporter: conditional((context.reporter.kind | "inbound") == "outbound","client","server")
source: source.workload.name | "unknown"
destination: destination.workload.name | "unknown"
message: '"twice the fun!"'
monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
name: doublehandler
namespace: istio-system
spec:
metrics:
root@k8s-master-9:~#
kubectl apply -f metrics-crd.yaml
instance.config.istio.io/doublerequestcount created
prometheus.config.istio.io/doublehandler created
rule.config.istio.io/doublerom created
检查
kubectl get instance -n istio-system
NAME AGE
doublerequestcount 20s
总结:
docker 可以把环境打包成一个镜像,所以很多环境都喜欢用 docker
分布式环境又出现管理 容器的问题 管理容器 涉及网络,持久化,监控等 用 kubernetes
kubernetes 可以很好的容器编排,它也是云服务基于设施平台,云原生可以帮我们解决服务的部署,日志监控,服务的控制等,
又出现了另一个问题:基于 kubernetes 的服务的管理,kubernetes只提供了服务的通信功能,就是 pod 与 pod 之间可以进行通信。但是服务的治理 kubernetes 是不具备的,
那么如何解决服务的治理问题: service mesh
service mesh 就是解决服务与服务通信问题,并且服务的治理可以与我们的业务代码脱离开来