Jerry Wang's Kyma Study Notes - 2019-05-12

In a nutshell

Kyma allows you to connect applications and third-party services in a cloud-native environment. Use it to create extensions for the existing systems, regardless of the language they are written in. Customize extensions with minimum effort and time devoted to learning their configuration details.

With Kyma in hand, you can focus purely on coding since it ensures the following out-of-the-box functionalities:

  • Service-to-service communication and proxying (Istio Service Mesh)

  • In-built monitoring, tracing, and logging (Grafana, Prometheus, Jaeger, Loki)

  • Secure authentication and authorization (Dex, Service Identity, TLS, Role Based Access Control)

  • The catalog of services to choose from (Service Catalog, Service Brokers)

  • The development platform to run lightweight functions in a cost-efficient and scalable way (Serverless, Kubeless)

  • The endpoint to register Events and APIs of external applications (Application Connector)

  • The messaging channel to receive Events, enrich them, and trigger business flows using lambdas or services (Event Bus, NATS)

  • CLI supported by the intuitive UI (Console)

Main features

Major open-source and cloud-native projects, such as Istio, NATS, Kubeless, and Prometheus, constitute the cornerstone of Kyma. Its uniqueness, however, lies in the “glue” that holds these components together. Kyma collects those cutting-edge solutions in one place and combines them with the in-house developed features that allow you to connect and extend your enterprise applications easily and intuitively.

Kyma allows you to extend and customize the functionality of your products in a quick and modern way, using serverless computing or microservice architecture. The extensions and customizations you create are decoupled from the core applications, which means that:

  • Deployments are quick.

  • Scaling is independent from the core applications.

  • The changes you make can be easily reverted without causing downtime of the production system.

Last but not least, Kyma is highly cost-efficient. All Kyma native components and the connected open-source tools are written in Go. It ensures low memory consumption and reduced maintenance costs compared to applications written in other programming languages such as Java.

Technology stack

The entire solution is containerized and runs on a [Kubernetes]{.underline} cluster. Customers can access it easily using a single sign-on solution based on the [Dex]{.underline} identity provider, integrated with any [OpenID Connect]{.underline}-compliant identity provider or a SAML2-based enterprise authentication server.

The communication between services is handled by the [Istio]{.underline} Service Mesh component which enables security, traffic management, routing, resilience (retry, circuit breaker, timeouts), monitoring, and tracing without the need to change the application code. Build your applications using services provisioned by one of the many Service Brokers compatible with the [Open Service Broker API]{.underline}, and monitor the speed and efficiency of your solutions using [Prometheus]{.underline}, which gives you the most accurate and up-to-date monitoring data.
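The resilience features listed here (retry, circuit breaker, timeouts) are configured declaratively in Istio rather than in application code. As an illustration only — the service name and values below are hypothetical, not taken from Kyma — a VirtualService that adds retries and a timeout to HTTP traffic might look like this:

```yaml
# Hypothetical example: retries and a timeout for traffic to "my-service".
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
    retries:
      attempts: 3        # retry a failed request up to three times
      perTryTimeout: 2s  # each attempt may take at most two seconds
    timeout: 10s         # overall request deadline
```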

Key components

Kyma is built of numerous components but these three drive it forward:

  • Application Connector:

    • Simplifies and secures the connection between external systems and Kyma

    • Registers external Events and APIs in the Service Catalog and simplifies the API usage

    • Provides asynchronous communication with services and lambdas deployed in Kyma through Events

    • Manages secure access to external systems

    • Provides monitoring and tracing capabilities to facilitate operational aspects

  • Serverless:

    • Ensures quick deployments following a lambda function approach

    • Enables scaling independent of the core applications

    • Makes it possible to revert changes without causing production system downtime

    • Supports the complete asynchronous programming model

    • Offers loose coupling of Event providers and consumers

    • Enables flexible application scalability and availability

  • Service Catalog:

    • Connects services from external sources

    • Unifies the consumption of internal and external services thanks to compliance with the Open Service Broker standard

    • Provides a standardized approach to managing the API consumption and access

    • Eases the development effort by providing a catalog of API and Event documentation to support automatic client code generation

This basic use case shows how the three components work together in Kyma.

Kyma and Knative - brothers in arms

Integration with Knative is a step towards Kyma modularization and the “slimming” approach which aims to extract some out-of-the-box components and provide you with a more flexible choice of tools to use in Kyma.

Both Kyma and Knative are Kubernetes and Istio-based systems that offer development and eventing platforms. The main difference, however, is their focus. While Knative concentrates more on providing the building blocks for running serverless workloads, Kyma focuses on integrating those blocks with external services and applications.

A diagram in the Kyma documentation shows the dependencies between the components.

Kyma and Knative cooperation focuses on replacing Kyma eventing with Knative eventing, and Kyma Serverless with Knative serving.

How to start

Minikube allows you to run Kyma locally, develop, and test your solutions on a small scale before you push them to a cluster. With the Installation and Getting Started guides at hand, you can start developing in a matter of minutes.

Read, learn, and try on your own.

Components

Kyma is built on the foundation of the best and most advanced open-source projects which make up the components readily available for customers to use. This section describes the Kyma components.

Service Catalog

The Service Catalog lists all of the services available to Kyma users through the registered [Service Brokers]{.underline}. Use the Service Catalog to provision new services in the Kyma [Kubernetes]{.underline} cluster and create bindings between the provisioned service and an application.
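Provisioning and binding are driven by Service Catalog resources. A sketch of the two objects involved, with hypothetical instance, class, and plan names (the real names come from a registered Service Broker):

```yaml
# Hypothetical names; the class and plan are exposed by a registered Service Broker.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-instance
spec:
  clusterServiceClassExternalName: example-class
  clusterServicePlanExternalName: default
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-binding
spec:
  instanceRef:
    name: my-instance   # bind to the instance provisioned above
```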

Service Mesh

The Service Mesh is an infrastructure layer that handles service-to-service communication, proxying, service discovery, traceability, and security independent of the code of the services. Kyma uses the [Istio]{.underline} Service Mesh that is customized for the specific needs of the implementation.

Security

Kyma security enforces RBAC (Role Based Access Control) in the cluster. [Dex]{.underline} handles the identity management and identity provider integration. It allows you to integrate any [OpenID Connect]{.underline} or SAML2-compliant identity provider with Kyma using [connectors]{.underline}. Additionally, Dex provides a static user store which gives you more flexibility when managing access to your cluster.
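The static user store mentioned above is part of the Dex configuration. A minimal sketch, assuming a bcrypt-hashed password — all values below are placeholders:

```yaml
# Placeholder entry for the Dex static user store.
staticPasswords:
- email: "admin@example.com"
  hash: "$2a$10$placeholderplaceholderplaceh"  # bcrypt hash of the password (placeholder)
  username: "admin"
  userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
```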

Helm Broker

The Helm Broker is a Service Broker which runs in the Kyma cluster and deploys Kubernetes native resources using [Helm]{.underline} and Kyma bundles. A bundle is an abstraction layer over a Helm chart which allows you to represent it as a ClusterServiceClass in the Service Catalog. Use bundles to install the [GCP Broker]{.underline} and the [Azure Service Broker]{.underline} in Kyma.

Application Connector

The Application Connector is a proprietary Kyma solution. This endpoint is the Kyma side of the connection between Kyma and the external solutions. The Application Connector allows you to register the APIs of the connected solution and its Event Catalog, which lists all of the available events. Additionally, the Application Connector proxies the calls from Kyma to external APIs in a secure way.

Event Bus

Kyma Event Bus receives Events from external solutions and triggers the business logic created with lambda functions and services in Kyma. The Event Bus is based on the [NATS Streaming]{.underline} open source messaging system for cloud-native applications.

Serverless

The Kyma Serverless component allows you to reduce the implementation and operation effort of an application to the absolute minimum. Kyma Serverless provides a platform to run lightweight functions in a cost-efficient and scalable way using JavaScript and Node.js. Kyma Serverless is built on the [Kubeless]{.underline} framework, which allows you to deploy lambda functions, and uses the [NATS]{.underline} messaging system that monitors business events and triggers functions accordingly.
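With Kubeless underneath, a lambda is described as a Function custom resource. A minimal sketch with hypothetical names — the exact fields depend on the Kubeless version in use:

```yaml
# Hypothetical minimal Function resource for the Kubeless runtime.
apiVersion: kubeless.io/v1beta1
kind: Function
metadata:
  name: hello
spec:
  runtime: nodejs8          # Node.js runtime, as used by Kyma Serverless
  handler: handler.hello    # <file>.<function> to invoke
  function: |
    module.exports = {
      hello: (event, context) => "hello world"
    };
```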

Monitoring

Kyma comes bundled with tools that give you the most accurate and up-to-date monitoring data. [Prometheus]{.underline} open source monitoring and alerting toolkit provides this data, which is consumed by different add-ons, including [Grafana]{.underline} for analytics and monitoring, and [Alertmanager]{.underline} for handling alerts.

Tracing

The tracing in Kyma uses the [Jaeger]{.underline} distributed tracing system. Use it to analyze performance by scrutinizing the path of the requests sent to and from your service. This information helps you optimize the latency and performance of your solution.

Logging

Logging in Kyma uses [Loki]{.underline}, a Prometheus-like log management system.

Namespaces

A Namespace is a security and organizational unit which allows you to divide the cluster into smaller units to use for different purposes, such as development and testing.

Namespaces available for users are marked with the env: "true" label. The Kyma UI only displays the Namespaces marked with the env: "true" label.

Default Kyma Namespaces

Kyma comes configured with default Namespaces dedicated for system-related purposes. The user cannot modify or remove any of these Namespaces.

  • kyma-system - This Namespace contains all of the Kyma Core components.

  • kyma-integration - This Namespace contains all of the Application Connector components responsible for the integration of Kyma and external solutions.

  • kyma-installer - This Namespace contains all of the Kyma Installer components, objects, and Secrets.

  • istio-system - This Namespace contains all of the Istio-related components.

Namespaces for users in Kyma

Kyma comes with three Namespaces ready for you to use.

  • production

  • qa

  • stage

Create a new Namespace for users

Create a Namespace and mark it with the env: "true" label to make it available for Kyma users. Use this command to do that in a single step:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    env: "true"
EOF

Initially, the system deploys two template roles: kyma-reader-role and kyma-admin-role. The controller finds the template roles by filtering the roles available in the kyma-system Namespace by the label env: "true". The controller copies these roles into the newly created Namespace.

Testing Kyma

For testing, the Kyma components use the Helm test concept. Place your test under the templates directory as a Pod definition that specifies a container with a given command to run.

Add a new test

The system bases tests on the Helm test concept with one modification: adding a Pod label. Before you create a test, see the official [Chart Tests]{.underline} documentation. Then, add the "helm-chart-test": "true" label to your Pod template.

See the following example of a test prepared for Dex:

# Chart tree
dex
├── Chart.yaml
├── README.md
├── templates
│   ├── tests
│   │   └── test-dex-connection.yaml
│   ├── dex-deployment.yaml
│   ├── dex-ingress.yaml
│   ├── dex-rbac-role.yaml
│   ├── dex-service.yaml
│   ├── pre-install-dex-account.yaml
│   ├── pre-install-dex-config-map.yaml
│   └── pre-install-dex-secrets.yaml
└── values.yaml

The test adds a new test-dex-connection.yaml file under the templates/tests directory. This simple test calls the Dex endpoint with cURL and is defined as follows:

apiVersion: v1
kind: Pod
metadata:
  name: "test-{{ template "fullname" . }}-connection-dex"
  annotations:
    "helm.sh/hook": test-success
  labels:
    "helm-chart-test": "true" # ! Our customization
spec:
  hostNetwork: true
  containers:
  - name: "test-{{ template "fullname" . }}-connection-dex"
    image: tutum/curl:alpine
    command: ["/usr/bin/curl"]
    args: [
      "--fail",
      "http://dex-service.{{ .Release.Namespace }}.svc.cluster.local:5556/.well-known/openid-configuration"
    ]
  restartPolicy: Never

Test execution

All tests created for charts under /resources/core/ run automatically after starting Kyma. If any of the tests fails, the system prints the Pod logs in the terminal, then deletes all the Pods.

NOTE: If you run Kyma locally, the system does not take the test's exit code into account by default. As a result, the system does not terminate the Kyma Docker container, and you can still access it. To force termination when tests fail, use the --exit-on-test-fail flag when executing the run.sh script.

CI propagates the exit status of tests. If any test fails, the whole CI job fails as well.
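The pass/fail decision reduces to the exit status of the test command. A generic shell sketch — not Kyma code — of how a non-zero exit code turns into a failed run:

```shell
# Simulate a failing Helm test Pod: the command's exit status decides the result.
run_test() { return 1; }  # non-zero exit code = failure

if run_test; then
  echo "tests passed"
else
  echo "tests failed"   # CI propagates this failure to the whole job
fi
```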

Follow the same guidelines to add a test which is not a part of any core component. However, for test execution, see the Run a test manually section in this document.

Run a test manually

To run a test manually, use the testing.sh script located in the /installation/scripts/ directory which runs all tests defined for core releases. If any of the tests fails, the system prints the Pod logs in the terminal, then deletes all the Pods.

Another option is to run a Helm test directly on your release.


helm test {your_release_name}

You can also run your test on custom releases. If you do this, remember to always delete the Pods after a test ends.

Charts

Kyma uses Helm charts to deliver single components and extensions, as well as the core components. This document contains information about the chart-related technical concepts, dependency management to use with Helm charts, and chart examples.

Manage dependencies with Init Containers

The ADR 003: Init Containers for dependency management document declares the use of Init Containers as the primary dependency mechanism.

[Init Containers]{.underline} present a set of distinctive behaviors:

  • They always run to completion.

  • They start sequentially, only after the preceding Init Container completes successfully. If any of the Init Containers fails, the Pod restarts. This is always true, unless the restartPolicy is set to Never.

[Readiness Probes]{.underline} ensure that the essential containers are ready to handle requests before you expose them. At a minimum, probes are defined for every container accessible from outside of the Pod. It is recommended to pair the Init Containers with readiness probes to provide a basic dependency management solution.

Examples

Here are some examples:

  1. Generic

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup nginx; do echo waiting for nginx; sleep 2; done;']
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']

  2. Kyma

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helm-broker
  labels:
    app: helm-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm-broker
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: helm-broker
    spec:
      initContainers:
      - name: init-helm-broker
        image: eu.gcr.io/kyma-project/alpine-net:0.2.74
        command: ['sh', '-c', 'until nc -zv service-catalog-controller-manager.kyma-system.svc.cluster.local 8080; do echo waiting for etcd service; sleep 2; done;']
      containers:
      - name: helm-broker
        ports:
        - containerPort: 6699
        readinessProbe:
          tcpSocket:
            port: 6699
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 2

Support for the Helm wait flag

High-level Kyma components, such as core, come as Helm charts installed as part of a single Helm release. To provide ordering for these core components, the Helm client runs with the --wait flag. As a result, Tiller waits until all of the components report readiness before it marks the release as deployed.

For Deployments, set the strategy to RollingUpdate and set the MaxUnavailable value to a number lower than the number of replicas. This setting is necessary, as readiness in Helm v2.10.0 is fulfilled if the number of replicas in ready state is not lower than the expected number of replicas:


ReadyReplicas >= TotalReplicas - MaxUnavailable
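The condition can be checked with simple arithmetic. A sketch with sample values (3 replicas and maxUnavailable set to 0, as in the Helm Broker chart above):

```shell
# Evaluate Helm's readiness condition for sample values.
TOTAL=3            # spec.replicas
MAX_UNAVAILABLE=0  # rollingUpdate.maxUnavailable
READY=3            # readyReplicas reported by the Deployment

if [ "$READY" -ge "$((TOTAL - MAX_UNAVAILABLE))" ]; then
  echo "release considered ready"
else
  echo "Tiller keeps waiting"
fi
```

Note that with MAX_UNAVAILABLE=1 the condition would already hold at READY=2, which is why setting maxUnavailable to a value lower than the replica count matters.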

Chart installation details

The Tiller server performs the chart installation process. This is the order of operations that happen during the chart installation:

  • resolve values

  • recursively gather all templates with the corresponding values

  • sort all templates

  • render all templates

  • separate hooks and manifests from files into sorted lists

  • aggregate all valid manifests from all sub-charts into a single manifest file

  • execute PreInstall hooks

  • create a release using the ReleaseModule API and, if requested, wait for the actual readiness of the resources

  • execute PostInstall hooks

Notes

All notes are based on Helm v2.10.0 implementation and are subject to change in future releases.

  • Regardless of how complex a chart is, and regardless of the number of sub-charts it references or consists of, it's always evaluated as one. This means that each Helm release is compiled into a single Kubernetes manifest file when applied on the API server.

  • Hooks are parsed in the same order as manifest files and returned as a single, global list for the entire chart. For each hook the weight is calculated as a part of this sort.

  • Manifests are sorted by Kind. You can find the list and the order of the resources on the Kubernetes [Tiller]{.underline} website.

Glossary

  • resource is any document in a chart recognized by Helm or Tiller. This includes manifests, hooks, and notes.

  • template is a valid Go template. Many of the resources are also Go templates.

Deploy with a private Docker registry

Docker is a free tool to deploy applications and servers. To run an application on Kyma, provide the application binary file as a Docker image located in a Docker registry. Use the DockerHub public registry to upload your Docker images for free access to the public. Use a private Docker registry to ensure privacy, increased security, and better availability.

This document shows how to deploy a Docker image from your private Docker registry to the Kyma cluster.

Details

The deployment to Kyma from a private registry differs from the deployment from a public registry. You must provide Secrets accessible in Kyma, and referenced in the .yaml deployment file. This section describes how to deploy an image from a private Docker registry to Kyma. Follow the deployment steps:

  1. Create a Secret resource.

  2. Write your deployment file.

  3. Submit the file to the Kyma cluster.

Create a Secret for your private registry

A Secret resource passes your Docker registry credentials to the Kyma cluster in an encrypted form. For more information on Secrets, refer to the [Kubernetes documentation]{.underline}.

To create a Secret resource for your Docker registry, run the following command:


kubectl create secret docker-registry {secret-name} --docker-server={registry FQN} --docker-username={user-name} --docker-password={password} --docker-email={registry-email} --namespace={namespace}

Refer to the following example:


kubectl create secret docker-registry docker-registry-secret --docker-server=myregistry:5000 --docker-username=root --docker-password=password --docker-email=example@github.com --namespace=production

The Secret is associated with a specific Namespace. In the example, the Namespace is production. However, you can modify the Secret to point to any desired Namespace.
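For reference, the command creates a Secret of type kubernetes.io/dockerconfigjson; the data value below is a placeholder for the base64-encoded Docker configuration, not real content:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-secret
  namespace: production
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {base64-encoded Docker config JSON}
```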

Write your deployment file

  1. Create the deployment file with the .yml extension and name it deployment.yml.

  2. Describe your deployment in the .yml file. Refer to the following example:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: production # {production/stage/qa}
  name: my-deployment # Specify the deployment name.
  annotations:
    sidecar.istio.io/inject: "true"
spec:
  replicas: 3 # Specify your replica - how many instances you want from that deployment.
  selector:
    matchLabels:
      app: app-name # Specify the app label. It is optional but it is a good practice.
  template:
    metadata:
      labels:
        app: app-name # Specify the app label. It is optional but it is a good practice.
        version: v1 # Specify your version.
    spec:
      containers:
      - name: container-name # Specify a meaningful container name.
        image: myregistry:5000/user-name/image-name:latest # Specify your image {registry FQN/your-username/your-space/image-name:image-version}.
        ports:
        - containerPort: 80 # Specify the port to your image.
      imagePullSecrets:
      - name: docker-registry-secret # Specify the same Secret name you generated in the previous step for this Namespace.
      - name: example-secret-name # Specify your Namespace Secret, named `example-secret-name`.

  3. Submit your deployment file using this command:


kubectl apply -f deployment.yml

Your deployment is now running on the Kyma cluster.

Overview

Kyma is a complex tool which consists of many different [components]{.underline} that provide various functionalities to extend your application. This entails high technical requirements that can influence your local development process. To meet customer needs, Kyma is modular: you can decide not to include a given component in the Kyma installation, or install it after the Kyma installation process.

To make the local development process easier, we introduced the Kyma Lite concept in which case some components are not included in the local installation process by default. These are the Kyma and Kyma Lite components:

Component                 Kyma   Kyma Lite
core                      ✅     ✅
cms                       ✅     ✅
cluster-essentials        ✅     ✅
application-connector     ✅     ✅
ark                       ✅     ⛔️
assetstore                ✅     ✅
dex                       ✅     ✅
helm-broker               ✅     ✅
istio                     ✅     ✅
istio-kyma-patch          ✅     ✅
jaeger                    ✅     ⛔️
logging                   ✅     ⛔️
monitoring                ✅     ⛔️
prometheus-operator       ✅     ⛔️
service-catalog           ✅     ✅
service-catalog-addons    ✅     ✅
nats-streaming            ✅     ✅

Installation guides

Follow these installation guides to install Kyma locally or on a cluster.

Read the rest of the installation documents to learn more.

NOTE: Make sure to check whether the version of the documentation shown in the left pane of kyma-project.io is compatible with your Kyma version.

Install Kyma locally

This Installation guide shows developers how to quickly deploy Kyma locally on the MacOS and Linux platforms. Kyma is installed locally using a proprietary installer based on a [Kubernetes operator]{.underline}. The document provides prerequisites, instructions on how to install Kyma locally and verify the deployment, as well as troubleshooting tips.

Prerequisites

To run Kyma locally, clone [this]{.underline} Git repository to your machine and check out the latest release.

Additionally, download these tools:

Virtualization:

NOTE: To work with Kyma, use only the provided scripts and commands. Kyma does not work on a basic Minikube cluster that you can start using the minikube start command.

Set up certificates

Kyma comes with a local wildcard self-signed server.crt certificate that you can find under the /installation/certs/workspace/raw/ directory of the kyma repository. Trust it on the OS level for convenience.

Follow these steps to “always trust” the Kyma certificate on MacOS:

  1. Change the working directory to installation:


cd installation

  2. Run this command:


sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certs/workspace/raw/server.crt

NOTE: “Always trusting” the certificate does not work with Mozilla Firefox.

To access the Application Connector and connect an external solution to the local deployment of Kyma, you must add the certificate to the trusted certificate storage of your programming environment. Read [this]{.underline} document to learn more.

Install Kyma

You can install Kyma either with all core subcomponents or only with the selected ones. This section describes how to install Kyma with all core subcomponents. Read [this]{.underline} document to learn how to install only the selected subcomponents.

CAUTION: Running the installation script deletes any previously existing cluster from your Minikube.

NOTE: Logging and Monitoring subcomponents are not included by default when you install Kyma on Minikube. You can install them using the instructions provided [here]{.underline}.

Follow these instructions to install Kyma from a release or from local sources:

  • From a release

  • From sources

  1. Change the working directory to installation:


cd installation

  2. Use the following command to run Kubernetes locally using Minikube:


./scripts/minikube.sh --domain "kyma.local" --vm-driver "hyperkit"

  3. Wait until the kube-dns Pod is ready. Run this script to set up Tiller:


./scripts/install-tiller.sh

  4. Go to [this]{.underline} page and choose the latest release.

  5. Export the release version as an environment variable. Run:


export LATEST={KYMA_RELEASE_VERSION}

  6. Deploy the Kyma Installer in your cluster from the $LATEST release:


kubectl apply -f https://github.com/kyma-project/kyma/releases/download/$LATEST/kyma-installer-local.yaml

  7. Configure the Kyma installation using the local configuration file from the $LATEST release:


wget -qO- https://github.com/kyma-project/kyma/releases/download/$LATEST/kyma-config-local.yaml | sed "s/minikubeIP: \"\"/minikubeIP: \"$(minikube ip)\"/g" | kubectl apply -f -
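The sed filter only fills in the empty minikubeIP field of the downloaded configuration. Its effect can be previewed in isolation with a sample line and a placeholder IP (on a real setup, $(minikube ip) supplies the address):

```shell
# Placeholder IP for illustration; the real command substitutes $(minikube ip).
echo 'minikubeIP: ""' | sed 's/minikubeIP: ""/minikubeIP: "192.168.64.2"/g'
# prints: minikubeIP: "192.168.64.2"
```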

  8. To trigger the installation process, label the kyma-installation custom resource:


kubectl label installation/kyma-installation action=install

  9. By default, the Kyma installation is a background process, which allows you to perform other tasks in the terminal window. Nevertheless, you can track the progress of the installation by running this script:


./scripts/is-installed.sh

Read [this]{.underline} document to learn how to reinstall Kyma without deleting the cluster from Minikube. To learn how to test Kyma, see [this]{.underline} document.

Verify the deployment

Follow the guidelines in the subsections to confirm that your Kubernetes API Server is up and running as expected.

Verify the installation status using the is-installed.sh script

The is-installed.sh script is designed to give you clear information about the Kyma installation. Run it at any point to get the current installation status, or to find out whether the installation is successful.

If the script indicates that the installation failed, try to install Kyma again by re-running the run.sh script.

If the installation fails in a reproducible manner, don’t hesitate to create a [GitHub]{.underline} issue in the project or reach out to the [“installation” Slack channel]{.underline} to get direct support from the community.

Access the Kyma console

Access your local Kyma instance through [this]{.underline} link.

  • Click Login with Email and sign in with the admin@kyma.cx email address. Use the password contained in the admin-user Secret located in the kyma-system Namespace. To get the password, run:


kubectl get secret admin-user -n kyma-system -o jsonpath="{.data.password}" | base64 -D
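The jsonpath output is base64-encoded, which is why the command pipes it through base64 (macOS uses -D; GNU base64 uses -d or --decode). A self-contained illustration with a sample value, not a real password:

```shell
# "c2VjcmV0LXBhc3N3b3Jk" is the base64 encoding of the sample string below.
printf 'c2VjcmV0LXBhc3N3b3Jk' | base64 --decode
# prints: secret-password
```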

  • Click the Namespaces section and select a Namespace from the drop-down menu to explore Kyma further.

Enable Horizontal Pod Autoscaler (HPA)

By default, the Horizontal Pod Autoscaler (HPA) is not enabled in your local Kyma installation, so you need to enable it manually.

Kyma uses the autoscaling/v1 stable version, which supports only CPU autoscaling. Once enabled, HPA automatically scales the number of lambda function Pods based on the observed CPU utilization.

NOTE: The autoscaling/v1 version does not support custom metrics. To use such metrics, you need the autoscaling/v2beta1 version.
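Once enabled, an autoscaling/v1 HPA targets CPU only. A sketch with hypothetical resource names and thresholds:

```yaml
# Hypothetical HPA for a lambda Deployment; CPU is the only metric autoscaling/v1 supports.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-lambda-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-lambda
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80  # scale out above 80% average CPU
```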

Follow these steps to enable HPA:

  1. Enable the metrics server for resource metrics by running the following command:


minikube addons enable metrics-server

  2. Verify that the metrics server is active by checking the list of addons:


minikube addons list

Stop and restart Kyma without reinstalling

Use the minikube.sh script to restart the Minikube cluster without reinstalling Kyma. Follow these steps to stop and restart your cluster:

  1. Stop the Minikube cluster with Kyma installed. Run:


minikube stop

  2. Restart the cluster without reinstalling Kyma. Run:


./scripts/minikube.sh --domain "kyma.local" --vm-driver "hyperkit"

The script discovers that a Minikube cluster is initialized and asks if you want to delete it. Answering no causes the script to start the Minikube cluster and restart all of the previously installed components. Even though this procedure takes some time, it is faster than a clean installation because you do not download all of the required Docker images again.

To verify that the restart is successful, run this command and check if all Pods have the RUNNING status:


kubectl get pods --all-namespaces

Troubleshooting

  1. If the Installer does not respond as expected, check the installation status using the is-installed.sh script with the --verbose flag added. Run:


scripts/is-installed.sh --verbose

  2. If the installation is successful but a component does not behave in an expected way, see if all deployed Pods are running. Run this command:


kubectl get pods --all-namespaces

The command retrieves all Pods from all Namespaces, the status of the Pods, and their instance numbers. Check if the STATUS column shows Running for all Pods. If any of the Pods that you require do not start successfully, perform the installation again.

If the problem persists, don’t hesitate to create a [GitHub]{.underline} issue or reach out to the [“installation” Slack channel]{.underline} to get direct support from the community.

  3. If you put your locally running cluster into hibernation, or use minikube stop and minikube start, the date and time settings of Minikube get out of sync with the system date and time settings. As a result, the access token used to log in cannot be properly validated by Dex and you cannot log in to the console. To fix this, set the date and time used by your machine in Minikube. Run:


minikube ssh -- docker run -i --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i date -u $(date -u +%m%d%H%M%Y)
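The format string at the end is the MMDDhhmmYYYY layout that date expects when setting the clock, so the host's current UTC time is written into the Minikube VM. You can inspect the value locally:

```shell
# Prints the current UTC time in the MMDDhhmmYYYY form passed to the VM.
date -u +%m%d%H%M%Y
```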

Install Kyma on a cluster

This Installation guide shows developers how to quickly deploy Kyma on a cluster. Kyma is installed on a cluster using a proprietary installer based on a [Kubernetes operator]{.underline}. By default, Kyma is installed on a cluster with a wildcard DNS provided by [xip.io]{.underline}. Alternatively, you can provide your own domain for the cluster.

Follow these installation guides to install Kyma on a cluster depending on the supported cloud providers:

  • GKE

  • AKS

This Installation guide shows developers how to quickly deploy Kyma on a [Google Kubernetes Engine]{.underline} (GKE) cluster.

Prerequisites

TIP: Get a free domain for your cluster using services like [freenom.com]{.underline} or similar.

Prepare the GKE cluster

  1. Select a name for your cluster. Set the cluster name and the name of your GCP project as environment variables. Run:

export CLUSTER_NAME={CLUSTER_NAME_YOU_WANT}

export PROJECT={YOUR_GCP_PROJECT}

  2. Create a cluster in the europe-west1 region. Run:

gcloud container --project "$PROJECT" clusters \
    create "$CLUSTER_NAME" --zone "europe-west1-b" \
    --cluster-version "1.12" --machine-type "n1-standard-4" \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard

  3. Install Tiller on your GKE cluster. Run:

kubectl apply -f https://raw.githubusercontent.com/kyma-project/kyma/{RELEASE_TAG}/installation/resources/tiller.yaml

  4. Add your account as the cluster administrator:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)

DNS setup and TLS certificate generation (optional)

NOTE: Execute instructions from this section only if you want to use your own domain. Otherwise, proceed to the Prepare the installation configuration file section.

Delegate the management of your domain to Google Cloud DNS

Follow these steps:

  1. Export the domain name, project name, and DNS zone name as environment variables. Run the commands listed below:

export DOMAIN={YOUR_SUBDOMAIN}

export DNS_NAME={YOUR_DOMAIN}.

export PROJECT={YOUR_GOOGLE_PROJECT_ID}

export DNS_ZONE={YOUR_DNS_ZONE}

Example:

export DOMAIN=my.kyma-demo.ga

export DNS_NAME=kyma-demo.ga.

export PROJECT=kyma-demo-235208

export DNS_ZONE=myzone

  2. Create a DNS-managed zone in your Google project. Run:

gcloud dns --project=$PROJECT managed-zones create $DNS_ZONE --description= --dns-name=$DNS_NAME

Alternatively, create it through the GCP UI: go to Network Services in the Network section, click Cloud DNS, and select Create Zone.

  3. Delegate your domain to Google name servers.

    • Get the list of the name servers from the zone details. This is a sample list:

ns-cloud-b1.googledomains.com.

ns-cloud-b2.googledomains.com.

ns-cloud-b3.googledomains.com.

ns-cloud-b4.googledomains.com.

  • Set up your domain to use these name servers.
  4. Check if everything is set up correctly and your domain is managed by Google name servers. Run:

host -t ns $DNS_NAME

A successful response returns the list of the name servers you fetched from GCP.

Get the TLS certificate

  1. Create a folder for certificates. Run:

mkdir letsencrypt

  2. Create a new service account and assign it to the dns.admin role. Run these commands:

gcloud iam service-accounts create dnsmanager --display-name "dnsmanager" --project "$PROJECT"

gcloud projects add-iam-policy-binding $PROJECT \
    --member serviceAccount:dnsmanager@$PROJECT.iam.gserviceaccount.com --role roles/dns.admin

  3. Generate an access key for this account in the letsencrypt folder. Run:

gcloud iam service-accounts keys create ./letsencrypt/key.json --iam-account dnsmanager@$PROJECT.iam.gserviceaccount.com

  4. Run the Certbot Docker image with the letsencrypt folder mounted. Certbot uses the key to apply the DNS challenge for the certificate request and stores the TLS certificates in that folder. Run:

docker run -it --name certbot --rm \
    -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
    certbot/dns-google \
    certonly \
    -m YOUR_EMAIL_HERE --agree-tos --no-eff-email \
    --dns-google \
    --dns-google-credentials /etc/letsencrypt/key.json \
    --server https://acme-v02.api.letsencrypt.org/directory \
    -d "*.$DOMAIN"

  5. Export the certificate and key as environment variables. Run these commands:

export TLS_CERT=$(cat ./letsencrypt/live/$DOMAIN/fullchain.pem | base64 | sed 's/ /\\ /g')
export TLS_KEY=$(cat ./letsencrypt/live/$DOMAIN/privkey.pem | base64 | sed 's/ /\\ /g')
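The base64 | sed pipeline encodes each PEM file so it can later be substituted into the configuration file; the sed call escapes any space characters in the encoded output. A sketch of the round trip on a throwaway file (the file name and content are made up for the demo):

```shell
# Create a dummy PEM file standing in for fullchain.pem:
printf 'dummy certificate body\n' > /tmp/demo-cert.pem
# Encode it the same way the export commands above do:
ENCODED=$(cat /tmp/demo-cert.pem | base64 | sed 's/ /\\ /g')
# Decoding recovers the original content:
echo "$ENCODED" | base64 --decode
```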

Prepare the installation configuration file

Using the latest GitHub release

NOTE: You can use Kyma version 0.8 or higher.

  1. Go to [this]{.underline} page and choose the latest release.
  2. Export the release version as an environment variable. Run:

export LATEST={KYMA_RELEASE_VERSION}

  3. Download the kyma-config-cluster.yaml and kyma-installer-cluster.yaml files from the latest release. Run:

wget https://github.com/kyma-project/kyma/releases/download/$LATEST/kyma-config-cluster.yaml

wget https://github.com/kyma-project/kyma/releases/download/$LATEST/kyma-installer-cluster.yaml

  4. Prepare the deployment file.

    • Run this command if you use the xip.io default domain:

cat kyma-installer-cluster.yaml <(echo -e "\n---") kyma-config-cluster.yaml | sed -e "s/__.*__//g" > my-kyma.yaml

  • Run this command if you use your own domain:

cat kyma-installer-cluster.yaml <(echo -e "\n---") kyma-config-cluster.yaml | sed -e "s/__DOMAIN__/$DOMAIN/g" | sed -e "s/__TLS_CERT__/$TLS_CERT/g" | sed -e "s/__TLS_KEY__/$TLS_KEY/g" | sed -e "s/__.*__//g" > my-kyma.yaml

NOTE: If you deploy Kyma with GKE version 1.12.6-gke.X and above, follow these steps to prepare the deployment file.

  • Run this command if you use the xip.io default domain:

cat kyma-installer-cluster.yaml <(echo -e "\n---") kyma-config-cluster.yaml | sed -e "s/__PROMTAIL_CONFIG_NAME__/promtail-k8s-1-14.yaml/g" | sed -e "s/__.*__//g" > my-kyma.yaml

  • Run this command if you use your own domain:

cat kyma-installer-cluster.yaml <(echo -e "\n---") kyma-config-cluster.yaml | sed -e "s/__PROMTAIL_CONFIG_NAME__/promtail-k8s-1-14.yaml/g" | sed -e "s/__DOMAIN__/$DOMAIN/g" | sed -e "s/__TLS_CERT__/$TLS_CERT/g" | sed -e "s/__TLS_KEY__/$TLS_KEY/g" | sed -e "s/__.*__//g" > my-kyma.yaml

  5. The output of this operation is the my-kyma.yaml file. Use it to deploy Kyma on your GKE cluster.
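The sed pipeline works by substituting known `__NAME__` placeholders in the concatenated YAML and then blanking out any placeholders that remain. A miniature, self-contained illustration (the template below is made up for the demo):

```shell
# A two-line stand-in for the real configuration template:
printf 'domain: __DOMAIN__\ncert: __TLS_CERT__\n' > /tmp/mini-template.yaml
DOMAIN=my.kyma-demo.ga
# Fill __DOMAIN__, then blank every placeholder that was not substituted:
sed -e "s/__DOMAIN__/$DOMAIN/g" -e "s/__.*__//g" /tmp/mini-template.yaml
```

The first line of the output carries the substituted domain; the unfilled `__TLS_CERT__` marker is simply removed, which is why the xip.io variant of the command works without any TLS variables set.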

Using your own image

  1. Check out the [kyma-project]{.underline} repository and enter the root folder.
  2. Build an image that is based on the current Installer image and includes the current installation and resources charts. Run:

docker build -t kyma-installer:latest -f tools/kyma-installer/kyma.Dockerfile .

  3. Push the image to your Docker Hub:

docker tag kyma-installer:latest {YOUR_DOCKER_LOGIN}/kyma-installer:latest

docker push {YOUR_DOCKER_LOGIN}/kyma-installer:latest

  4. Prepare the deployment file:

    • Run this command if you use the xip.io default domain:

(cat installation/resources/installer.yaml ; echo "---" ; cat installation/resources/installer-config-cluster.yaml.tpl ; echo "---" ; cat installation/resources/installer-cr-cluster.yaml.tpl) | sed -e "s/__.*__//g" > my-kyma.yaml

  • Run this command if you use your own domain:

(cat installation/resources/installer.yaml ; echo "---" ; cat installation/resources/installer-config-cluster.yaml.tpl ; echo "---" ; cat installation/resources/installer-cr-cluster.yaml.tpl) | sed -e "s/__DOMAIN__/$DOMAIN/g" | sed -e "s/__TLS_CERT__/$TLS_CERT/g" | sed -e "s/__TLS_KEY__/$TLS_KEY/g" | sed -e "s/__.*__//g" > my-kyma.yaml

NOTE: If you deploy Kyma with GKE version 1.12.6-gke.X and above, follow these steps to prepare the deployment file.

  • Run this command if you use the xip.io default domain:

(cat installation/resources/installer.yaml ; echo "---" ; cat installation/resources/installer-config-cluster.yaml.tpl ; echo "---" ; cat installation/resources/installer-cr-cluster.yaml.tpl) | sed -e "s/__PROMTAIL_CONFIG_NAME__/promtail-k8s-1-14.yaml/g" | sed -e "s/__.*__//g" > my-kyma.yaml

  • Run this command if you use your own domain:

(cat installation/resources/installer.yaml ; echo "---" ; cat installation/resources/installer-config-cluster.yaml.tpl ; echo "---" ; cat installation/resources/installer-cr-cluster.yaml.tpl) | sed -e "s/__PROMTAIL_CONFIG_NAME__/promtail-k8s-1-14.yaml/g" | sed -e "s/__DOMAIN__/$DOMAIN/g" | sed -e "s/__TLS_CERT__/$TLS_CERT/g" | sed -e "s/__TLS_KEY__/$TLS_KEY/g" | sed -e "s/__.*__//g" > my-kyma.yaml

  5. The output of this operation is the my-kyma.yaml file. Modify it to fetch the proper image with the changes you made ({YOUR_DOCKER_LOGIN}/kyma-installer:latest). Use the modified file to deploy Kyma on your GKE cluster.

Deploy Kyma

  1. Configure kubectl to use your new cluster. Run:

gcloud container clusters get-credentials $CLUSTER_NAME --zone europe-west1-b --project $PROJECT

  2. Deploy Kyma using the my-kyma.yaml custom configuration file you created. Run:

kubectl apply -f my-kyma.yaml

  3. Check if the Pods of Tiller and the Kyma Installer are running:

kubectl get pods --all-namespaces

  4. Start Kyma installation:

kubectl label installation/kyma-installation action=install

  5. To watch the installation progress, run:

while true; do \
  kubectl -n default get installation/kyma-installation -o jsonpath="{'Status: '}{.status.state}{', description: '}{.status.description}"; echo; \
  sleep 5; \
done

After the installation process is finished, the Status: Installed, description: Kyma installed message appears. In case of an error, you can fetch the logs from the Installer by running:

kubectl -n kyma-installer logs -l 'name=kyma-installer'

Add the xip.io self-signed certificate to your OS trusted certificates

NOTE: Skip this section if you use your own domain.

After the installation, add the custom Kyma [xip.io]{.underline} self-signed certificate to the trusted certificates of your OS. For macOS, run:

tmpfile=$(mktemp /tmp/temp-cert.XXXXXX) \
&& kubectl get configmap net-global-overrides -n kyma-installer -o jsonpath='{.data.global\.ingress\.tlsCrt}' | base64 --decode > $tmpfile \
&& sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $tmpfile \
&& rm $tmpfile

Configure DNS for the cluster load balancer (optional)

NOTE: Execute instructions from this section only if you want to use your own domain.

  1. Export the domain of your cluster and DNS zone as environment variables. Run:

export DOMAIN=$(kubectl get cm net-global-overrides -n kyma-installer -o jsonpath='{.data.global\.ingress\.domainName}')

export DNS_ZONE={YOUR_DNS_ZONE}

  2. To add DNS entries, run these commands:

export EXTERNAL_PUBLIC_IP=$(kubectl get service -n istio-system istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

export APISERVER_PUBLIC_IP=$(kubectl get service -n kyma-system apiserver-proxy-ssl -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

export REMOTE_ENV_IP=$(kubectl get service -n kyma-system application-connector-ingress-nginx-ingress-controller -o jsonpath="{.status.loadBalancer.ingress[0].ip}")

gcloud dns --project=$PROJECT record-sets transaction start --zone=$DNS_ZONE

gcloud dns --project=$PROJECT record-sets transaction add $EXTERNAL_PUBLIC_IP --name=\*.$DOMAIN. --ttl=60 --type=A --zone=$DNS_ZONE

gcloud dns --project=$PROJECT record-sets transaction add $REMOTE_ENV_IP --name=gateway.$DOMAIN. --ttl=60 --type=A --zone=$DNS_ZONE

gcloud dns --project=$PROJECT record-sets transaction add $APISERVER_PUBLIC_IP --name=apiserver.$DOMAIN. --ttl=60 --type=A --zone=$DNS_ZONE

gcloud dns --project=$PROJECT record-sets transaction execute --zone=$DNS_ZONE

Access Tiller (optional)

If you need to use Helm, you must establish a secure connection with Tiller by saving the cluster’s client certificate, key, and Certificate Authority (CA) to [Helm Home]{.underline}.

Additionally, you must add the --tls flag to every Helm command you run.

NOTE: Read [this]{.underline} document to learn more about TLS in Tiller.

Run these commands to save the client certificate, key, and CA to [Helm Home]{.underline}:

kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.ca\.crt']}" | base64 --decode > "$(helm home)/ca.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.crt']}" | base64 --decode > "$(helm home)/cert.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.key']}" | base64 --decode > "$(helm home)/key.pem";

Access the cluster

  1. To get the address of the cluster’s Console, check the name of the Console’s virtual service. The name of this virtual service corresponds to the Console URL. To get the virtual service name, run:

kubectl get virtualservice core-console -n kyma-system

  2. Access your cluster under this address:

https://{VIRTUAL_SERVICE_NAME}

NOTE: To log in to your cluster, use the default admin static user. To learn how to get the login details for this user, see [this]{.underline} document.

Custom component installation

Since Kyma is modular, you can remove some components so that they are not installed together with Kyma. You can also add some of them after the installation process. Read this document to learn how to do that.

Remove a component

NOTE: Not all components can be simply removed from the Kyma installation. In case of Istio and the Service Catalog, you must provide your own deployment of these components in the Kyma-supported version before you remove them from the installation process. See [this]{.underline} file to check the currently supported version of Istio. See [this]{.underline} file to check the currently supported version of the Service Catalog.

To disable a component from the list of components that you install with Kyma, remove this component’s entries from the appropriate file. The file differs depending on whether you install Kyma from the release or from sources, and if you install Kyma locally or on a cluster. The version of your component’s deployment must match the version that Kyma currently supports.

Installation from the release

  1. Download the [newest version]{.underline} of Kyma.

  2. Customize installation by removing a component from the list of components in the Installation resource. For example, to disable the Application Connector installation, remove this entry:

name: "application-connector"
namespace: "kyma-system"

  • from the kyma-config-local.yaml file for the local installation

  • from the kyma-config-cluster.yaml file for the cluster installation

  3. Follow the installation steps described in the [Install Kyma locally from the release]{.underline} document, or [Install Kyma on a GKE cluster]{.underline} accordingly.

Installation from sources

  1. Customize the installation by removing a component from the list of components in the Installation resource.
  2. Follow the installation steps described in the [Install Kyma locally from sources]{.underline} document, or [Install Kyma on a GKE cluster]{.underline} accordingly.

Verify the installation

  1. Check if all Pods are running in the kyma-system Namespace:

kubectl get pods -n kyma-system

  2. Sign in to the Kyma Console using the admin@kyma.cx email address as described in the [Install Kyma locally from the release]{.underline} document.

Add a component

NOTE: This section assumes that you already have your Kyma Lite local version installed successfully.

To install a component that is not installed with Kyma by default, modify the [Installation]{.underline} custom resource and add the component that you want to install to the list of components:

  1. Edit the resource:

kubectl edit installation kyma-installation

  2. Add the new component to the list of components, for example:

- name: "jaeger"
  namespace: "kyma-system"

  3. Trigger the installation:

kubectl label installation/kyma-installation action=install

You can verify the installation status by calling ./installation/scripts/is-installed.sh in the terminal.

Update Kyma

This guide describes how to update Kyma deployed locally or on a cluster.

Prerequisites

Overview

Kyma consists of multiple components, installed as [Helm]{.underline} releases.

Update of an existing deployment can include:

  • changes in charts

  • changes in overrides

  • adding new releases

The update procedure consists of three main steps:

  • Prepare the update

  • Update the Kyma Installer

  • Trigger the update process

NOTE: In case of dependency conflicts or major changes between component versions, some updates may not be possible.

NOTE: Currently Kyma doesn’t support removing components as a part of the update process.

Prepare the update

  • If you update an existing component, make all required changes to the Helm charts of the component located in the [resources]{.underline} directory.

  • If you add a new component to your Kyma deployment, add a top-level Helm chart for that component. Additionally, run this command to edit the Installation custom resource and add the new component to the installed components list:

kubectl edit installation kyma-installation

NOTE: Read [this]{.underline} document to learn more about the Installation custom resource.

  • If you introduced changes in overrides, update the existing ConfigMaps and Secrets. Add new ConfigMaps and Secrets if required. See [this]{.underline} document for more information on overrides.

Update the Kyma Installer on a local deployment

  • Build a new image for the Kyma Installer:

./installation/scripts/build-kyma-installer.sh

NOTE: If you started Kyma with the run.sh script with a --vm-driver {value} parameter, provide the same parameter to the build-kyma-installer.sh script.

  • Restart the Kyma Installer Pod:

kubectl delete pod -n kyma-installer {INSTALLER_POD_NAME}

Update the Kyma Installer on a cluster deployment

  • Build a new image for the Kyma Installer:

docker build -t {IMAGE_NAME}:{IMAGE_TAG} -f tools/kyma-installer/kyma.Dockerfile .

  • Push the image to your Docker registry.

  • Redeploy the Kyma Installer Pod using the new image. Run this command to edit the Deployment configuration:

kubectl edit deployment kyma-installer -n kyma-installer

Change the image and imagePullPolicy attributes in this section:

spec:
  containers:
  - image: <your_image_name>:<your_tag>
    imagePullPolicy: Always

NOTE: If the desired image name and imagePullPolicy are already set in the Deployment configuration, restart the Pod by running kubectl delete pod -n kyma-installer {INSTALLER_POD_NAME}.

Trigger the update process

Execute the following command to trigger the update process:

kubectl label installation/kyma-installation action=install

Reinstall Kyma

The custom scripts allow you to remove Kyma from a Minikube cluster and reinstall Kyma without removing the cluster.

NOTE: These scripts do not delete the cluster from your Minikube. This allows you to quickly reinstall Kyma.

  1. Use the clean-up.sh script to uninstall Kyma from the cluster. Run:

scripts/clean-up.sh

  2. Run this script to reinstall Kyma on an existing cluster:

cmd/run.sh --skip-minikube-start

Local installation scripts deep-dive

This document extends the [Install Kyma locally from sources]{.underline} guide with a detailed breakdown of the alternative local installation method which is the run.sh script.

The following snippet is the main element of the run.sh script:

if [[ ! $SKIP_MINIKUBE_START ]]; then
  bash $SCRIPTS_DIR/minikube.sh --domain "$DOMAIN" --vm-driver "$VM_DRIVER" $MINIKUBE_EXTRA_ARGS
fi

bash $SCRIPTS_DIR/build-kyma-installer.sh --vm-driver "$VM_DRIVER"

if [ -z "$CR_PATH" ]; then
  TMPDIR=`mktemp -d "$CURRENT_DIR/../../temp-XXXXXXXXXX"`
  CR_PATH="$TMPDIR/installer-cr-local.yaml"
  bash $SCRIPTS_DIR/create-cr.sh --output "$CR_PATH" --domain "$DOMAIN"
fi

bash $SCRIPTS_DIR/installer.sh --local --cr "$CR_PATH" --password "$ADMIN_PASSWORD"

rm -rf $TMPDIR
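The temporary-directory handling in the snippet can be tried in isolation. This sketch uses /tmp instead of the repository path; the file is only touched, not generated by create-cr.sh:

```shell
# Create a uniquely named scratch directory, as run.sh does:
TMPDIR=$(mktemp -d "/tmp/temp-XXXXXXXXXX")
CR_PATH="$TMPDIR/installer-cr-local.yaml"
# Stand-in for the create-cr.sh output:
touch "$CR_PATH"
ls "$TMPDIR"
# Clean up, as the last line of the snippet does:
rm -rf "$TMPDIR"
```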

Subsequent sections provide details of all involved subscripts, in the order in which the run.sh script triggers them.

The minikube.sh script

NOTE: To work with Kyma, use only the provided scripts and commands. Kyma does not work on a basic Minikube cluster that you can start using the minikube start command.

The purpose of the installation/scripts/minikube.sh script is to configure and start Minikube. The script also checks if your development environment is configured to handle the Kyma installation. This includes checking Minikube and kubectl versions.

If Minikube is already initialized, the system prompts you to agree to remove the previous Minikube cluster.

  • If you plan to perform a clean installation, answer yes.

  • If you installed Kyma to your Minikube cluster and then stopped the cluster using the minikube stop command, answer no. This allows you to start the cluster again without reinstalling Kyma.

Minikube is configured to disable the default Nginx Ingress Controller.

NOTE: For the complete list of parameters passed to the minikube start command, refer to the installation/scripts/minikube.sh script.

Once Minikube is up and running, the script adds local installation entries to /etc/hosts.

The build-kyma-installer.sh script

The Kyma Installer is an application based on the [Kubernetes operator]{.underline}. Its purpose is to install Helm charts defined in the Installation custom resource. The Kyma Installer is a Docker image that bundles the Installer binary with Kyma charts.

The installation/scripts/build-kyma-installer.sh script extracts the Kyma-Installer image name from the installer.yaml deployment file and uses it to build a Docker image inside Minikube. This image contains local Kyma sources from the resources folder.

NOTE: For the Kyma Installer Docker image details, refer to the tools/kyma-installer/kyma.Dockerfile file.

The create-cr.sh script

The installation/scripts/create-cr.sh script prepares the Installation custom resource from the installation/resources/installer-cr.yaml.tpl template. The local installation scenario uses the default Installation custom resource. The Kyma Installer already contains local Kyma resources bundled, so the url field is ignored by the Installer component.

NOTE: Read [this]{.underline} document to learn more about the Installation custom resource.

The installer.sh script

The installation/scripts/installer.sh script creates the default RBAC role, installs [Tiller]{.underline}, and deploys the Kyma Installer component.

NOTE: For the Kyma Installer deployment details, refer to the installation/resources/installer.yaml file.

The script applies the Installation custom resource and marks it with the action=install label, which triggers the Kyma installation.

In the process of installing Tiller, a set of TLS certificates is created and saved to [Helm Home]{.underline} to secure the connection between the client and the server.

NOTE: The Kyma installation runs in the background. Execute the ./installation/scripts/is-installed.sh script to follow the installation process.

The is-installed.sh script

The installation/scripts/is-installed.sh script shows the status of Kyma installation in real time. The script checks the status of the Installation custom resource. When it detects that the status changed to Installed, the script exits. If you define a timeout period and the status doesn’t change to Installed within that period, the script fetches the Installer logs. If you don’t set a timeout period, the script waits for the change of the status until you terminate it.
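The polling behavior of is-installed.sh boils down to a loop like the following sketch, where get_status is a stub standing in for the kubectl query of the Installation custom resource:

```shell
# Stub standing in for:
#   kubectl get installation/kyma-installation -o jsonpath='{.status.state}'
get_status() { echo "Installed"; }

# Poll until the state changes to Installed:
while true; do
  STATUS=$(get_status)
  echo "Status: $STATUS"
  [ "$STATUS" = "Installed" ] && break
  sleep 5
done
```

The real script additionally honors a timeout and dumps the Installer logs when the timeout elapses before the state reaches Installed.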

Installation

The installations.installer.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to control the Kyma Installer, a proprietary solution based on the [Kubernetes operator]{.underline} principles. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd installations.installer.kyma-project.io -o yaml

Sample custom resource

This is a sample CR that controls the Kyma Installer. This example has the action label set to install, which means that it triggers the installation of Kyma. The name and namespace fields in the components array define which components you install and Namespaces in which you install them.

NOTE: See the installer-cr.yaml.tpl file in the /installation/resources directory for the complete list of Kyma components.

apiVersion: "installer.kyma-project.io/v1alpha1"
kind: Installation
metadata:
  name: kyma-installation
  labels:
    action: install
  finalizers:
    - finalizer.installer.kyma-project.io
spec:
  version: "1.0.0"
  url: "https://sample.url.com/kyma_release.tar.gz"
  components:
    - name: "cluster-essentials"
      namespace: "kyma-system"
    - name: "istio"
      namespace: "istio-system"
    - name: "prometheus-operator"
      namespace: "kyma-system"
    - name: "provision-bundles"
    - name: "dex"
      namespace: "kyma-system"
    - name: "core"
      namespace: "kyma-system"

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Field Mandatory Description


metadata.name YES Specifies the name of the CR.
metadata.labels.action YES Defines the behavior of the Kyma Installer. Available options are install and uninstall.
metadata.finalizers NO Protects the CR from deletion. Read [this]{.underline} Kubernetes document to learn more about finalizers.
spec.version NO When manually installing Kyma on a cluster, specify any valid [SemVer]{.underline} notation string.
spec.url YES Specifies the location of the Kyma sources tar.gz package. For example, for the master branch of Kyma, the address is https://github.com/kyma-project/kyma/archive/master.tar.gz
spec.components YES Lists the Helm chart components to install or update.
spec.components.name YES Specifies the name of the component, which is the same as the name of the component subdirectory in the resources directory.
spec.components.namespace YES Defines the Namespace in which you want the Installer to install or update the component.
spec.components.release NO Provides the name of the Helm release. The default parameter is the component name.

Additional information

The Kyma Installer adds the status section which describes the status of Kyma installation. This table lists the fields of the status section.

Field Mandatory Description


status.state YES Describes the installation state. Takes one of four values.
status.description YES Describes the installation step the installer performs at the moment.
status.errorLog YES Lists all errors that happen during the installation.
status.errorLog.component YES Specifies the name of the component that causes the error.
status.errorLog.log YES Provides a description of the error.
status.errorLog.occurrences YES Specifies the number of subsequent occurrences of the error.

The status.state field uses one of the following four values to describe the installation state:

State Description


Installed Installation successful.
Uninstalled Uninstallation successful.
InProgress The Installer is still installing or uninstalling Kyma. No errors logged.
Error The Installer encountered a problem but it continues to try to process the resource.

Related resources and components

These components use this CR:

Component Description


Installer The CR triggers the Installer to install, update, or delete the specified components.

Sample service deployment on local

This tutorial is intended for developers who want to quickly learn how to deploy a sample service and test it with Kyma installed locally on a Mac.

This tutorial uses a standalone sample service written in the [Go]{.underline} language.

Prerequisites

To use the Kyma cluster and install the example, download these tools:

Steps

Deploy and expose a sample standalone service

Follow these steps:

  1. Deploy the sample service to any of your Namespaces. Use the stage Namespace for this guide:

kubectl create -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/http-db-service/deployment/deployment.yaml

  2. Create an unsecured API for your example service:

kubectl apply -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-without-auth.yaml

  3. Add the IP address of Minikube to the hosts file on your local machine for your APIs:

echo "$(minikube ip) http-db-service.kyma.local" | sudo tee -a /etc/hosts

  4. Access the service using the following call:

curl -ik https://http-db-service.kyma.local/orders

The system returns a response similar to the following:

HTTP/2 200

content-type: application/json;charset=UTF-8

vary: Origin

date: Mon, 01 Jun 2018 00:00:00 GMT

content-length: 2

x-envoy-upstream-service-time: 131

server: envoy

[]

Update your service’s API to secure it

Run the following command:

kubectl apply -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-with-auth.yaml

After you apply this update, you must include a valid bearer ID token in the Authorization header to access the service.

NOTE: The update might take some time.

Sample service deployment on a cluster

This tutorial is intended for developers who want to quickly learn how to deploy a sample service and test it with the Kyma cluster.

This tutorial uses a standalone sample service written in the [Go]{.underline} language.

Prerequisites

To use the Kyma cluster and install the example, download these tools:

Steps

Get the kubeconfig file and configure the CLI

Follow these steps to get the kubeconfig file and configure the CLI to connect to the cluster:

  1. Access the Console UI of your Kyma cluster.

  2. Click Administration.

  3. Click the Download config button to download the kubeconfig file to a selected location on your machine.

  4. Open a terminal window.

  5. Export the KUBECONFIG environment variable to point to the downloaded kubeconfig. Run this command:

export KUBECONFIG={KUBECONFIG_FILE_PATH}

NOTE: Drag and drop the kubeconfig file in the terminal to easily add the path of the file to the export KUBECONFIG command you run.

  6. Run kubectl cluster-info to check if the CLI is connected to the correct cluster.

Set the cluster domain as an environment variable

The commands in this guide use URLs in which you must provide the domain of the cluster that you use. Export the domain of your cluster as an environment variable. Run:

export yourClusterDomain='{YOUR_CLUSTER_DOMAIN}'

Deploy and expose a sample standalone service

Follow these steps:

  1. Deploy the sample service to any of your Namespaces. Use the stage Namespace for this guide:

kubectl create -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/http-db-service/deployment/deployment.yaml

  2. Create an unsecured API for your service:

curl -k https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-without-auth.yaml | sed "s/.kyma.local/.$yourClusterDomain/" | kubectl apply -n stage -f -

  3. Access the service using the following call:

curl -ik https://http-db-service.$yourClusterDomain/orders

The system returns a response similar to the following:

HTTP/2 200

content-type: application/json;charset=UTF-8

vary: Origin

date: Mon, 01 Jun 2018 00:00:00 GMT

content-length: 2

x-envoy-upstream-service-time: 131

server: envoy

[]

Update your service’s API to secure it

Run the following command:

curl -k https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-with-auth.yaml | sed "s/.kyma.local/.$yourClusterDomain/" | kubectl apply -n stage -f -

After you apply this update, you must include a valid bearer ID token in the Authorization header to access the service.

NOTE: The update might take some time.

Develop a service locally without using Docker

You can develop services in the local Kyma installation without extensive Docker knowledge or a need to build and publish a Docker image. The minikube mount feature allows you to mount a directory from your local disk into the local Kubernetes cluster.

This tutorial shows how to use this feature, using the service example implemented in Golang.

Prerequisites

Install [Golang]{.underline}.

Steps

Install the example on your local machine

  1. Install the example:


go get -insecure github.com/kyma-project/examples/http-db-service

  2. Navigate to the http-db-service folder inside the installed example:


cd ~/go/src/github.com/kyma-project/examples/http-db-service

  3. Build the executable to run the application:


CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

Mount the example directory into Minikube

For this step, you need a running local Kyma instance. Read [this]{.underline} document to learn how to install Kyma locally.

  1. Open the terminal window. Do not close it until the development finishes.

  2. Mount your local drive into Minikube:


# Use the following pattern:

minikube mount {LOCAL_DIR_PATH}:{CLUSTER_DIR_PATH}

# To follow this guide, call:

minikube mount ~/go/src/github.com/kyma-project/examples/http-db-service:/go/src/github.com/kyma-project/examples/http-db-service

See the example and expected result:


# Terminal 1

minikube mount ~/go/src/github.com/kyma-project/examples/http-db-service:/go/src/github.com/kyma-project/examples/http-db-service

Mounting /Users/{USERNAME}/go/src/github.com/kyma-project/examples/http-db-service into /go/src/github.com/kyma-project/examples/http-db-service on the minikube VM

This daemon process must stay alive for the mount to still be accessible…

ufs starting

Run your local service inside Minikube

  1. Create a Pod that uses the base Golang image to run your executable located on your local machine:


# Terminal 2
kubectl run mydevpod --image=golang:1.9.2-alpine --restart=Never -n stage --overrides='
{
  "spec":{
    "containers":[
      {
        "name":"mydevpod",
        "image":"golang:1.9.2-alpine",
        "command": ["./main"],
        "workingDir":"/go/src/github.com/kyma-project/examples/http-db-service",
        "volumeMounts":[
          {
            "mountPath":"/go/src/github.com/kyma-project/examples/http-db-service",
            "name":"local-disk-mount"
          }
        ]
      }
    ],
    "volumes":[
      {
        "name":"local-disk-mount",
        "hostPath":{
          "path":"/go/src/github.com/kyma-project/examples/http-db-service"
        }
      }
    ]
  }
}'

  2. Expose the Pod as a service from Minikube to verify it:


kubectl expose pod mydevpod --name=mypodservice --port=8017 --type=NodePort -n stage

  3. Check the Minikube IP address and port, and use them to access your service:


# Get the IP address.

minikube ip

# See the example result: 192.168.64.44

# Check the Port.

kubectl get services -n stage

# See the example result: mypodservice NodePort 10.104.164.115 <none> 8017:32226/TCP 5m

  4. Call the service from your terminal:


curl {minikube ip}:{port}/orders -v

# See the example: curl http://192.168.64.44:32226/orders -v

# The command returns an empty array.

Modify the code locally and see the results immediately in Minikube

  1. Edit the main.go file by adding a new test endpoint to the startService function:


router.HandleFunc("/test", func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("test"))
})

  2. Build a new executable to run the application inside Minikube:


CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

  3. Replace the existing Pod with the new version:


kubectl get pod mydevpod -n stage -o yaml | kubectl replace --force -f -

  4. Call the new test endpoint of the service from your terminal. The command returns the test string:


curl http://192.168.64.44:32226/test -v

Publish a service Docker image and deploy it to Kyma

The previous tutorial shows how to develop a service locally. You can immediately see all the changes in a local Kyma installation based on Minikube, without building a Docker image and publishing it to a Docker registry, such as Docker Hub.

Using the same example service, this tutorial explains how to build a Docker image for your service, publish it to the Docker registry, and deploy it to the local Kyma installation. The instructions are based on Minikube, but you can also use the image that you create, and the Kubernetes resource definitions that you use, on a Kyma cluster.

NOTE: The deployment works both on the local Kyma installation and on a Kyma cluster.

Steps

Build a Docker image

The http-db-service example used in this guide provides you with the Dockerfile necessary for building Docker images. Examine the Dockerfile to learn how it looks and how it uses the Docker multi-stage build feature, but do not use it one-to-one in production. There might be custom LABEL attributes with values to override.

  1. In your terminal, go to the examples/http-db-service directory. If you did not follow the [Sample service deployment on local]{.underline} guide and you do not have this directory locally, get the http-db-service example from the [examples]{.underline} repository.

  2. Run the build with ./build.sh.

NOTE: Ensure that the new image builds and is available in your local Docker registry by calling docker images. Find an image called example-http-db-service and tagged as latest.

Register the image in the Docker Hub

This guide is based on Docker Hub. However, many other Docker registries are available. You can use a private Docker registry, but it must be available on the Internet. For more details about using a private Docker registry, see [this]{.underline} document.

  1. Open the [Docker Hub]{.underline} webpage.

  2. Provide all of the required details and sign up.

Sign in to the Docker Hub registry in the terminal

  1. Call docker login.

  2. Provide the username and password, and select the ENTER key.

Push the image to the Docker Hub

  1. Tag the local image with a proper name required in the registry: docker tag example-http-db-service {USERNAME}/example-http-db-service:0.0.1.

  2. Push the image to the registry: docker push {USERNAME}/example-http-db-service:0.0.1.

NOTE: To verify if the image is successfully published, check if it is available online at the following address: https://hub.docker.com/r/{USERNAME}/example-http-db-service/

Deploy to Kyma

The http-db-service example contains sample Kubernetes resource definitions needed for the basic Kyma deployment. Find them in the deployment folder. Perform the following modifications to use your newly-published image in the local Kyma installation:

  1. Go to the deployment directory.

  2. Edit the deployment.yaml file. Change the image attribute to {USERNAME}/example-http-db-service:0.0.1.

  3. Create the new resources in local Kyma using these commands: kubectl create -f deployment.yaml -n stage && kubectl create -f ingress.yaml -n stage.

  4. Edit your /etc/hosts to add the new http-db-service.kyma.local host to the list of hosts associated with your minikube ip. Follow these steps:

    • Open a terminal window and run: sudo vim /etc/hosts

    • Select the i key to insert a new line at the top of the file.

    • Add this line: {YOUR.MINIKUBE.IP} http-db-service.kyma.local

    • Type :wq and select the Enter key to save the changes.

  5. Run this command to check if you can access the service: curl https://http-db-service.kyma.local/orders. The response should return an empty array.

Helm overrides for Kyma installation

Kyma packages its components into [Helm]{.underline} charts that the [Installer]{.underline} uses. This document describes how to configure the Installer with override values for Helm [charts]{.underline}.

Overview

The Installer is a Kubernetes Operator that uses Helm to install Kyma components. Helm provides an overrides feature to customize the installation of charts, for example to configure environment-specific values. When you use the Installer for Kyma installation, you can't interact with Helm directly because the installation is not an interactive process.

To customize the Kyma installation, the Installer exposes a generic mechanism to configure Helm overrides called user-defined overrides.

User-defined overrides

The Installer finds user-defined overrides by reading the ConfigMaps and Secrets deployed in the kyma-installer Namespace and marked with the installer:overrides label.

The Installer constructs a single override by inspecting the ConfigMap or Secret entry key name. The key name should be a dot-separated sequence of strings corresponding to the structure of keys in the chart's values.yaml file or the entry in the chart's template. See the examples below.

The Installer merges all overrides recursively into a single YAML stream and passes it to Helm during the Kyma installation or upgrade operation.

Common vs component overrides

The Installer looks for available overrides each time a component installation or update operation is due. Overrides for the component are composed from two sets: common overrides and component-specific overrides.

Kyma uses common overrides for the installation of all components. ConfigMaps and Secrets marked with the installer:overrides label contain the definition. They require no additional label.

Kyma uses component-specific overrides only for the installation of specific components. ConfigMaps and Secrets marked with both the installer:overrides and component: <name> labels, where <name> is the component name, contain the definition. Component-specific overrides take precedence over common ones in case of conflicting entries.

Overrides Examples

Top-level charts overrides

Overrides for top-level charts are straightforward. Use the template value from the chart (without the leading .Values. prefix) as the entry key in the ConfigMap or Secret.

Example:

The Installer uses a core top-level chart that contains a template with the following value reference:


memory: {{ .Values.test.acceptance.ui.requests.memory }}

The chart’s default value test.acceptance.ui.requests.memory in the values.yaml file resolves the template. The following fragment of values.yaml shows this definition:


test:
  acceptance:
    ui:
      requests:
        memory: "1Gi"

To override this value, for example to “2Gi”, proceed as follows:

  • Create a ConfigMap in the kyma-installer Namespace, labelled with: installer:overrides (or reuse an existing one).

  • Add an entry test.acceptance.ui.requests.memory: 2Gi to the map.

Once the installation starts, the Installer generates overrides based on the map entries. The system uses the value of “2Gi” instead of the default “1Gi” from the chart values.yaml file.
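As a sketch, the ConfigMap described in these steps could look as follows (the memory-override name is a hypothetical choice; only the label and the data key matter):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: memory-override        # hypothetical name
  namespace: kyma-installer
  labels:
    installer: overrides
data:
  test.acceptance.ui.requests.memory: "2Gi"
```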

For overrides that the system should keep in Secrets, just define a Secret object instead of a ConfigMap with the same key and a base64-encoded value. Be sure to label the Secret with installer:overrides.
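Secret entries must hold base64-encoded values. For example, the "2Gi" value from the example above can be encoded with standard shell tools (printf avoids a trailing newline leaking into the encoded value):

```shell
# Encode the override value for the Secret entry.
printf '2Gi' | base64
# prints: Mkdp

# Decode it back to verify.
printf 'Mkdp' | base64 --decode
# prints: 2Gi
```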

Sub-chart overrides

Overrides for sub-charts follow the same convention as top-level charts. However, overrides require additional information about sub-chart location.

When a sub-chart contains the values.yaml file, the information about the chart location is not necessary because the chart and its values.yaml file are on the same level in the directory hierarchy.

The situation is different when the Installer installs a chart with sub-charts. All template values for a sub-chart must be prefixed with a sub-chart “path” that is relative to the top-level “parent” chart.

This is not an Installer-specific requirement. The same considerations apply when you provide overrides manually using the helm command-line tool.

Here is an example. There's a core top-level chart that the Installer installs. There's an application-connector sub-chart in core with another nested sub-chart: connector-service. One of its templates contains the following fragment (shortened):


spec:
  containers:
  - name: {{ .Chart.Name }}
    args:
    - "/connectorservice"
    - '--appName={{ .Chart.Name }}'
    - "--domainName={{ .Values.global.domainName }}"
    - "--tokenExpirationMinutes={{ .Values.deployment.args.tokenExpirationMinutes }}"

The following fragment of the values.yaml file in connector-service chart defines the default value for tokenExpirationMinutes:


deployment:
  args:
    tokenExpirationMinutes: 60

To override this value, for example to change "60" to "90", do the following:

  • Create a ConfigMap in the kyma-installer Namespace labeled with installer:overrides or reuse an existing one.

  • Add an entry application-connector.connector-service.deployment.args.tokenExpirationMinutes: 90 to the map.

Notice that the user-provided override key now contains two parts:

  • The chart “path” inside the top-level core chart: application-connector.connector-service

  • The original template value reference from the chart without the .Values. prefix: deployment.args.tokenExpirationMinutes.

Once the installation starts, the Installer generates overrides based on the map entries. The system uses the value of “90” instead of the default value of “60” from the values.yaml chart file.
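Put together, a sketch of the ConfigMap for this sub-chart override could look as follows (the token-expiration-override name is a hypothetical choice):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: token-expiration-override   # hypothetical name
  namespace: kyma-installer
  labels:
    installer: overrides
data:
  application-connector.connector-service.deployment.args.tokenExpirationMinutes: "90"
```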

Global overrides

Several important parameters are usually shared across the charts. The Helm convention for providing these requires the use of the global override key. For example, to define the global.domain override, use global.domain as the name of the key in the ConfigMap or Secret for the Installer.

Once the installation starts, the Installer merges all of the map entries and collects all of the global entries under the global top-level key to use for installation.

Values and types

The Installer generally recognizes all override values as strings. It internally renders overrides to Helm as a YAML stream with string values only.

There is one exception to this rule with respect to handling booleans: The system converts “true” or “false” strings that it encounters to a corresponding boolean value (true/false).
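This string-to-boolean conversion can be sketched as follows (a minimal Python sketch of the described behavior, not the Installer's actual code):

```python
def coerce_override(value):
    """Render an override value the way the Installer does: keep
    everything as a string, except the literal "true"/"false"
    strings, which become booleans."""
    if value == "true":
        return True
    if value == "false":
        return False
    return value

# Numeric-looking values stay strings; only booleans are converted.
print(coerce_override("60"))     # prints: 60 (still a string)
print(coerce_override("true"))   # prints: True (a boolean)
```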

Merging and conflicting entries

When the Installer encounters two overrides with the same key prefix, it tries to merge them. If both of them represent a map (they have nested sub-keys), their nested keys are recursively merged. If at least one of the keys points to a final value, the Installer performs the merge in a non-deterministic order, so either one of the overrides is rendered in the final YAML data.

It is important to avoid overrides having the same keys for final values.

Example of non-conflicting merge:

Two overrides with a common key prefix (“a.b”):


"a.b.c": "first"
"a.b.d": "second"

The Installer yields correct output:


a:
  b:
    c: first
    d: second

Example of conflicting merge:

Two overrides with the same key (“a.b”):


"a.b": "first"
"a.b": "second"

The Installer yields either:


a:
  b: "first"

Or (due to non-deterministic merge order):


a:
  b: "second"
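The merge behavior shown in the examples above can be sketched in Python (an illustration of the described semantics, assuming well-behaved keys; it is not the Installer's actual implementation):

```python
def merge_overrides(entries):
    """Merge dot-separated override keys into a nested mapping, the way
    the Installer renders overrides into a single YAML stream. If two
    entries share a full key, the one processed last wins, which mirrors
    the non-deterministic conflict resolution described above."""
    result = {}
    for key, value in entries:
        node = result
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return result

# Non-conflicting merge: both entries end up under the common "a.b" prefix.
print(merge_overrides([("a.b.c", "first"), ("a.b.d", "second")]))
# prints: {'a': {'b': {'c': 'first', 'd': 'second'}}}
```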

Kyma features and concepts in practice

The table contains a list of examples that demonstrate Kyma functionalities. You can run all of them locally or on a cluster. Examples are organized by a feature or concept they showcase. Each of them contains ready-to-use code snippets and the instructions in README.md documents.

Follow the links to examples’ code and content sources, and try them on your own.

Example Description Technology


[HTTP DB Service]{.underline} Test the service that exposes an HTTP API to access a database on the cluster. Go, MSSQL
[Event Service Subscription]{.underline} Test the example that demonstrates the publish and consume features of the Event Bus. Go
[Event Lambda Subscription]{.underline} Create functions, trigger them on Events, and bind them to services. Kubeless
[Gateway]{.underline} Expose APIs for functions or services. Kubeless
[Service Binding]{.underline} Bind a Redis service to a lambda function. Kubeless, Redis, NodeJS
[Call SAP Commerce]{.underline} Call SAP Commerce in the context of the end user. Kubeless, NodeJS
[Alert Rules]{.underline} Configure alert rules in Kyma. Prometheus
[Custom Metrics in Kyma]{.underline} Expose custom metrics in Kyma. Go, Prometheus
[Event Email Service]{.underline} Send an automated email upon receiving an Event. NodeJS
[Tracing]{.underline} Configure tracing for a service in Kyma. Go

Security

Overview

The security model in Kyma uses the Service Mesh component to enforce authorization through [Kubernetes Role-Based Access Control]{.underline} (RBAC) in the cluster. The identity federation is managed through [Dex]{.underline}, which is an open-source, OpenID Connect identity provider.

Dex implements a system of connectors that allow you to delegate authentication to external OpenID Connect and SAML2-compliant Identity Providers and use their user stores. Read [this]{.underline} document to learn how to enable authentication with an external Identity Provider by using a Dex connector.

Out of the box, Kyma comes with its own static user store used by Dex to authenticate users. This solution is designed for local Kyma deployments as it allows you to easily create predefined user credentials by creating Secret objects with a custom dex-user-config label. Read [this]{.underline} document to learn how to manage users in the static store used by Dex.

Kyma uses a group-based approach to managing authorizations. To give users that belong to a group access to resources in Kyma, you must create:

  • Role and RoleBinding - for resources in a given Namespace.

  • ClusterRole and ClusterRoleBinding - for resources available in the entire cluster.

The RoleBinding or ClusterRoleBinding must have a group specified as their subject. See [this]{.underline} document to learn how to manage Roles and RoleBindings.

NOTE: You cannot define groups for the static user store. Instead, bind the user directly to a role or a cluster role by setting the user as the subject of a RoleBinding or ClusterRoleBinding.
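For illustration, a direct binding of a static-store user to the kyma-admin role could be sketched as follows ({USER_EMAIL} is a placeholder for the user's email, and the binding name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user-binding        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kyma-admin
subjects:
- kind: User
  name: "{USER_EMAIL}"
  apiGroup: rbac.authorization.k8s.io
```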

By default, there are five roles used to manage permissions in every Kyma cluster. These roles are:

  • kyma-essentials

  • kyma-view

  • kyma-edit

  • kyma-developer

  • kyma-admin

For more details about roles, read [this]{.underline} document.

NOTE: The Global permissions view in the Settings section of the Kyma Console UI allows you to manage bindings between user groups and roles.

Architecture

The following diagram illustrates the authorization and authentication flow in Kyma. The representation assumes the Kyma Console UI as the user’s point of entry.

  1. The user opens the Kyma Console UI. If the Console application doesn’t find a JWT token in the browser session storage, it redirects the user’s browser to the Open ID Connect (OIDC) provider, Dex.

  2. Dex lists all defined Identity Provider connectors to the user. The user selects the Identity Provider to authenticate with. After successful authentication, the browser is redirected back to the OIDC provider which issues a JWT token to the user. After obtaining the token, the browser is redirected back to the Console UI. The Console UI stores the token in the Session Storage and uses it for all subsequent requests.

  3. The Authorization Proxy validates the JWT token passed in the Authorization Bearer request header. It extracts the user and group details, the requested resource path, and the request method from the token, and uses this data to build an attributes record.

  4. The Proxy sends the attributes record to the Kubernetes Authorization API. If the authorization fails, the flow ends with a 403 code response.

  5. If the authorization succeeds, the request is forwarded to the Kubernetes API Server.

NOTE: The Authorization Proxy can verify JWT tokens issued by Dex because Dex is registered as a trusted issuer through OIDC parameters during the Kyma installation.

Kubeconfig generator

The Kubeconfig generator is a proprietary tool that generates a kubeconfig file which allows the user to access the Kyma cluster through the Command Line Interface (CLI), and to manage the connected cluster within the permission boundaries of the user.

The Kubeconfig generator rewrites the ID token issued for the user by Dex into the generated kubeconfig file. The time to live (TTL) of the ID token is 8 hours, which effectively means that the TTL of the generated kubeconfig file is 8 hours as well.

The generator is a publicly exposed service. You can access it directly under the https://configurations-generator.{YOUR_CLUSTER_DOMAIN} address. The service requires a valid ID token issued by Dex to return a code 200 result.

Get the kubeconfig file and configure the CLI

Follow these steps to get the kubeconfig file and configure the CLI to connect to the cluster:

  1. Access the Console UI of your Kyma cluster.

  2. Click Administration.

  3. Click the Download config button to download the kubeconfig file to a selected location on your machine.

  4. Open a terminal window.

  5. Export the KUBECONFIG environment variable to point to the downloaded kubeconfig. Run this command:


export KUBECONFIG={KUBECONFIG_FILE_PATH}

NOTE: Drag and drop the kubeconfig file in the terminal to easily add the path of the file to the export KUBECONFIG command you run.

  6. Run kubectl cluster-info to check if the CLI is connected to the correct cluster.

NOTE: Exporting the KUBECONFIG environment variable works only in the context of the given terminal window. If you close the window in which you exported the variable, or if you switch to a new terminal window, you must export the environment variable again to connect the CLI to the desired cluster.

Alternatively, get the kubeconfig file by sending a GET request with a valid ID token issued for the user to the /kube-config endpoint of the https://configurations-generator.{YOUR_CLUSTER_DOMAIN} service. For example:


curl https://configurations-generator.{YOUR_CLUSTER_DOMAIN}/kube-config -H "Authorization: Bearer {VALID_ID_TOKEN}"

GraphQL

Kyma uses a custom [GraphQL]{.underline} implementation in the Console Backend Service and deploys an RBAC-based logic to control the access to the GraphQL endpoint. All calls to the GraphQL endpoint require a valid Kyma token for authentication.

The authorization in GraphQL uses RBAC, which means that:

  • All of the Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings that you create and assign are effective and give the same permissions when users interact with the cluster resources both through the CLI and the GraphQL endpoints.

  • To give users access to specific queries you must create appropriate Roles and bindings in your cluster.

The implementation assigns GraphQL actions to specific Kubernetes verbs:

GraphQL action Kubernetes verb(s)


query get (for a single resource), list (for multiple resources)
mutation create, delete
subscription watch

NOTE: Due to the nature of Kubernetes, you can secure specific resources specified by their name only for queries and mutations. Subscriptions work only with entire resource groups, such as kinds, and therefore don't allow for such a level of granularity.

Available GraphQL actions

To access cluster resources through GraphQL, an action securing the given resource must be defined and implemented in the cluster. See the [GraphQL schema]{.underline} file for the list of actions implemented in every Kyma cluster by default.

Secure a defined GraphQL action

This is an example GraphQL action implemented in Kyma out of the box.


IDPPreset(name: String!): IDPPreset @HasAccess(attributes: {resource: "IDPPreset", verb: "get", apiGroup: "authentication.kyma-project.io", apiVersion: "v1alpha1"})

This query secures the access to [IDPPreset]{.underline} custom resources with specific names. To access it, the user must be bound to a role that allows access to the get verb for the idppresets resources in the authentication.kyma-project.io API group.

To allow access specifically to the example query, create this RBAC role in the cluster and bind it to a user or a client:


apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kyma-idpp-query-example
rules:
- apiGroups: ["authentication.kyma-project.io"]
  resources: ["idppresets"]
  verbs: ["get"]
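To bind this Role, a RoleBinding along these lines could be used (a sketch; the binding name and {USER_EMAIL} are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kyma-idpp-query-example-binding   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kyma-idpp-query-example
subjects:
- kind: User
  name: "{USER_EMAIL}"
  apiGroup: rbac.authorization.k8s.io
```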

NOTE: To learn more about RBAC authorization in a Kubernetes cluster, read [this]{.underline} document.

GraphQL request flow

This diagram illustrates the request flow for the Console Backend Service which uses a custom [GraphQL]{.underline} implementation:

  1. The user sends a request with an ID token to the GraphQL application.

  2. The GraphQL application validates the user token and extracts user data required to perform [Subject Access Review]{.underline} (SAR).

  3. The [Kubernetes API Server]{.underline} performs SAR.

  4. Based on the results of SAR, the Kubernetes API Server informs the GraphQL application whether the user can perform the requested [GraphQL action]{.underline}.

  5. Based on the information provided by the Kubernetes API Server, the GraphQL application returns an appropriate response to the user.

NOTE: Read [this]{.underline} document to learn more about the custom GraphQL implementation in Kyma.

TLS in Tiller

Kyma comes with a custom installation of [Tiller]{.underline} which secures all incoming traffic with TLS certificate verification. To enable communication with Tiller, you must save the client certificate, key, and the cluster Certificate Authority (CA) to [Helm Home]{.underline}.

Saving the client certificate, key, and CA to [Helm Home]{.underline} is a manual step on cluster deployments. When you install Kyma locally, the run.sh script handles this process.

Additionally, you must add the --tls flag to every Helm command. If you don’t save the required certificates in Helm Home, or you don’t include the --tls flag when you run a Helm command, you get this error:


Error: transport is closing

Add certificates to Helm Home

To get the client certificate, key, and the cluster CA and add them to [Helm Home]{.underline}, run these commands:


kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.ca\.crt']}" | base64 --decode > "$(helm home)/ca.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.crt']}" | base64 --decode > "$(helm home)/cert.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.key']}" | base64 --decode > "$(helm home)/key.pem";

CAUTION: All certificates are saved to Helm Home under the same, default path. When you save certificates of multiple clusters to Helm Home, one set of certificates overwrites the ones that already exist in Helm Home. As a result, you must save the cluster certificate set to Helm Home every time you switch the cluster context you work in.

Development

To connect to the Tiller server, for example using the [Helm GO library]{.underline}, mount the Helm client certificates into the application you want to connect. These certificates are stored as a Kubernetes Secret.

To get this Secret, run:


kubectl get secret -n kyma-installer helm-secret

Additionally, those secrets are also available as overrides during Kyma installation:

Override Description


global.helm.ca.crt Certificate Authority for the Helm client
global.helm.tls.crt Client certificate for the Helm client
global.helm.tls.key Client certificate key for the Helm client

Roles in Kyma

Kyma uses roles and groups to manage access in the cluster. Every cluster comes with five predefined roles which give the assigned users different levels of permissions suitable for different purposes. These roles are defined as ClusterRoles and use the Kubernetes mechanism of aggregation, which allows you to combine multiple ClusterRoles into a single ClusterRole. Use the aggregation mechanism to efficiently manage access to Kubernetes and Kyma-specific resources.

NOTE: Read [this]{.underline} Kubernetes documentation to learn more about the aggregation mechanism used to define Kyma roles.
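The aggregation mechanism itself can be sketched with a generic example (the role names and the aggregate-to-monitoring label are illustrative, not the actual Kyma role definitions):

```yaml
# An aggregated ClusterRole: its rules list is filled in automatically
# with the rules of every ClusterRole matching the selector below.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring                # hypothetical aggregated role
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-monitoring: "true"
rules: []
---
# A ClusterRole that contributes its rules to the aggregated role above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-endpoints
  labels:
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
```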

You can assign any of the predefined roles to a user or to a group of users in the context of the entire cluster, or of a specific Namespace.

The predefined roles, arranged in the order of increasing access level, are:

Role Description


kyma-essentials The basic role required to allow the users to see the Console UI of the cluster. This role doesn’t give the user rights to modify any resources.
kyma-view The role for listing Kubernetes and Kyma-specific resources.
kyma-edit The role for editing Kyma-specific resources.
kyma-developer The role created for developers who build implementations using Kyma. It allows you to list and edit Kubernetes and Kyma-specific resources.
kyma-admin The role with the highest permission level which gives access to all Kubernetes and Kyma resources and components with administrative rights.

NOTE: To learn more about the default roles and how they are constructed, see [this]{.underline} file.

Group

The groups.authentication.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format that represents user groups available in the ID provider in the Kyma cluster. To get the up-to-date CRD and show the output in the yaml format, run this command:


kubectl get crd groups.authentication.kyma-project.io -o yaml

Sample custom resource

This is a sample CR that represents a user group available in the ID provider in the Kyma cluster.


apiVersion: authentication.kyma-project.io/v1alpha1
kind: Group
metadata:
  name: "sample-group"
spec:
  name: "admins"
  idpName: "github"
  description: "'admins' represents the group of users with administrative privileges in the organization."

This table analyses the elements of the sample CR and the information it contains:

Field Mandatory Description


metadata.name YES Specifies the name of the CR.
spec.name YES Specifies the name of the group.
spec.idpName YES Specifies the name of the ID provider in which the group exists.
spec.description NO Description of the group available in the ID provider.

IDPPreset

The idppresets.authentication.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format that represents presets of the Identity Provider configuration used to secure API through the Console UI. Presets are a convenient way to configure the authentication section in the API custom resource.

To get the up-to-date CRD and show the output in the yaml format, run this command:


kubectl get crd idppresets.authentication.kyma-project.io -o yaml

Sample custom resource

This is a sample CR used to create an IDPPreset:


apiVersion: authentication.kyma-project.io/v1alpha1
kind: IDPPreset
metadata:
  name: "sample-idppreset"
spec:
  issuer: https://example.com
  jwksUri: https://example.com/keys

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Parameter Mandatory Description


metadata.name YES Specifies the name of the CR.
spec.issuer YES Specifies the issuer of the JWT tokens used to access the services.
spec.jwksUri YES Specifies the URL of the OpenID Provider’s public key set to validate the signature of the JWT token.

Usage in the UI

The issuer and jwksUri fields originate from the [Api CR]{.underline} specification. In most cases, these values are reused many times. Use the IDPPreset CR to store these details in a single object and reuse them in a convenient way. In the UI, the IDPPreset CR allows you to choose a preset with details of a specific identity provider from the drop-down menu instead of entering them manually every time you expose a secured API. Apart from consuming IDPPresets, you can also manage them in the Console UI. To create and delete IDPPresets, select IDP Presets from the Integration section.

Related resources and components

These components use this CR:

Name Description


IDP Preset Generates a Go client which allows components and tests to create, delete, or get IDP Preset resources.
Console Backend Service Enables the IDP Preset management with GraphQL API.

Update TLS certificate

The TLS certificate is a vital security element. Follow this tutorial to update the TLS certificate in Kyma.

NOTE: This procedure can interrupt the communication between your cluster and the outside world for a limited period of time.

Prerequisites

  • New TLS certificates

  • Kyma administrator access

Steps

  1. Export the new TLS certificate and key as environment variables. Run:

export KYMA_TLS_CERT=$(cat {NEW_CERT_PATH})
export KYMA_TLS_KEY=$(cat {NEW_KEY_PATH})

  2. Update the Ingress Gateway certificate. Run:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: istio-ingressgateway-certs
  namespace: istio-system
data:
  tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
  tls.key: $(echo "$KYMA_TLS_KEY" | base64)
EOF

  3. Update the kyma-system Namespace certificate:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: ingress-tls-cert
  namespace: kyma-system
data:
  tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
EOF

  4. Update the kyma-integration Namespace certificate:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: ingress-tls-cert
  namespace: kyma-integration
data:
  tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
EOF

  5. Restart the Ingress Gateway Pod to apply the new certificate:

kubectl delete pod -l app=istio-ingressgateway -n istio-system

  6. Restart the Pods in the kyma-system Namespace to apply the new certificate:

kubectl delete pod -l tlsSecret=ingress-tls-cert -n kyma-system

  7. Restart the Pods in the kyma-integration Namespace to apply the new certificate:

kubectl delete pod -l tlsSecret=ingress-tls-cert -n kyma-integration

Manage static users in Dex

Create a new static user

To create a static user in Dex, create a Secret with the dex-user-config label set to true. Run:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: {SECRET_NAME}
  namespace: {SECRET_NAMESPACE}
  labels:
    "dex-user-config": "true"
data:
  email: {BASE64_USER_EMAIL}
  username: {BASE64_USERNAME}
  password: {BASE64_USER_PASSWORD}
type: Opaque
EOF

NOTE: If you don’t specify the Namespace in which you want to create the Secret, the system creates it in the default Namespace.

The following table describes the fields that are mandatory to create a static user. If you don’t include any of these fields, the user is not created.

Field Description


data.email Base64-encoded email address used to sign in to the Console UI. Must be unique.
data.username Base64-encoded username displayed in the Console UI.
data.password Base64-encoded user password. There are no specific requirements regarding password strength, but it is recommended to use a password that is at least eight characters long.
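The data values must be Base64-encoded before you place them in the Secret. A minimal sketch of preparing them in the shell (the email, username, and password shown are placeholders):

```shell
# Placeholder credentials; replace them with real values.
# `echo -n` prevents a trailing newline from being encoded into the value.
BASE64_USER_EMAIL=$(echo -n "admin@kyma.local" | base64)
BASE64_USERNAME=$(echo -n "admin" | base64)
BASE64_USER_PASSWORD=$(echo -n "example-password" | base64)

# Decoding restores the original string.
echo "$BASE64_USER_EMAIL" | base64 -d
```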

Create the Secrets in the cluster before Dex is installed. The Dex init-container with the configuration tool generates user configuration data based on properly labeled Secrets, and adds the data to the ConfigMap.

If you want to add a new static user after Dex is installed, restart the Dex Pod. This creates a new Pod with an updated ConfigMap.

Bind a user to a Role or a ClusterRole

A newly created static user has no access to any resources of the cluster because no Role or ClusterRole is bound to it. By default, Kyma comes with the following ClusterRoles:

  • kyma-admin: gives full admin access to the entire cluster

  • kyma-edit: gives full access to all Kyma-managed resources

  • kyma-developer: gives full access to Kyma-managed resources and basic Kubernetes resources

  • kyma-view: allows viewing and listing all of the resources of the cluster

  • kyma-essentials: gives a minimal set of view access rights required to use the Kyma Console

To bind a newly created user to the kyma-view ClusterRole, run this command:

kubectl create clusterrolebinding {BINDING_NAME} --clusterrole=kyma-view --user={USER_EMAIL}

To check if the binding is created, run:

kubectl get clusterrolebinding {BINDING_NAME}

Add an Identity Provider to Dex

Add external, OpenID Connect compliant authentication providers to Kyma using [Dex connectors]{.underline}. Follow this tutorial to add a GitHub connector and use it to authenticate users in Kyma.

NOTE: Groups in GitHub are represented as teams. See [this]{.underline} document to learn how to manage teams in GitHub.

Prerequisites

To add a GitHub connector to Dex, [register]{.underline} a new OAuth application in GitHub. Set the authorization callback URL to https://dex.kyma.local/callback. After you complete the registration, [request]{.underline} organization approval.

NOTE: To authenticate in Kyma using GitHub, the user must be a member of a GitHub [organization]{.underline} that has at least one [team]{.underline}.

Configure Dex

Register the GitHub Dex connector by editing the dex-config-map.yaml ConfigMap file located in the kyma/resources/dex/templates directory. Follow this template:

connectors:
- type: github
  id: github
  name: GitHub
  config:
    clientID: {GITHUB_CLIENT_ID}
    clientSecret: {GITHUB_CLIENT_SECRET}
    redirectURI: https://dex.kyma.local/callback
    orgs:
    - name: {GITHUB_ORGANIZATION}

This table explains the placeholders used in the template:

Placeholder Description


GITHUB_CLIENT_ID Specifies the application’s client ID.
GITHUB_CLIENT_SECRET Specifies the application’s client Secret.
GITHUB_ORGANIZATION Specifies the name of the GitHub organization.

Configure authorization rules

To bind GitHub teams to default Kyma roles, add the bindings section to [this]{.underline} file. Follow this template:

bindings:
  kymaAdmin:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_A}"
  kymaView:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_B}"

TIP: You can bind GitHub teams to any of the five predefined Kyma roles. Use these aliases: kymaAdmin, kymaView, kymaDeveloper, kymaEdit, or kymaEssentials. To learn more about the predefined roles, read [this]{.underline} document.

This table explains the placeholders used in the template:

Placeholder Description


GITHUB_ORGANIZATION Specifies the name of the GitHub organization.
GITHUB_TEAM_A Specifies the name of the GitHub team to bind to the kyma-admin role.
GITHUB_TEAM_B Specifies the name of the GitHub team to bind to the kyma-view role.

Overview

The Service Catalog groups reusable, integrated services from all Service Brokers registered in Kyma. Its purpose is to provide an easy way for Kyma users to access services that the Service Brokers manage and use them in their applications.

Because Kyma runs on Kubernetes, you can easily instantiate a service that a third party provides and maintains, such as a database, and consume it from Kyma without extensive knowledge of how to cluster such a datastore service or responsibility for its upgrades and maintenance. You can also easily provision an instance of the software offering that a Service Broker registered in Kyma exposes, and bind it to an application running in the Kyma cluster.

You can perform the following operations in the Service Catalog:

  • Expose the consumable services by listing them with all the details, including the documentation and the consumption plans.

  • Consume the services by provisioning them in a given Namespace.

  • Bind the services to the applications through Secrets.

NOTE: Kyma uses the Service Catalog based on the one provided by Kubernetes. Kyma also supports the experimental version of the Service Catalog without api-server and etcd. Read this document for more information.

Architecture

The diagram and steps describe the Service Catalog workflow and the roles of specific cluster and Namespace-wide resources in this process:

  1. The Kyma installation results in the registration of the default Service Brokers in the Kyma cluster. The Kyma administrator can manually register other ClusterServiceBrokers in the Kyma cluster. The Kyma user can also register a Service Broker in a given Namespace.

  2. Inside the cluster, each ClusterServiceBroker exposes services that are ClusterServiceClasses in their different variations called ClusterServicePlans. Similarly, the ServiceBroker registered in a given Namespace exposes ServiceClasses and ServicePlans only in this specific Namespace.

  3. In the Console UI or CLI, the Kyma user lists all exposed cluster-wide and Namespace-specific services and requests to create instances of those services in the Namespace.

  4. The Kyma user creates bindings to the ServiceInstances to allow the given applications to access the provisioned services.

Resources

This document includes an overview of resources that the Kyma Service Catalog provides.

NOTE: The “Cluster” prefix in front of resources means they are cluster-wide. Resources without that prefix refer to the Namespace scope.

  • ClusterServiceBroker is an endpoint for a set of managed services that a third party offers and maintains.

  • ClusterServiceClass is a managed service exposed by a given ClusterServiceBroker. When a cluster administrator registers a new Service Broker in the Service Catalog, the Service Catalog controller obtains new services exposed by the Service Broker and renders them in the cluster as ClusterServiceClasses. A ClusterServiceClass is synonymous with a service in the Service Catalog.

  • ClusterServicePlan is a variation of a ClusterServiceClass that offers different levels of quality, configuration options, and the cost of a given service. Contrary to the ClusterServiceClass, which is purely descriptive, the ClusterServicePlan provides technical information to the ClusterServiceBroker on this part of the service that the ClusterServiceBroker can expose.

  • ServiceBroker is any Service Broker registered in a given Namespace where it exposes ServiceClasses and ServicePlans that are available only in that Namespace.

  • ServiceClass is a Namespace-scoped representation of a ClusterServiceClass. Similarly to the ClusterServiceClass, it is synonymous with a service in the Service Catalog.

  • ServicePlan is a Namespace-scoped representation of a ClusterServicePlan.

  • ServiceInstance is a provisioned instance of a ClusterServiceClass to use in one or more cluster applications.

  • ServiceBinding is a link between a ServiceInstance and an application that cluster users create to request credentials or configuration details for a given ServiceInstance.

  • Secret is a basic resource to transfer credentials or configuration details that the application uses to consume a ServiceInstance. The service binding process leads to the creation of a Secret.

  • ServiceBindingUsage is a Kyma custom resource that allows the ServiceBindingUsage controller to inject Secrets into a given application.

  • UsageKind is a Kyma custom resource that defines which resources can be bound with the ServiceBinding and how to bind them.

Provisioning and binding

Provisioning a service means creating an instance of a service. When you consume a specific ClusterServiceClass or a ServiceClass, and the system provisions a ServiceInstance, you need credentials for this service. To obtain credentials, create a ServiceBinding resource using the Service Catalog API. One instance can have numerous bindings to use in the application. When you raise a binding request, the system returns the credentials in the form of a Secret. The system creates a Secret in a given Namespace.

NOTE: The security in Kyma relies on the Kubernetes concept of a Namespace which is a security boundary. If the Secret exists in the Namespace, the administrator can inject it to any Deployment. The Service Broker cannot prevent other applications from consuming a created Secret. Therefore, to ensure a stronger level of isolation and security, use a dedicated Namespace and request separate bindings for each Deployment.

The Secret allows you to run the service successfully. However, each time you want an application to use the Secret, you must edit the Deployment's YAML definition to specify the Secret's usage. This manual editing is tedious and time-consuming. Kyma handles it by offering a custom resource called ServiceBindingUsage. This custom resource applies the Kubernetes PodPreset resource and allows you to enforce an automated flow in which the Binding Usage Controller injects ServiceBindings into a given Deployment or Function.
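To make the manual alternative concrete: without a ServiceBindingUsage, the application's Deployment would have to reference the binding's Secret itself, for example through envFrom. This is an illustrative sketch with hypothetical names; the ServiceBindingUsage controller automates exactly this kind of wiring:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-client               # hypothetical application
spec:
  selector:
    matchLabels:
      app: redis-client
  template:
    metadata:
      labels:
        app: redis-client
    spec:
      containers:
      - name: app
        image: example/redis-client:latest
        envFrom:
        - secretRef:
            name: redis-instance-binding   # Secret created by the ServiceBinding
```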

Provisioning and binding flow

The diagram shows an overview of interactions between all resources related to Kyma provisioning and binding, and the reverting, deprovisioning, and unbinding operations.

[Image: Kyma provisioning and binding]

The process of provisioning and binding invokes the creation of three custom resources:

  • ServiceInstance

  • ServiceBinding

  • ServiceBindingUsage

The system allows you to create these custom resources in any order, but within a timeout period.

When you invoke the deprovisioning and unbinding actions, the system deletes all three custom resources. Similar to the creation process dependencies, the system allows you to delete ServiceInstance and ServiceBinding in any order, but within a timeout period. However, before you delete the ServiceBinding, make sure you remove the ServiceBindingUsage first. For more details, see the section on deleting a ServiceBinding.

Provision a service

To provision a service, create a ServiceInstance custom resource. Generally speaking, provisioning is a process in which the Service Broker creates a new instance of a service. The form and scope of this instance depends on the Service Broker.

[Image: Kyma provisioning]

Create a ServiceBinding

The Kyma binding operation consists of two phases:

  1. The system gathers the information necessary to connect to the ServiceInstance and authenticate it. The Service Catalog handles this phase directly, without the use of any additional Kyma custom resources.

  2. The system must make the information it collected available to the application. Since the Service Catalog does not provide this functionality, you must create a ServiceBindingUsage custom resource.

[Image: Kyma binding]

TIP: You can create the ServiceBinding and ServiceBindingUsage resources at the same time.
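A minimal sketch of the ServiceBinding itself, assuming a ServiceInstance named redis-instance already exists in the production Namespace (both names are hypothetical):

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: redis-instance-binding
  namespace: production
spec:
  instanceRef:
    name: redis-instance           # the ServiceInstance to bind to
```

The binding produces a Secret in the same Namespace, which a ServiceBindingUsage can then inject into the application.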

Define other types of resources

The UsageKind is a cluster-wide custom resource which allows you to bind a ServiceInstance to any kind of resource. By default, Kyma provides two UsageKinds which enable binding either to a Deployment or Function. You can add more UsageKinds if you want to bind your ServiceInstance to other types of resources. The UsageKind contains information on how the binding to these custom resources is conducted. The ServiceBindingUsage uses this information to inject Secrets to the application.

[Image: Kyma UsageKind]

Delete a ServiceBinding

Kyma unbinding can be achieved in two ways:

  • Delete the ServiceBindingUsage. The Service Binding Usage Controller deletes the Secret injection, but the Secret itself still exists in the Namespace.

  • Delete the ServiceBinding. It deletes the Secret and triggers the deletion of all related ServiceBindingUsages.

[Image: Kyma unbinding]

Deprovision a service

To deprovision a given service, delete the ServiceInstance custom resource. As part of this operation, the Service Broker deletes any resource created during the provisioning. When the process completes, the service becomes unavailable.

[Image: Kyma deprovisioning]

NOTE: You can deprovision a service only if no corresponding ServiceBinding for a given ServiceInstance exists.

Service Catalog backup and restore

In the production setup, the Service Catalog uses the etcd database cluster which is defined in the Service Catalog [etcd-stateful]{.underline} sub-chart. The etcd backup operator executes the backup procedure.

Backup

To execute the backup process, set the following values in the [core]{.underline} chart:

Property name Description


global.etcdBackup.enabled If set to true, the [etcd-operator]{.underline} chart and the Service Catalog backup sub-chart install the CronJob which periodically executes the etcd backup application. The etcd-operator also creates a Secret with the storage-account and storage-key keys. For more information on how to configure the backup CronJob, see the etcd backup documentation.
global.etcdBackup.containerName The Azure Blob Storage (ABS) container to store the backup.
etcd-operator.backupOperator.abs.storageAccount The name of the storage account for the ABS. It stores the value for the storage-account Secret key.
etcd-operator.backupOperator.abs.storageKey The key value of the storage account for the ABS. It stores the value for the storage-key Secret key.

NOTE: If you set the storageAccount, storageKey, and containerName properties, you must also set global.etcdBackup.enabled to true.

Restore

Follow these steps to restore the etcd cluster from the existing backup.

  1. Export the ABS_PATH environment variable with the path to the last successful backup file.

export ABS_PATH=$(kubectl get cm -n kyma-system sc-recorded-etcd-backup-data -o=jsonpath='{.data.abs-backup-file-path-from-last-success}')
export BACKUP_FILE_NAME=etcd.backup

  2. Download the backup to the local workstation using the portal or the Azure CLI. Set the downloaded file path:

export BACKUP_FILE_NAME=/path/to/downloaded/file

  3. Copy the backup file to every running Pod of the StatefulSet.

for i in {0..2}; do
  kubectl cp ./$BACKUP_FILE_NAME kyma-system/service-catalog-etcd-stateful-$i:/$BACKUP_FILE_NAME
done

  4. Restore the backup on every Pod of the StatefulSet.

for i in {0..2}; do
  remoteCommand="etcdctl snapshot restore /$BACKUP_FILE_NAME "
  remoteCommand+="--name service-catalog-etcd-stateful-$i --initial-cluster "
  remoteCommand+="service-catalog-etcd-stateful-0=https://service-catalog-etcd-stateful-0.service-catalog-etcd-stateful.kyma-system.svc.cluster.local:2380,"
  remoteCommand+="service-catalog-etcd-stateful-1=https://service-catalog-etcd-stateful-1.service-catalog-etcd-stateful.kyma-system.svc.cluster.local:2380,"
  remoteCommand+="service-catalog-etcd-stateful-2=https://service-catalog-etcd-stateful-2.service-catalog-etcd-stateful.kyma-system.svc.cluster.local:2380 "
  remoteCommand+="--initial-cluster-token etcd-cluster-1 "
  remoteCommand+="--initial-advertise-peer-urls https://service-catalog-etcd-stateful-$i.service-catalog-etcd-stateful.kyma-system.svc.cluster.local:2380"
  kubectl exec service-catalog-etcd-stateful-$i -n kyma-system -- sh -c "rm -rf service-catalog-etcd-stateful-$i.etcd"
  kubectl exec service-catalog-etcd-stateful-$i -n kyma-system -- sh -c "rm -rf /var/run/etcd/backup.etcd"
  kubectl exec service-catalog-etcd-stateful-$i -n kyma-system -- sh -c "$remoteCommand"
  kubectl exec service-catalog-etcd-stateful-$i -n kyma-system -- sh -c "mv -f service-catalog-etcd-stateful-$i.etcd /var/run/etcd/backup.etcd"
  kubectl exec service-catalog-etcd-stateful-$i -n kyma-system -- sh -c "rm $BACKUP_FILE_NAME"
done

  5. Delete the old Pods.

kubectl delete pod service-catalog-etcd-stateful-0 service-catalog-etcd-stateful-1 service-catalog-etcd-stateful-2 -n kyma-system

Experimental features

The Service Catalog requires its own instance of api-server and etcd, which increases the complexity of the cluster configuration and maintenance costs. In case of api-server downtime, all Service Catalog resources are unavailable. For this reason, Kyma developers contribute to the Service Catalog project to remove the dependency on these external components and replace them with a native Kubernetes solution - CustomResourceDefinitions (CRDs).

Enable CRDs

To enable the CRDs feature in the Service Catalog, override the service-catalog-apiserver.enabled and service-catalog-crds.enabled parameters in the installation file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: service-catalog-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: service-catalog
    kyma-project.io/installation: ""
data:
  etcd-stateful.etcd.resources.limits.memory: 256Mi
  etcd-stateful.replicaCount: "1"
  service-catalog-apiserver.enabled: "false"
  service-catalog-crds.enabled: "true"

ServiceBindingUsage

The servicebindingusages.servicecatalog.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to inject Secrets to the application. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd servicebindingusages.servicecatalog.kyma-project.io -o yaml

Sample custom resource

This is a sample resource in which the ServiceBindingUsage injects a Secret associated with the redis-instance-binding ServiceBinding to the redis-client Deployment in the production Namespace. This example has the conditions.status field set to true, which means that the ServiceBinding injection is successful. If this field is set to false, the message and reason fields appear.

apiVersion: servicecatalog.kyma-project.io/v1alpha1
kind: ServiceBindingUsage
metadata:
  name: redis-client-binding-usage
  namespace: production
  ownerReferences:
  - apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceBinding
    name: redis-instance-binding
    uid: 65cc140a-db6a-11e8-abe7-0242ac110023
spec:
  serviceBindingRef:
    name: redis-instance-binding
  usedBy:
    kind: deployment
    name: redis-client
  parameters:
    envPrefix:
      name: "pico-bello"
status:
  conditions:
  - lastTransitionTime: 2018-06-26T10:52:05Z
    lastUpdateTime: 2018-06-26T10:52:05Z
    status: "True"
    type: Ready

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Parameter Mandatory Description


metadata.name YES Specifies the name of the CR.
metadata.namespace YES Specifies the Namespace in which the CR is created.
metadata.ownerReferences YES Contains an ownerReference to the binding specified at spec.serviceBindingRef.name field if the binding exists.
spec.serviceBindingRef.name YES Specifies the name of the ServiceBinding.
spec.usedBy YES Specifies the application into which the Secret is injected.
spec.usedBy.kind YES Specifies the name of the UsageKind custom resource.
spec.usedBy.name YES Specifies the name of the application.
spec.parameters.envPrefix NO Defines the prefix of the environment variables that the ServiceBindingUsage injects. The prefixing is disabled by default.
spec.parameters.envPrefix.name YES Specifies the name of the prefix. This field is mandatory if envPrefix is specified.
status.conditions NO Specifies the state of the ServiceBindingUsage.
status.conditions.lastTransitionTime NO Specifies the time when the Service Binding Usage Controller processes the ServiceBindingUsage for the first time or when the status.conditions.status field changes.
status.conditions.lastUpdateTime NO Specifies the time of the last ServiceBindingUsage condition update.
status.conditions.status NO Specifies whether the status of the status.conditions.type field is true or false.
status.conditions.type NO Defines the type of the condition. The value of this field is always Ready.
status.conditions.message NO Describes in a human-readable way why the ServiceBinding injection has failed.
status.conditions.reason NO Specifies a unique, one-word, CamelCase reason for the condition's last transition.

Related resources and components

These are the resources related to this CR:

Custom resource Description


UsageKind Provides information on where to inject Secrets.
ServiceBinding Provides Secrets to inject.

These components use this CR:

Component Description


Service Binding Usage Controller Reacts to every action of creating, updating, or deleting ServiceBindingUsages in all Namespaces, and uses ServiceBindingUsage data to inject binding.
Console Backend Service Exposes the given CR to the Console UI. It also allows you to create and delete a ServiceBindingUsage.

UsageKind

The usagekinds.servicecatalog.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define which resources can be bound with the ServiceBinding and how to bind them. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd usagekinds.servicecatalog.kyma-project.io -o yaml

Sample custom resource

This is a sample resource that allows you to bind a given resource with the ServiceBinding. This example has a resource section specified as function. You can adjust this section to point to any other kind of resource.

apiVersion: servicecatalog.kyma-project.io/v1alpha1
kind: UsageKind
metadata:
  name: function
spec:
  displayName: Function
  resource:
    group: kubeless.io
    kind: function
    version: v1beta1
  labelsPath: spec.deployment.spec.template.metadata.labels

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Parameter Mandatory Description


metadata.name YES Specifies the name of the CR.
spec.displayName YES Provides a human-readable name of the UsageKind.
spec.resource YES Specifies a resource which is bound with the ServiceBinding. The target resource is specified by its resource group, kind, and version.
spec.resource.group YES Specifies the group of the resource.
spec.resource.kind YES Specifies the kind of the resource.
spec.resource.version YES Specifies the version of the resource.
spec.labelsPath YES Specifies a path to the key that contains labels which are later injected into Pods.
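For comparison with the Function sample above, a UsageKind targeting Deployments would point labelsPath at the Pod template labels. This is an illustrative sketch; the resource group and version shown are assumptions:

```yaml
apiVersion: servicecatalog.kyma-project.io/v1alpha1
kind: UsageKind
metadata:
  name: deployment
spec:
  displayName: Deployment
  resource:
    group: apps                    # assumed group of the Deployment resource
    kind: deployment
    version: v1beta1               # assumed version
  labelsPath: spec.template.metadata.labels
```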

Related resources and components

These are the resources related to this CR:

Custom resource Description


ServiceBindingUsage Contains the reference to the UsageKind.

These components use this CR:

Component Description


Service Binding Usage Controller Uses the UsageKind spec.resource and spec.labelsPath parameters to find a resource and a path to which it should inject Secrets.
Console Backend Service Exposes the given CR to the Console UI.

CLI reference

Management of the Service Catalog is based on Kubernetes resources and the custom resources specifically defined for Kyma. Manage all of these resources through kubectl.

Details

This section describes the resource names to use in the kubectl command line, the command syntax, and examples of use.

Resource types

Service Catalog operations use the following resources:

Singular name Plural name


clusterservicebroker clusterservicebrokers
clusterserviceclass clusterserviceclasses
clusterserviceplan clusterserviceplans
secret secrets
servicebinding servicebindings
servicebindingusage servicebindingusages
servicebroker servicebrokers
serviceclass serviceclasses
serviceinstance serviceinstances
serviceplan serviceplans

Syntax

Follow the kubectl syntax, kubectl {command} {type} {name} {flags}, where:

  • {command} is any command, such as describe.

  • {type} is a resource type, such as clusterserviceclass.

  • {name} is the name of a given resource type. Use {name} to make the command return the details of a given resource.

  • {flags} specifies the scope of the information. For example, use flags to define the Namespace from which to get the information.

Examples

The following examples show how to create a ServiceInstance, how to get a list of ClusterServiceClasses and a list of ClusterServiceClasses with human-readable names, a list of ClusterServicePlans, and a list of all ServiceInstances.

  • Create a ServiceInstance using the example of the Redis ServiceInstance for the 0.1.40 version of the Service Catalog:

cat <<EOF | kubectl create -f -
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-instance
  namespace: stage
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: micro
  parameters:
    "imagePullPolicy": "Always"
EOF

  • Get the list of all ClusterServiceClasses:

kubectl get clusterserviceclasses

  • Get the list of all ClusterServiceClasses and their human-readable names:

kubectl get clusterserviceclasses -o=custom-columns=NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName

  • Get the list of all ClusterServicePlans and associated ClusterServiceClasses:

kubectl get clusterserviceplans -o=custom-columns=NAME:.metadata.name,EXTERNAL\ NAME:.spec.externalName,EXTERNAL\ SERVICE\ CLASS:.spec.clusterServiceClassRef

  • Get the list of all ServiceInstances from all Namespaces:

kubectl get serviceinstances --all-namespaces

Register a broker in the Service Catalog

This tutorial shows you how to register a broker in the Service Catalog. The broker can be either a Namespace-scoped ServiceBroker or a cluster-wide ClusterServiceBroker. Follow this guide to register a cluster-wide or Namespace-scoped version of the sample UPS Broker.

Prerequisites

Steps

  1. Clone the [service-catalog]{.underline} repository:

git clone https://github.com/kubernetes-incubator/service-catalog.git

  2. Check out one of the official tags. For example:

git fetch --all --tags --prune

git checkout tags/v0.1.39 -b v0.1.39

  3. Create the ups-broker Namespace:

kubectl create namespace ups-broker

  4. Run this command to install the chart with the ups-broker name in the ups-broker Namespace:

helm install ./charts/ups-broker --name ups-broker --namespace ups-broker

  5. Register a broker:
  • Run this command to register a ClusterServiceBroker:

kubectl create -f ./contrib/examples/walkthrough/ups-clusterservicebroker.yaml

  • To register the UPS Broker as a ServiceBroker in the ups-broker Namespace, run:

kubectl create -f ./contrib/examples/walkthrough/ups-servicebroker.yaml -n ups-broker

After you successfully register your ServiceBroker or ClusterServiceBroker, the Service Catalog periodically fetches services from this broker and creates ServiceClasses or ClusterServiceClasses from them.

  6. Check the status of your broker:
  • To check the status of your ClusterServiceBroker, run:

kubectl get clusterservicebrokers ups-broker -o jsonpath="{.status.conditions}"

  • To check the status of the ServiceBroker, run:

kubectl get servicebrokers ups-broker -n ups-broker -o jsonpath="{.status.conditions}"

The output looks as follows:

{
  "lastTransitionTime": "2018-10-26T12:03:32Z",
  "message": "Successfully fetched catalog entries from broker.",
  "reason": "FetchedCatalog",
  "status": "True",
  "type": "Ready"
}

  7. View Service Classes that this broker provides:
  • To check the ClusterServiceClasses, run:

kubectl get clusterserviceclasses

  • To check the ServiceClasses, run:

kubectl get serviceclasses -n ups-broker

These are the UPS Broker Service Classes:

NAME                                   EXTERNAL NAME
4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468   user-provided-service
5f6e6cf6-ffdd-425f-a2c7-3c9258ad2468   user-provided-service-single-plan
8a6229d4-239e-4790-ba1f-8367004d0473   user-provided-service-with-schemas
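The readiness check in step 6 can be scripted. Below is a minimal Python sketch, assuming you have captured the jsonpath output of kubectl into a string; broker_ready is a hypothetical helper name, not part of the Service Catalog tooling.

```python
import json

def broker_ready(conditions_json):
    """Return True if the broker conditions report type "Ready" with status "True"."""
    conditions = json.loads(conditions_json)
    # kubectl may print a single condition object or a list of conditions.
    if isinstance(conditions, dict):
        conditions = [conditions]
    return any(c.get("type") == "Ready" and c.get("status") == "True" for c in conditions)

# The condition shown in step 6:
sample = ('{"lastTransitionTime": "2018-10-26T12:03:32Z",'
          ' "message": "Successfully fetched catalog entries from broker.",'
          ' "reason": "FetchedCatalog", "status": "True", "type": "Ready"}')
print(broker_ready(sample))  # True
```

A loop around this check is a simple way to wait for a freshly registered broker before querying its Service Classes.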

Overview

UI Contracts are contracts between the Service Catalog views in the Kyma Console UI and the Open Service Broker API (OSBA) specification.

There are three types of OSBA fields:

  • Mandatory fields which are crucial to define

  • Optional fields which you can but do not have to define

  • Conventions which are proposed fields that can be passed in the metadata object

The Service Catalog is OSBA-compliant, which means that you can register a Service Class that has only the mandatory fields. However, it is recommended to provide more detailed Service Class definitions for better user experience.

In the Kyma Console UI, there are two types of views:

  • Catalog view

  • Instances view

Read the Catalog view and Instances view documents to:

  • Understand the contract mapping between the Kyma Console UI and the OSBA

  • Learn which fields are primary to define, to provide the best user experience

  • See which fields are used as fallbacks if you do not provide the primary ones

Catalog view

This document describes the mapping of OSBA service objects, plan objects, and conventions in the Kyma Console Catalog view.

Catalog page

These are the OSBA fields used in the main Catalog page:

Number | OSBA field | Fallbacks | Description
(1) | metadata.displayName | name*, id* | If the metadata.displayName, name, and id fields are all missing, the Service Class does not appear on the landing page.
(2) | metadata.providerDisplayName | - | If not provided, the UI does not display this information.
(3) | description* | - | If not provided, the UI does not display this information.
(4) | metadata.labels** | - | If not provided, the UI does not display any labels.
(5) | metadata.labels.local** and/or metadata.labels.showcase** | - | If not provided, it is not possible to choose a Basic Filter.
(6) | tags | - | If not provided, it is not possible to filter by Tag.
(7) | metadata.labels.connected-app** | - | If not provided, it is not possible to choose Connected Applications.
(8) | metadata.providerDisplayName | - | If not provided, it is not possible to filter by Provider.

*Fields with an asterisk are required OSBA attributes.

**metadata.labels is the custom object that is not defined in the OSBA metadata convention.

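The Fallbacks column above encodes a first-non-empty rule. As a minimal illustration (not the actual Console UI code), assuming a Service Class is available as a plain dict of its OSBA fields:

```python
def display_name(service_class):
    """Resolve the Catalog view title: metadata.displayName, then name, then id."""
    metadata = service_class.get("metadata") or {}
    # If all three fields are missing, the class does not appear on the landing page.
    return metadata.get("displayName") or service_class.get("name") or service_class.get("id")

print(display_name({"name": "user-provided-service", "id": "4f6e6cf6"}))  # user-provided-service
```

The same chain applies wherever the tables in this document list name* and id* as fallbacks for metadata.displayName.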

Catalog Details page

These are the OSBA fields used in the detailed Service Class view:

Number | OSBA field | Fallbacks | Description
(1) | metadata.displayName | name*, id* | -
(2) | metadata.providerDisplayName | - | If not provided, the UI does not display this information.
(3) | not related to OSBA | - | -
(4) | metadata.documentationUrl | - | If not provided, the documentation link does not appear.
(5) | metadata.supportUrl | - | If not provided, the support link does not appear.
(6) | tags | - | If not provided, the UI does not display tags.
(7) | metadata.longDescription | description* | If not provided, the General Information panel does not appear.
(8) | not related to OSBA | - | -

*Fields with an asterisk are required OSBA attributes.


Add to Namespace

These are the OSBA fields used in the Add to Namespace window:

Number | OSBA field | Fallbacks
(1) | plan.metadata.displayName | plan.name*, plan.id*
(2) | not related to OSBA | -
(3) | not related to OSBA | -

*Fields with an asterisk are required OSBA attributes.


Plan schema

A plan object in the OSBA can have the schemas field. The schema is used to generate a form that enables provisioning of the Service Class.

See the sample schema:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "properties": {
    "imagePullPolicy": {
      "default": "IfNotPresent",
      "enum": ["Always", "IfNotPresent", "Never"],
      "title": "Image pull policy",
      "type": "string"
    },
    "redisPassword": {
      "default": "",
      "format": "password",
      "description": "Redis password. Defaults to a random 10-character alphanumeric string.",
      "title": "Password (Defaults to a random 10-character alphanumeric string)",
      "type": "string"
    }
  },
  "type": "object"
}

This sample renders in the following way:

[Image: the provisioning form generated from the sample schema]

Follow these rules when you design schema objects:

  • If the field has a limited set of possible values, use the enum field. It renders as a dropdown menu, which prevents the user from making mistakes.

  • If the field is required for the Service Class, mark it as required. The UI blocks provisioning until all required fields are filled in.

  • Set a default value for a field whenever possible, as it makes provisioning faster.

  • If a field must be masked, such as a password field, use the format key with the password value.
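To illustrate how the default and enum keys drive the generated form, here is a minimal sketch that pre-fills missing provisioning parameters from the plan schema defaults and rejects values outside an enum. It assumes the schema is available as a plain dict; fill_defaults is a hypothetical helper, not part of the Service Catalog.

```python
def fill_defaults(schema, params):
    """Apply schema defaults to missing parameters and validate enum-constrained fields."""
    result = dict(params)
    for name, spec in schema.get("properties", {}).items():
        if name not in result and "default" in spec:
            result[name] = spec["default"]  # the form pre-fills this value
        if name in result and "enum" in spec and result[name] not in spec["enum"]:
            raise ValueError(f"{name} must be one of {spec['enum']}")
    return result

# A stripped-down version of the sample schema above:
schema = {
    "properties": {
        "imagePullPolicy": {"default": "IfNotPresent",
                            "enum": ["Always", "IfNotPresent", "Never"], "type": "string"},
        "redisPassword": {"default": "", "format": "password", "type": "string"},
    },
    "type": "object",
}
print(fill_defaults(schema, {}))  # {'imagePullPolicy': 'IfNotPresent', 'redisPassword': ''}
```

This mirrors what the generated form does for the user: defaults appear pre-filled, and enum fields cannot take values outside the dropdown.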

Instances View

This document describes the mapping of OSBA service objects, plan objects, and conventions in the Kyma Console Instances view.

Service Instances page

These are the OSBA fields used in the main Instances page:

Number | OSBA field | Fallbacks | Description
(1) | not related to OSBA | - | The name of the Service Instance, created during service provisioning.
(2) | metadata.displayName | name*, id* | If not provided, the UI does not display this information.
(3) | plan.metadata.displayName | plan.name*, plan.id* | If not provided, the UI does not display this information.
(4) | not related to OSBA | - | -
(5) | not related to OSBA | - | -

*Fields with an asterisk are required OSBA attributes.


Service Instance Details page

These are the OSBA fields used in the detailed Service Instance view:

Number | OSBA field | Fallbacks | Description
(1) | metadata.displayName | name*, id* | -
(2) | plan.metadata.displayName | plan.name*, plan.id* | -
(3) | metadata.documentationUrl | - | If not provided, the UI does not display this information.
(4) | metadata.supportUrl | - | If not provided, the UI does not display this information.
(5) | description* | - | If not provided, the UI does not display this information.

*Fields with an asterisk are required OSBA attributes.


Overview

A Service Broker is a server compatible with the Open Service Broker API specification. Each Service Broker registered in Kyma presents the services it offers to the Service Catalog and manages their lifecycle.

The Service Catalog lists all services that the Service Brokers offer. Use the Service Brokers to:

  • Provision and deprovision an instance of a service.

  • Create and delete a ServiceBinding to link a ServiceInstance to an application.

Each of the Service Brokers available in Kyma performs these operations in a different way. See the documentation of a given Service Broker to learn how it operates.
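Both operations are expressed as Kubernetes resources that the Service Catalog reconciles. The minimal sketch below shows the two manifests as Python dicts; the class and plan names come from the UPS Broker walkthrough, while the instance, binding, and Secret names are hypothetical.

```python
# Provisioning: a ServiceInstance referencing a ClusterServiceClass and plan.
service_instance = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceInstance",
    "metadata": {"name": "ups-instance", "namespace": "ups-broker"},
    "spec": {
        "clusterServiceClassExternalName": "user-provided-service",
        "clusterServicePlanExternalName": "default",
    },
}

# Binding: a ServiceBinding that links the instance to an application
# by materializing credentials into a Secret.
service_binding = {
    "apiVersion": "servicecatalog.k8s.io/v1beta1",
    "kind": "ServiceBinding",
    "metadata": {"name": "ups-binding", "namespace": "ups-broker"},
    "spec": {
        "instanceRef": {"name": "ups-instance"},  # points at the ServiceInstance above
        "secretName": "ups-credentials",          # credentials land in this Secret
    },
}
```

Deleting the ServiceBinding unbinds the application, and deleting the ServiceInstance triggers deprovisioning at the broker.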

The Kyma Service Catalog is currently integrated with the following Service Brokers:

You can also install these brokers using the Helm Broker’s bundles:

To get the bundles that the Helm Broker provides, go to the bundles repository. To build your own Service Broker, follow the Open Service Broker API specification. For details on how to register a sample Service Broker in the Service Catalog, see this tutorial.

NOTE: The Service Catalog has the Istio sidecar injected. To enable the communication between the Service Catalog and Service Brokers, either inject Istio sidecar into all brokers or disable mutual TLS authentication.

GCP Broker

NOTE: The Google Cloud Platform (GCP) Service Broker is in the experimental phase.

Google Cloud Platform (GCP) Service Broker is an implementation of the Open Service Broker (OSB) API hosted on GCP. It simplifies the delivery of GCP services to applications that run on Kyma. By creating GCP resources and managing their corresponding permissions, the Service Broker makes it easy to consume GCP services from within a Kubernetes cluster.

Kyma provides a Namespace-scoped Google Cloud Platform Service Broker. In each Namespace, you can configure the GCP Service Broker against a different GCP project. Install the GCP Service Broker by provisioning the Google Cloud Platform Service Broker class provided by the Helm Broker.

[Image: Service Catalog view without GCP Classes]

Once you provision the Google Cloud Platform Service Broker class, the GCP Service Classes become available in the Service Catalog view in the given Namespace.

[Image: Service Catalog view with GCP Classes]

For more information about provisioning the Google Cloud Platform Service Broker class, go to the service class overview in the Service Catalog UI.

Azure Service Broker

The Microsoft Azure Service Broker is an open-source, Open Service Broker-compatible API server that provisions managed services in the Microsoft Azure public cloud. Kyma provides a Namespace-scoped Azure Service Broker. In each Namespace, you can configure the Azure Service Broker against a different subscription. Install the Azure Service Broker by provisioning the Azure Service Broker class provided by the Helm Broker.

[Image: Azure Service Broker class]

Once you provision the Azure Service Broker class, the Azure Service Classes become available in the Service Catalog view in the given Namespace. The Azure Service Broker provides these Service Classes for use with the Service Catalog:

  • Azure SQL Database

  • Azure Database for MySQL

  • Azure Redis Cache

  • Azure Application Insights

  • Azure CosmosDB

  • Azure Event Hubs

  • Azure IoT Hub

  • Azure Key Vault

  • Azure SQL Database Failover Group

  • Azure Service Bus

  • Azure Storage

  • Azure Text Analytics

See the details of each Service Class and its specification in the Service Catalog UI. For more information about the Service Brokers, see this document.

NOTE: Kyma uses the Microsoft Azure Service Broker open source project. To ensure the best performance and stability of the product, Kyma uses a version of the Azure Service Broker that precedes the newest version released by Microsoft.

Wang Zixi (Jerry Wang), CSDN-certified blog expert in front-end frameworks, Node.js, and SAP.
Jerry Wang joined SAP Chengdu Research Institute in 2007 after earning a master's degree in Computer Science from the University of Electronic Science and Technology of China, and has worked there since. Jerry is an SAP Community mentor and an SAP China technology ambassador. In over 14 years of SAP product development, he has worked on standard products including SAP Business ByDesign, SAP CRM, SAP Cloud for Customer, SAP S/4HANA, and SAP Commerce Cloud.