Kubernetes: DevOps without the pain

We speak to our Chief Technology Officer, Nick Brook, on how we are building the best online platform for the Agrifood sector.

No-one is paying you for DevOps

Software only gets difficult when you put it in front of customers. Unfortunately that’s pretty much a necessity if you want to get paid, so what’s the best way to do it? For many years the solution involved essentially the same mechanism: take your code, bundle it up somehow, copy it across to a server somewhere, unpack it, and hope for the best.

Writing the scripts that do this safely, consistently and without breaking your precious live deployment is a difficult and time-consuming undertaking, so a number of offerings appeared that took this pain away (Heroku, AWS Beanstalk, App Engine, Firebase, etc.), making app deployment as simple as a single “git push”.

For simple applications with limited dependencies this is still the way to go. Customers aren’t paying you for your lovely DevOps: they’re paying you to reliably deliver a service. As such any time spent developing and maintaining DevOps systems is time you’re not spending improving your product and gaining market traction. So if you have a simple application with standard dependencies, then use any of the above solutions: they’ll all work, and frankly it doesn’t matter which one you pick.

But I’m just too complex

Some systems just don’t fit easily into these simplified frameworks:

  • Your applications may have complicated or “non-standard” system dependencies
  • You’ve designed your system as a set of self-contained microservices, communicating over APIs
  • You need fine-grained control of resource allocation and scaling

So if I’m not using one of the GitOps frameworks, I’m back to writing scripts. The tools here have changed over the years, and the servers have evolved from a large noisy thing in your office, to a large noisy thing in a big room somewhere in the world, to a fully virtual thing inside a large noisy thing somewhere in the world, but essentially it’s the same recipe: write scripts.

There are a number of problems with this approach — I’ll outline the main ones:

Script pain

DevOps scripts are hard to write, hard to maintain, and nearly impossible to test. Sure, there are lots of frameworks that try to take some of the pain away (e.g Chef, Puppet, Fabric), but you’ll still end up writing reams of code.

Broken builds and downtime

So you’ve packaged up your code and pushed it to a server. How do you ensure that when you unpack it and switch it on you’re not going to bring your app down in flames? Deployments really shouldn’t cause downtime, so to do this properly you’ll need to take each server offline, update the code, test it’s working somehow, bring it back online, and move onto the next server.

Repeatability and consistency

With a scripting approach it’s much harder to reproduce the same environment locally. It’s also much harder to guarantee that you know exactly what code is running on all your servers. If you need to bring a server offline for any reason, when you bring it back you’ll need to ensure that it’s running exactly the same code as all the other servers.

Rollback

So you’ve carefully deployed your code one server at a time and nothing has gone wrong, but 5 minutes later you notice a nasty bug that’s going to cause untold pain and misery. You either need to bake rollback into your “deploy and unpack” scripts, or repeat the whole deploy process with a previous version of your code.

Exploding servers

It happens — servers can break for a multitude of reasons, although to be fair it’s far more likely that it will be down to your code. Running out of memory or disk on a Linux box is pretty catastrophic: your server will either become completely unresponsive, slow to a crawl (Java Garbage Collect Sawtooth of Doom — I’m looking at you), or explode in flames.

A clean death is certainly preferable here: your server will just disappear from the load balancer, and life will carry on. The worst possible case is where your app is extremely unwell, but not sufficiently poorly that the load balancer will notice. This is your “super-spreader at the rave” situation, and you need to avoid it at all costs.

Resource inefficiency

It’s very hard with this approach to properly pack code into servers, as there’s no way to limit the resources needed by each service (e.g ram / cpu). The naive approach is just to fire up a new machine for each service: this tends to massively underutilise resources.

Containers to the rescue

I’ve written multiple variants of deploy scripts over the years, for various languages. They all tried to cope with the above problems, some more successfully than others. They were fiddly to write, difficult to maintain, and still left most of the above issues unsolved.

Luckily I don’t have to do it ever again: enter the Whale and the Wheel:

[Image: the Docker whale and the Kubernetes wheel]

A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. The whole works. Once you’ve created your image, you can run it anywhere. But isn’t this the same as a Virtual Machine? Well, sort of, but a Docker container is much more lightweight: it runs natively on the silicon rather than in an emulation layer, so it’s much more efficient. For a more detailed explanation head on over to the Docker website.

A Docker image is built using a Dockerfile. This is a set of instructions that tells Docker how to construct the image: here’s one of the simplest possible examples:

[Image: a minimal Dockerfile]
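The original post shows the Dockerfile as an image. A minimal version along the lines described below might look like this (the entry point, requirements file and app layout are assumptions for illustration):

```dockerfile
# Inherit from a pre-built minimal image: Alpine Linux with Python 3.7
FROM python:3.7-alpine

# Copy our code into the image and install its dependencies
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt

# Tell Docker how to run the app
CMD ["python", "app.py"]
```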

The important bit is the first line: Docker images inherit from other images. In this case some helpful person has built a minimal Alpine Linux containing Python 3.7: we just inherit from that, copy in our code, and tell it how to run the app. You can of course just build from another image (e.g Ubuntu), although your final image will be quite a bit larger.

Docker images are stored in repositories such as DockerHub which is essentially source control for images: that’s where Docker will find the “python:3.7-alpine” image above. As you’d expect there are a bunch of pre-built images for every conceivable application, and in true open source style the Dockerfile is always accessible if you want to see how they are built.

Tags are used to organise images: the above image is only tagged with the Python version, so the owner could deploy a later version on the same tag (e.g with security fixes). For your own code you would tag with something unique to your code (e.g the Git SHA), as that will uniquely define that iteration of the container.

Nice — but how do I deploy this?

So I’ve built my Docker images for my applications, and I can run them locally: how do I deploy them? Well, this is where Kubernetes comes into the picture. Kubernetes is a system for automating deployment, scaling, and management of containerised applications, with built-in support for load balancing, progressive rollouts, resource management (“bin packing”) and self-healing.

With Kubernetes you define what your ecosystem should look like in config files, and then Kubernetes makes it so. This is declarative rather than imperative: Kubernetes has a fixed view of your deployment, so it’s entirely reproducible. If any one component breaks, Kubernetes can self-heal, because it knows what should be running.

A set of Kubernetes config files act as a nice layer of abstraction over the actual silicon, defining:

  • Load balancers
  • Secrets
  • Service deployments
  • Disk volumes
  • Horizontal scaling rules

The deployment config is the interesting bit: this is where we define the Docker image we want to deploy, the number of instances we need, and any required environment settings. We also specify how much CPU and memory we’ll need: not only does this neatly solve our “out of memory” issue above, it will let Kubernetes automatically pack services onto the cluster to make optimal use of the resources. This can save you a lot of money: we shaved over 60% off our bill in our migration, and that’s before we’ve done any serious optimisation.

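As a sketch of the resource settings just described (the numbers are placeholders, not recommendations), the container section of a deployment spec might include:

```yaml
# Per-container resource requests and limits (illustrative values).
# Requests are what Kubernetes uses to pack services onto nodes;
# limits cap what a container may actually consume.
resources:
  requests:
    cpu: "250m"       # a quarter of a core, reserved at scheduling time
    memory: "256Mi"
  limits:
    cpu: "500m"       # the container is throttled beyond this
    memory: "512Mi"   # the container is killed if it exceeds this
```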
No downtime deploys

Deployment definitions let you specify “liveness” checks that Kubernetes can use to determine whether the service is actually alive and can therefore be added to its load balancer.

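A check of this kind might be sketched as follows (the `/healthz` path and port are assumptions about the app):

```yaml
# An HTTP liveness probe: Kubernetes restarts the container if this
# endpoint stops responding successfully.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8001
  initialDelaySeconds: 10   # give the app time to start up
  periodSeconds: 15         # then check every 15 seconds
# A readiness probe controls membership of the load balancer:
# traffic is only routed to containers that pass it.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8001
```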
It actually goes a bit deeper than that: during a deployment, Kubernetes will create new container instances and let them run for a bit to see if they’re actually working. Only when it’s happy that all is well will it connect the container to the load balancer, and eventually tear down the old container.

This means that it’s very hard to deploy a broken build, so long as the check you’ve defined is a good indication of liveness. If the service is broken, the deployment just won’t take, and the existing system will be untouched. You’ve also always got the option of just rolling back to the previous version: deployment rollback is a built-in feature, and very quick.

A simple Kubernetes config file

Here’s a very simple example of a config file for an app. We’re assuming that the Docker image contains an app running on port 8001:

[Images: a simple Kubernetes Deployment and Service config]
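The original post shows the config as images. A sketch of what such a config might contain (names and the image tag are illustrative) is a Deployment plus a Service exposing the app on port 8001:

```yaml
# A Deployment: run two replicas of the container, tagged with the
# Git SHA of the code it contains.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myrepo/myapp:abc1234   # tag with your Git SHA
          ports:
            - containerPort: 8001
---
# A Service: load-balance traffic across the healthy replicas.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8001
```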

Where next?

Since we’ve shifted the focus from packaging our apps up on demand to pre-building them and pushing them to a repository, it’s a simple step to then automate this. Again there are lots of services that will help you do this e.g Google Cloud Build, CircleCI, Bitbucket Pipelines, Weaveworks, etc, but they all work on the same principle: after a Git push, run some tests. If the tests pass, build the Docker image and push it, then ask Kubernetes to deploy the changes.

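As an illustration of that principle (modelled loosely on Google Cloud Build’s config syntax; the step images, substitutions and deployment name are assumptions, not a definitive pipeline), such a build might be configured as:

```yaml
# cloudbuild.yaml sketch: test, build, push, deploy.
steps:
  # 1. Run the tests; the build stops here if they fail
  - name: "python:3.7"
    entrypoint: "sh"
    args: ["-c", "pip install -r requirements.txt && pytest"]
  # 2. Build the Docker image, tagged with the short Git SHA
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapp:$SHORT_SHA", "."]
  # 3. Push it to the image repository
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapp:$SHORT_SHA"]
  # 4. Ask Kubernetes to roll out the new image
  - name: "gcr.io/cloud-builders/kubectl"
    args: ["set", "image", "deployment/myapp",
           "myapp=gcr.io/$PROJECT_ID/myapp:$SHORT_SHA"]
```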
At KisanHub we actually use a combination of Cloud Build and Pipelines, with Slack integration: if you’re interested head on over to my talk where I discuss it in more detail:

[Embedded video: conference talk]

About the Author | Nick Brook, CTO at KisanHub

An experienced Software Architect and Developer, Nick has spent the past two years at KisanHub bringing his expertise in a variety of languages and platforms to the ever-expanding Engineering team.

Please get in touch if you would like to know more about KisanHub: hello@kisanhub.com

Translated from: https://medium.com/kisanhub/kubernetes-devops-without-the-pain-d3c9950b73e5
