Hosting Scalable CTF Challenges Using Kubernetes + HaProxy

A step-by-step guide on how to deploy and load test scalable containers on a k8 cluster!

There is only one thing that CTF participants hate more than a boring CTF: a CTF with challenges that keep going down :)


Deploying CTF challenges is different from any normal server deployment or DevOps job, because you're intentionally deploying services that are designed to be broken, and you have to be ready with fallback measures to minimize downtime when that does happen.

That's what we had in mind when we picked Kubernetes to deploy challenges for csictf 2020. We wanted to ensure that:

  1. The challenges are deployed in a scalable manner. It should be trivial for us to scale resources for a challenge up or down in response to dynamic traffic.

  2. The load has to be balanced equally between multiple instances of a challenge.

  3. If a challenge does go down, we should have a strategy to quickly bring it back up, restored to its initial state.

In this article, we'll go over how you can set up a Kubernetes cluster to deploy challenges in a manner that satisfies exactly these goals.

A quick refresher on Kubernetes terminology

(Skip ahead to the next section if you already know about k8 deployments, pods, and services)



A Kubernetes cluster consists of nodes and deployments.


Nodes are the machines running inside your cluster. For example, if you were using a cloud provider, each VM instance you create would be a single node on the cluster.

A deployment is an abstract term that refers to one or more running instances of a container you want to deploy on the cluster. In simple words, if you want to run a container (or multiple instances of a container) on your cluster, you create a deployment telling Kubernetes: “Hey, here’s my container’s image, I want you to pick nodes on the cluster and deploy my container onto these nodes”.


A deployment consists of pods. A pod is an actual running instance of your container. When you create a deployment, Kubernetes goes ahead and creates pods and assigns them to run on nodes on the cluster. The powerful thing about Kubernetes is that you can tell it how many pods you want a deployment to have, and it will take care of ensuring that that many pods are always running on your cluster and are, moreover, equally distributed between nodes. In Kubernetes, this is referred to as ensuring that the cluster always has "minimum availability".

Once you have pods running on the cluster, you need a way to expose these containers to the outside world. A service does just that. There are three kinds of services in k8: NodePorts, LoadBalancers, and ClusterIPs. We will be using just NodePorts in this article, but in brief, a node port tells Kubernetes, "Hey, can you expose a port on all nodes in the cluster and link that port to pods running under a deployment? And also make sure that load is equally distributed between all the pods :)" This can be hard to wrap your head around, but I recommend this great article to understand k8 services. Here's a nice diagram from that article about NodePorts:

Exposing Port 30000 as a node port to pods on a cluster. (Source: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0)

Setting up a K8 cluster on Google Cloud

The first thing you want to do is to provision a cluster on your cloud provider. We'll be going over how to do this on GCP, but this should be possible on any provider in general.

Instructions are also available in the official Google Cloud docs.

You want to start by installing the Google Cloud SDK:


# Add apt sources
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list


# Install deps
sudo apt-get install apt-transport-https ca-certificates gnupg


# Import google cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -


# Update apt package index, install cloud sdk
sudo apt-get update && sudo apt-get install google-cloud-sdk

Make sure you have a Google Cloud project set up (you can create one here), and enable Google Kubernetes Engine for that project.
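
If you prefer doing this from the CLI, here's a minimal sketch (the project ID my-ctf-project is a placeholder; substitute your own):

# Create a project, then enable the Kubernetes Engine API for it
gcloud projects create my-ctf-project
gcloud services enable container.googleapis.com --project my-ctf-project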

Now run gcloud init and authenticate the CLI with your Google Cloud account. Make sure to set the default project as the project you created above, and the zone as your default GCP zone.
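
If you need to change these defaults later, you can set them explicitly (project ID and zone below are placeholders):

gcloud config set project my-ctf-project
gcloud config set compute/zone us-central1-a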

Now, let’s create the cluster!


First, decide how many nodes you want on your cluster, the size of each of these nodes (the machine type), and which zone the nodes are going to run in. If you want help deciding, refer to this article in our series, with LOADS of statistics from our CTF.

For the size of each node, you can run the following command to list all possible machine types (refer to Google Cloud's docs for details about these types):

gcloud compute machine-types list

To list the possible zones, you can run the following command:

gcloud compute zones list

Once you have these planned, run the following command to create the cluster:


gcloud container clusters create <cluster-name> \
    --zone <compute-zone> \
    --machine-type <machine-type you chose> \
    --num-nodes <number of nodes in the cluster> \
    --tags challenges

Note the tags option: this assigns a tag to each VM instance on the node. This is very important, as we'll be creating firewall rules later to expose ports on these nodes, and the tags will help us target just the instances in the cluster.
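
Once the cluster is up, fetch its credentials so that kubectl (used in the next section) talks to this cluster, and sanity-check that the nodes are visible:

gcloud container clusters get-credentials <cluster-name> --zone <compute-zone>

# You should see one entry per node
kubectl get nodes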

Now you should have a GKE cluster set up; let's deploy a sample challenge to the cluster.

Using kubectl to deploy challenges on the cluster

First things first, you have to ensure that all your challenges are containerized. You can refer to this article in our series to set up Dockerfiles for CTF challenges.

We're going to assume you have a Docker image named challenge-image set up locally in the next few steps.
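
If you haven't built the image yet, that's just the usual docker build (assuming a Dockerfile in the challenge's directory):

docker build -t challenge-image .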

First, you need to push the image to a registry. If you're using GCP, you can use the Google Container Registry, or GitHub also provides a free private registry.
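
As a sketch of the GitHub route (the username and token are placeholders; the GCR steps we actually used follow below):

# Log in to GitHub's container registry with a personal access token
echo $GITHUB_TOKEN | docker login ghcr.io -u <username> --password-stdin

docker tag challenge-image ghcr.io/<username>/challenge-image
docker push ghcr.io/<username>/challenge-image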

For GCR, you can push the image as such:

gcloud auth configure-docker

docker tag challenge-image gcr.io/project-id/challenge-image

docker push gcr.io/project-id/challenge-image

Once you have the image pushed to a registry, we need to create a k8 deployment to deploy this image on our cluster.


For this, we need to create a deployment.yml file describing our deployment, and a k8 service to expose that deployment using a NodePort:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: challenge-name # REPLACE challenge-name and challenge-category with your challenge's name and category
  labels:
    category: challenge-category # We assign labels to the deployment to link it to a service later, and to help manage deployments
    challenge: challenge-name
spec:
  replicas: 3 # The number of replicas sets the number of instances/pods of the challenge deployed on the cluster
  selector:
    matchLabels:
      category: challenge-category
      challenge: challenge-name
  template:
    metadata:
      labels:
        category: challenge-category
        challenge: challenge-name
    spec:
      containers:
      - name: challenge-container
        image: gcr.io/project-id/challenge-image:tag # Set this URL to your challenge container's image
        resources: # Resource limits for the container. These are important, in case people manage to max out CPU/RAM on your challenge
          limits:
            cpu: 100m
            memory: 150Mi
          requests:
            cpu: 10m
            memory: 30Mi
        ports: # Port exposed by the container, you can add multiple
        - containerPort: 9999
          name: port-9999
---
apiVersion: v1
kind: Service
metadata: # Set the challenge-name/challenge-category SAME as the deployment, otherwise they won't link to each other
  name: challenge-name
  labels:
    category: challenge-category
    challenge: challenge-name
spec:
  type: NodePort
  selector:
    category: challenge-category
    challenge: challenge-name
  ports:
    - port: 9999 # The port the service exposes
      name: port-9999
      targetPort: 9999 # The port exposed by the container
      nodePort: 30001 # The port that is exposed on each Node on the cluster

Refer to the comments in the file for an explanation of each section, and don't forget to change the challenge-name and challenge-category, the ports exposed, the image URL, the number of replicas, etc. to match your use case.

Once you have the yml file set up, you can deploy it to the cluster with a simple command:

kubectl apply -f deployment.yml

Verify that the deployment and service are running with the command below (note that you can use -l, in general, to filter by any label you created!):

kubectl get deployments,services -l challenge=challenge-name

An example challenge deployment from csictf, running 2 replicas

And that's all there is to it; the challenge is now running on the cluster. Pick any node from your cluster, get its external IP, and try navigating to IP:NodePort (where NodePort is the nodePort value you set in the yml file). You should see your challenge running on that port!

Note that no matter which node’s IP you use, k8 will take care of routing the request to a node that is running the pod!

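A quick way to grab a node's external IP and poke the challenge (port 30001 is the nodePort from the yml above, and nc works since these are plain TCP services):

# The EXTERNAL-IP column lists each node's public address
kubectl get nodes -o wide

# Connect to the challenge through any node
nc <node-external-ip> 30001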

Note: You will need to open firewall ports on your cloud provider if it blocks incoming connections on all ports by default. In the case of GCP, if you followed the instructions in the previous section, we can use gcloud to apply a firewall rule to allow port 30001 (for example) on all nodes with the tag challenges:

gcloud compute firewall-rules create challenge-name \
                            --allow tcp:30001 \
                            --priority 1000 \
                            --target-tags challenges
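
If you'd rather not create one firewall rule per challenge, a single rule can open Kubernetes' entire default NodePort range instead (the rule name here is just an example):

gcloud compute firewall-rules create challenge-ports \
                            --allow tcp:30000-32767 \
                            --priority 1000 \
                            --target-tags challenges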

Deploying more challenges

Just follow the same procedure above for each challenge: create a deployment.yml file and use kubectl apply to deploy the challenge. Make sure to update the labels and name for each deployment/service; otherwise, you might deploy on top of an existing challenge and overwrite it!

List of deployments from our CTF, csictf 2020

If you set labels for challenge categories as we did in the yml file above, you can also filter by category and perform operations on just a subset of the deployments, which is very handy during the CTF! Two examples are shown below.

Example 1: Viewing port numbers for all "pwn" challenges

Example 2: Restart all containers running "web" challenges
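
As a sketch of what those two examples look like on the command line (the label values come from the yml above; kubectl rollout restart needs kubectl 1.15+, and the screenshots may show slightly different commands):

# Example 1: list services (and hence node ports) for all pwn challenges
kubectl get services -l category=pwn

# Example 2: restart every deployment labelled as a web challenge
kubectl rollout restart deployment -l category=web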

Applying updates/changes to a deployment

The beautiful part about the apply command from the previous section is that if you later want to make changes to the same deployment (for example, updating the image's tag to push changes to the challenge, or changing the number of replicas), you just modify the yml file, and as long as the labels still match the deployment, k8 will apply the changes to the same deployment when you run the command again.
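
You can also make quick one-off changes without editing the file; a sketch (the names are from the deployment above):

# Scale a challenge to 5 replicas
kubectl scale deployment challenge-name --replicas=5

# Point the challenge at a newly pushed image tag
kubectl set image deployment/challenge-name challenge-container=gcr.io/project-id/challenge-image:new-tag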

Sometimes, making changes like updating the exposed port might cause a conflict that k8 can't handle, and it will throw an exception. In that case, first delete the deployment with:

kubectl delete -f deployment.yml

And then apply the yml file again:

kubectl apply -f deployment.yml

Note:

You may have noticed that this deployment process gets a bit hard to manage as you have more and more challenges, since there is one yml file per challenge. We built a CLI tool just to automate this process of creating a deployment and a service. Refer to this article on ctfup and CI/CD in our series to learn more about how to use it, or how you can build a similar tool for your use case!

Load Balancing between Nodes and Rate Limiting

You may have noticed that so far we've accessed the deployment through a single node's IP and let Kubernetes route the connection onwards. K8 will still route connections in a round-robin fashion to pods across the cluster, but the real issue is that an attacker could still overwhelm that single node with a lot of network traffic.

There are several ways to fix this issue:

  1. Instead of NodePort, you can use a LoadBalancer k8 service. This means that your cloud provider will handle the load balancing between nodes for you. The main issue with this approach is that creating one load balancer rule per challenge can get costly. (For our 4-day CTF, we estimated $50 would be spent on load balancer rules alone if we went ahead with this.) See the sketch below for what the switch looks like.

  2. You can handle load balancing at the DNS level, by creating multiple A records against the same domain name (round-robin DNS). But a more persistent attacker could still target a single node by obtaining its IP address!

  3. You can roll out your own VM instance running a reverse proxy like Nginx or HaProxy to balance the load between nodes. This is the option we went with for our CTF, as it also lets us set up rate limiting for each challenge :)

This is completely optional; most CTFs can probably get away without it. But if you want to set up such a load balancing solution too, refer to the next section.
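
For completeness, here's a sketch of what switching an existing challenge's service to option 1 would look like (this asks your cloud provider to provision one load balancer for that service, billed accordingly):

kubectl patch service challenge-name -p '{"spec": {"type": "LoadBalancer"}}'

# Once provisioned, the public address appears under EXTERNAL-IP
kubectl get service challenge-name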

Setting up a HaProxy Load Balancer in front of your cluster

Start by provisioning another VM which will act as a reverse proxy to your challenges cluster. You can generally use a really small machine for this (1 vCPU or less, 500MB-1GB RAM), as all this machine does is route requests :)
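
On GCP, provisioning such a machine might look like this (the instance name and machine type are examples; note the separate haproxy tag, used for a firewall rule at the end of this section):

gcloud compute instances create haproxy-lb \
    --machine-type e2-small \
    --zone <compute-zone> \
    --tags haproxy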

Install HaProxy on the machine:


sudo apt update

sudo apt install haproxy

Edit /etc/default/haproxy and append ENABLED=1 to the file to enable HaProxy


nano /etc/default/haproxy
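
Or do it non-interactively:

echo "ENABLED=1" | sudo tee -a /etc/default/haproxy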

Now, let's set up a HaProxy config file at /etc/haproxy/haproxy.cfg:

# The first two global and default sections
# are just the ones present by default in the config file
# We leave these unchanged


global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon


	# Default SSL material locations
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private


	# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets


defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http


# Setup stats admin panel on port 8080 so we can view load statistics during the CTF
listen stats
    bind *:8080
    mode http
    stats enable
    stats uri /
    # DON'T FORGET TO CHANGE THE CREDENTIALS BELOW!!
    stats auth username:password


# Setup a haproxy table to store connection information for each user IP address
# We'll use it in each challenge to limit the no of connections and the connection rate
# for users
backend Abuse
	stick-table type ip size 1m expire 10m store conn_rate(3s),conn_cur


# Set the default mode as TCP, so pwn challenges and netcat challenges work
# Also set connection timeouts
# Most importantly, set the default backend to the cluster. We create this backend
# at the end of this file
defaults
	mode tcp
	default_backend chall-cluster
  	timeout connect 5000
  	timeout client  50000
  	timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http




# The sections below contain configuration for each and every challenge
# For each one, we set up rules to reject connections from IPs in our blacklist file
# and also set up rate limiting rules: a maximum connection rate of 50 every
# 3 seconds, and a maximum of 50 simultaneous connections


# Note that it's possible to create just one frontend section and bind to
# multiple ports, by doing something like
#
# frontend challenges
# 	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
# 	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
# 	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
# 	tcp-request connection track-sc1 src table Abuse
# 	bind *:30000-50000
#
#
# The reason we create multiple frontends is just so that we can monitor them
# individually on the stats admin panel that we created above in this file. If you
# don't need to monitor at an individual challenge level, then just use the above
# frontend rule and omit all the ones below


# Change these to your challenges and ports, obviously


# PWN
frontend pwn-intended-0x1
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30001
frontend pwn-intended-0x2
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30007
frontend pwn-intended-0x3
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30013
frontend global-warming
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30023
frontend smash
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30046


# WEB
frontend body-count
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30202
frontend cascade
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30203
frontend ccc
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30125
frontend file-library
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30222
frontend mr-rami
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30231
frontend oreo
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30243
frontend the-confused-deputy
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30256
frontend the-usual-suspects
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30279
frontend warm-up
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30272
frontend secure-portal
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30281


# MISC
frontend escape-plan
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30419
frontend friends
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30425
frontend prison-break
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30407


# REV
frontend blaise
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30808
frontend vietnam
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30814
frontend aka
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30611
frontend where-am-i
	tcp-request connection reject if { src -f /etc/haproxy/blacklist.lst }
	tcp-request connection reject if { src_conn_rate(Abuse) ge 50 }
	tcp-request connection reject if { src_conn_cur(Abuse) ge 50 }
	tcp-request connection track-sc1 src table Abuse
	bind *:30623




# Lastly, create the chall-cluster backend
# We set up HaProxy to use round robin load balancing
# Add a server statement for each node's IP in your cluster. Note that no port is
# specified on the server lines, so HaProxy forwards each connection to the same
# port the client connected on, which matches the NodePort on the cluster
backend chall-cluster
	mode tcp
	balance roundrobin
        server node1 10.154.0.19
        server node2 10.154.0.22
        server node3 10.154.0.21

I've left comments in the file explaining what each section does; don't forget to modify it according to your needs.
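
Before restarting, it's worth having HaProxy validate the config file; it will point out any syntax errors:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg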

Once the config file is set up, just run:

sudo systemctl restart haproxy

And HaProxy should now be running, load balancing and rate limiting connections to your challenges! (Make sure you have opened the required firewall ports on the machine running HaProxy too.)
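
On GCP, that could be a single rule like the sketch below (the rule name and port range are examples; the range should cover every port you bound frontends to, plus 8080 for the stats panel, and the haproxy tag is the one from the VM we created earlier):

gcloud compute firewall-rules create haproxy-ports \
                            --allow tcp:30001-30814,tcp:8080 \
                            --priority 1000 \
                            --target-tags haproxy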

Bonus: Load testing your cluster

The best way to load test your cluster is to attack it using an army of pods from another Kubernetes cluster :)

This is mostly out of scope for this article, but I recommend reading this great tutorial in Google Cloud's official docs. We used the same process before our CTF, using locust (a load testing framework) running on top of a GKE cluster to raid some challenges with requests, so we could get an idea of how many replicas of each challenge would be "enough" during the CTF.

You can refer to this GitHub issue on our repo, where we posted some results from our load testing.


You reached The End

In this article, we went over how you can set up a k8 cluster to deploy CTF challenges on, and how to set up HaProxy to load balance connections to nodes on the cluster. If you're interested in more aspects of hosting a CTF, like setting up CI/CD to deploy challenges, or statistics/budget planning from a real CTF, do refer to the other articles in our series below!

Translated from: https://medium.com/csictf/using-kubernetes-haproxy-to-host-scalable-ctf-challenges-a4720b6a9bbc
