RDFox High-Availability Setup Using Kubernetes

As of today, users have the option to run RDFox in Docker using official images from Oxford Semantic Technologies. In this article I will describe how to deploy these images to a multi-zone Kubernetes cluster to achieve a high-availability, read-only configuration.

RDFox is a high-performance knowledge graph and semantic reasoning engine. It is an in-memory solution, which allows flexible incremental addition and retraction of data, and incremental reasoning. It is mathematically validated at the University of Oxford. Since v3, RDFox also offers the ability to incrementally save updates to persistent storage for easier restarts.

Previously, customers wishing to run RDFox in Docker had to build their own images from the official release distributions, resulting in additional application development and maintenance effort. This morning we announced some good news on that front: official Docker images are now available from Oxford Semantic Technologies.

Kubernetes is the most popular container orchestration platform, with -as-a-service offerings from all three major cloud vendors and a large ecosystem of supporting tools. Although originally best at orchestrating stateless containers, the platform has gradually added support for workloads which require stable, persistent storage through the StatefulSet resource type. Using this, developers can ensure that each replica within a set has its own stable storage and network identity, better matching the requirements of replicated data stores.

The Goal

In this article, I will walk through how to build and deploy a high-availability, read-only RDFox service to a Kubernetes cluster. The target cluster used to test the setup was built using the Amazon EKS Architecture Quick Start, which provisions one Kubernetes node in each of the three availability zones within the chosen region. To achieve our desired setup, we will define a StatefulSet specifying three RDFox replicas. Kubernetes will automatically distribute these across the region’s three availability zones, provisioning an Elastic Block Store (EBS) volume in the correct zone to act as each replica’s server directory.

Although tested on AWS, the configuration should be readily adaptable to other cloud providers or on-premise Kubernetes clusters. To help with this, I will point out the parts of the configuration which are specific to AWS services.

Note that the described setup is intended as an example only and omits details which would be important in a production setup, such as security controls and resource limit configuration. With that caveat out of the way, let's dive into some YAML!

Defining the Objects

As noted above, Kubernetes’s support for stateful workloads is via the StatefulSet resource type which will be our main resource. Every instance of this type of resource requires its own headless Service object to be responsible for the network identities of the pods in the set. In addition, we will define a second, load-balanced Service for clients that don’t care which instance they’re talking to. Finally, we will define an Ingress resource to expose the load-balanced service to the outside world for a quick test.

In addition to the four main objects, which we will define in YAML, we will manually add secrets to the cluster to hold role credentials and a license key. We will also use a pre-populated Elastic File System (EFS) volume containing the data to be loaded into each replica during the initialisation stage. EFS is chosen for this role because, unlike EBS, its file systems are accessible across availability zones, enabling us to share a single copy of the initialisation data with all three replicas.

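The manifests below give a minimal sketch of how such an EFS file system might be exposed to the cluster as a statically provisioned PersistentVolume with a matching claim. This pair is an assumption for illustration only: it presumes the AWS EFS CSI driver is installed on the cluster, and the file system ID fs-12345678 is a placeholder. The claim name family-init-vol-claim is the one referenced by our StatefulSet below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: family-init-vol
spec:
  capacity:
    storage: 1Gi
  accessModes: [ "ReadOnlyMany" ]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678 # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: family-init-vol-claim
spec:
  accessModes: [ "ReadOnlyMany" ]
  storageClassName: "" # bind to the statically provisioned volume above
  volumeName: family-init-vol
  resources:
    requests:
      storage: 1Gi
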
The Headless Service

To start with, let’s examine the headless Service which we’ll name rdfox-set. It is defined as follows:

apiVersion: v1
kind: Service
metadata:
  name: rdfox-set
  labels:
    app: rdfox-app
spec:
  ports:
  - port: 80
    targetPort: rdfox-endpoint
  clusterIP: None
  selector:
    app: rdfox-app

The purpose of this Service is to define a network domain within which the Pods belonging to our StatefulSet will be assigned stable host names. The ports field specifies that the service should listen on port 80 and that requests should be routed to the port named rdfox-endpoint on the selected Pods. We will define this port later, in our StatefulSet. Setting clusterIP: None is what defines this as a headless service. The selector tells Kubernetes that we want traffic for this service to be routed to Pods labelled with app: rdfox-app. All pretty simple so far, but here comes the big one…

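Once the StatefulSet is up, this means each replica is reachable inside the cluster at a stable DNS name of the form <pod-name>.<headless-service>.<namespace>.svc.cluster.local. For example, assuming the default namespace, another Pod could query the first replica directly on the container port (a sketch for illustration):

curl http://rdfox-stateful-set-0.rdfox-set.default.svc.cluster.local:12110/datastores?Name
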
The Stateful Set

Our StatefulSet object is where the bulk of our configuration lives. It begins as follows:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rdfox-stateful-set
spec:
  selector:
    matchLabels:
      app: rdfox-app
  serviceName: "rdfox-set"
  replicas: 3

The above lines declare the StatefulSet’s type, name, Pod selector, the name of the headless Service we created for it and finally the number of replicas we want. The definition continues as follows:

  template:
    metadata:
      labels:
        app: rdfox-app
    spec:
      containers:
        - name: rdfox
          image: oxfordsemantic/rdfox:3.1.1
          args: ['-license-file', '/license/RDFox.lic', 'daemon']
          ports:
            - name: rdfox-endpoint
              containerPort: 12110
              protocol: TCP
          volumeMounts:
            - name: license
              mountPath: "/license"
              readOnly: true
            - name: server-directory
              mountPath: "/home/rdfox/.RDFox"
      initContainers:
        - name: init-server-directory
          image: oxfordsemantic/rdfox-init:3.1.1
          env:
            - name: RDFOX_ROLE
              valueFrom:
                secretKeyRef:
                  name: first-role-credentials
                  key: rolename
            - name: RDFOX_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: first-role-credentials
                  key: password
            - name: RDFOX_LICENSE_CONTENT
              valueFrom:
                secretKeyRef:
                  name: rdfox-license
                  key: RDFox.lic
          volumeMounts:
            - name: shell-root-directory
              mountPath: "/data"
              readOnly: true
            - name: server-directory
              mountPath: "/home/rdfox/.RDFox"
      volumes:
        - name: license
          secret:
            secretName: rdfox-license
            items:
              - key: RDFox.lic
                path: RDFox.lic
        - name: shell-root-directory
          persistentVolumeClaim:
              claimName: family-init-vol-claim

This section of the definition defines the template for the Pods that the StatefulSet will manage. It begins by ensuring that they all carry the label app: rdfox-app so that they are matched by the selectors defined in the earlier part of the StatefulSet and, ultimately, in both Services. After that, the template's spec field determines what each replica Pod will contain.

The containers field defines the main rdfox container using the official Docker image for RDFox v3.1.1, oxfordsemantic/rdfox:3.1.1. It exposes the default port for the image (12110) under the name rdfox-endpoint, matching the definition in our headless Service resource. It also mounts a volume named server-directory at the image's default server directory location. Since we need each replica to have a different logical volume mounted in this role, the name used here refers not to one of the volumes declared in the Pod template's volumes field, but to a PersistentVolumeClaim declared in the volumeClaimTemplates field in the last section of this resource's definition below.

The initContainers field declares an initialisation step for each Pod that belongs to the StatefulSet. This container, named init-server-directory, must complete successfully before the StatefulSet controller will attempt to start the main rdfox container within each Pod. It specifies oxfordsemantic/rdfox-init:3.1.1, the companion for oxfordsemantic/rdfox:3.1.1, as its image. The companion image is provided to make it easy to prepare the server directory before mounting it into containers using the main image. This includes changing the ownership of the directory to the image's default user and initialising the directory using RDFox. Some data store containers include scripts within their main image to make their initialisation step invisible to users. Although this is slightly more convenient, it means that the containers must be started as root and then retain superuser capabilities throughout their lifetime, even though those capabilities are only used as the container starts up. For RDFox, we recommend running only the companion image as root, with the CAP_CHOWN, CAP_SETUID and CAP_SETGID capabilities, and then running the main image as its default non-root user.

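Kubernetes can express this recommendation through securityContext fields. The fragment below is a hedged sketch of what such additions might look like; it is not part of the manifest above, and the exact settings required may vary with the cluster's policies:

      # Hypothetical hardening additions, not present in the manifest above
      initContainers:
        - name: init-server-directory
          securityContext:
            runAsUser: 0 # the companion image runs as root to chown the directory
            capabilities:
              drop: [ "ALL" ]
              add: [ "CHOWN", "SETUID", "SETGID" ]
      containers:
        - name: rdfox
          securityContext:
            runAsNonRoot: true # the main image keeps its default non-root user
            allowPrivilegeEscalation: false
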
In order to be able to prepare the server directory for the main container, the init-server-directory container mounts the Pod's server directory in exactly the same way as the rdfox container. Another feature of the companion image is to look for a file at container path /data/initialize.rdfox and, if present, pass it to the contained RDFox process, which will then attempt to execute it in the RDFox shell. To take advantage of this, our initialisation container mounts the pre-populated EFS file system mentioned earlier, which contains such a script, at the default shell root container path /data. The initialize.rdfox script in this EFS file system is as follows:

dstore create family par-complex-nn
active family
import data.ttl
import rules.dlog


role create guest
grant privileges read > to guest

This creates a data store called family and populates it with the example data and rules from the Getting Started guide for RDFox, which are also stored inside the mounted volume. It also creates the special guest role and allows it to read all of the server's resources. This will allow us to make calls to the REST service anonymously. All of this is persisted to the server directory which, when mounted to the main rdfox container, then has everything needed for RDFox to load the data store and access control policies in daemon mode.

The final thing to discuss from the above block of YAML is the approach to mounting the license, which is done in different ways for the rdfox and init-server-directory containers. Our official recommendation for mounting the license is to bind-mount it to /opt/RDFox/RDFox.lic so that it will be found by the executable in the same directory. This works well when launching containers using docker run, but Kubernetes does not allow mounting of single files into existing directories, so trying this approach leads to a situation where the image's entrypoint executable is hidden by the mount and the container can't start. To work around this, our definition mounts the license volume (declared in the Pod template's volumes field) into the rdfox container at path /license and then overrides the default CMD for the image to explicitly set the license-file server parameter to /license/RDFox.lic. In future, RDFox will accept the license via an environment variable, RDFOX_LICENSE_CONTENT, avoiding the need to override the default command in most circumstances. The companion image used in the init-server-directory container already accepts this variable, and the env entries in its definition above map the rdfox-license secret into the container this way.

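For comparison, the recommended bind-mount approach works as intended with plain Docker. A sketch, assuming the license file sits in the current directory:

docker run -v "$(pwd)/RDFox.lic:/opt/RDFox/RDFox.lic" oxfordsemantic/rdfox:3.1.1 daemon
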
The last part of our StatefulSet definition is the volumeClaimTemplates field discussed earlier. It looks like this:

  volumeClaimTemplates:
    - metadata:
        name: server-directory
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "gp2"
        resources:
          requests:
            storage: 1Gi

Here we find our first piece of AWS-specific configuration in the use of the gp2 StorageClass. The gp2 resource, which is installed by default onto the clusters built by the EKS Quick Start template, relates to the Elastic Block Store. Using it in the template for our server-directory PersistentVolumeClaim tells Kubernetes to create a new EBS volume in the same availability zone as the node that is running the Pod to fulfil this role. To port the example configuration to another cloud provider, set up the most suitable equivalent StorageClass for that provider on your cluster and use it in place of gp2 in this template. The official Kubernetes documentation for the StorageClass resource type contains details of many alternatives.

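For instance, a roughly equivalent StorageClass on Google Kubernetes Engine might look like the following sketch (the name pd-standard is our own choice for this example):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
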
The complete definition of our StatefulSet resource is visible here.

The Load-Balanced Service and Ingress

Our load-balancing Service definition will be responsible for distributing requests to the replicas. In essence, this is our high-availability service. It is a pretty vanilla Kubernetes Service. As with our headless Service, it routes traffic to the port named rdfox-endpoint on pods labelled app: rdfox-app. Unlike our headless service, though, we set its type to NodePort. The definition is:

apiVersion: v1
kind: Service
metadata:
  name: rdfox-service
spec:
  type: NodePort
  selector:
    app: rdfox-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: rdfox-endpoint

We now have a service that could be used by other containers within the cluster, which is sufficient for many use cases. For the purposes of demonstration, though, we also define the following Ingress resource to allow us to reach the service from the outside world at the imaginary domain rdfox-kubernetes.example.org:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rdfox-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
spec:
  tls:
    - hosts:
      - "rdfox-kubernetes.example.org"
  rules:
  - host: "rdfox-kubernetes.example.org"
    http:
      paths:
      - path: /*
        backend:
          serviceName: "rdfox-service"
          servicePort: 80

This resource is another place where AWS specialisation appears, specifically in the annotations in its metadata. These configure the behaviour of the alb-ingress-controller, a component which makes our desired Ingress definition a reality using an Application Load Balancer on AWS. Deleting these annotations would still leave us with a valid Ingress resource, though other providers may need equivalent custom annotations of their own. For the above declaration to work correctly on AWS, we would need a TLS certificate for the stated domain in AWS Certificate Manager.

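If automatic certificate discovery is not configured on the cluster, the controller also accepts an annotation naming the certificate explicitly. A sketch, with a placeholder ARN:

alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:123456789012:certificate/placeholder
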
Deploying

We now have four files defining our main resources which, for convenience, we gather in a directory called RDFoxKubernetes on a host where kubectl is configured to control our target cluster. Before we push our resource definitions to the cluster, we first need to create the secrets they depend on.

To create the rdfox-license secret, we add a valid, in-date RDFox license key to the file RDFox.lic within our working directory and run:

kubectl create secret generic rdfox-license --from-file=./RDFox.lic

Likewise, to create the credentials for the first role, we add the desired role name to the file rolename and the desired password to the file password, both within our working directory, and then run:

kubectl create secret generic first-role-credentials \
--from-file=./rolename --from-file=./password
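
To confirm that both secrets exist before deploying, we can list them by name:

kubectl get secrets rdfox-license first-role-credentials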

Finally we create our StatefulSet and accompanying resources with:

kubectl apply -f RDFoxKubernetes

The StatefulSet controller on our cluster will now set about bringing our cluster into the desired state we have declared in the manifests. For each replica, this will involve provisioning a fresh EBS volume to fulfil the server-directory PersistentVolumeClaim declared in our StatefulSet’s template, running the initialisation container to populate the new volume and finally launching the main RDFox container. The replicas will be assigned integers from 0 to 2 and the controller will not attempt to create replicas with higher indices until all lower-indexed replicas are up and healthy.

We can follow the progress of this process as follows:

$ kubectl get statefulsets
NAME                 READY   AGE
rdfox-stateful-set   1/3     1m
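
We can also watch the individual Pods being created in order; the controller names them rdfox-stateful-set-0 through rdfox-stateful-set-2:

kubectl get pods -l app=rdfox-app --watch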

When this shows that the rdfox-stateful-set has 3 out of 3 pods ready, we can look up the name assigned to our Ingress with:

kubectl get ingress

The entry under the column headed ADDRESS for the rdfox-ingress resource is the name of a public-facing load balancer created specifically for the ingress. We can set this as the value of a DNS A record for our imaginary rdfox-kubernetes.example.org domain and then, allowing some time for DNS records to update, call our service from any host with internet access. For a simple test, let’s curl the API that lists the server’s data stores to check that our family data store is present as expected:

$ curl https://rdfox-kubernetes.example.org/datastores?Name
"family"

Success! 😁

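To check a single replica rather than the load-balanced endpoint, kubectl port-forward gives a quick path to an individual Pod. For example, for the first replica:

kubectl port-forward pod/rdfox-stateful-set-0 12110:12110
curl http://localhost:12110/datastores?Name
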
Cleaning Up

Once we’re done with our test deployment, we can clean up the resources with:

kubectl delete -f RDFoxKubernetes

This will delete all the resources we explicitly declared, but not the PersistentVolumeClaims that were created from the template in our StatefulSet. According to the official Kubernetes documentation, this choice was made

…to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.

These can be listed with kubectl get pvc and deleted with kubectl delete pvc <pvc-name>. We can also now delete the DNS record we added.

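Claims created from a volumeClaimTemplates entry are named <template-name>-<pod-name>, so for this example the cleanup would be:

kubectl delete pvc server-directory-rdfox-stateful-set-0 \
  server-directory-rdfox-stateful-set-1 \
  server-directory-rdfox-stateful-set-2
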
Conclusion

We’ve seen that RDFox can be deployed into a high-availability, read-only setup using Kubernetes. The newly-published official Docker images from Oxford Semantic Technologies help make this a convenient deployment option and we look forward to hearing from users about their experiences of running RDFox in this way.

Translated from: https://medium.com/oxford-semantic-technologies/rdfox-high-availability-setup-using-kubernetes-fe6ffa16f3c0
