Java EE 7 and WildFly on Kubernetes using Vagrant

This tech tip will show how to run a Java EE 7 application deployed in WildFly and hosted using Kubernetes and Docker. If you'd like to learn more about the basics, this blog has already published quite a bit of content on the topic.

Let's get started!

Start the Kubernetes Cluster

A Kubernetes cluster can be easily started on a Linux machine using the usual scripts. There are Getting Started guides for different platforms, such as Fedora, CoreOS, Amazon Web Services, and others. Running a Kubernetes cluster on Mac OS X requires the Vagrant image, which is also explained in Getting Started with Vagrant. This blog will use the Vagrant box.

  1. By default, the Kubernetes cluster management scripts assume that you are running on Google Compute Engine. Kubernetes can be configured to run with a variety of providers: gce, gke, aws, azure, vagrant, local, vsphere. So let's set our provider to vagrant as:
    export KUBERNETES_PROVIDER=vagrant

    This means that your Kubernetes cluster is running inside Fedora VMs created by Vagrant.

  2. Start the cluster as:
    kubernetes> ./cluster/kube-up.sh 
    Starting cluster using provider: vagrant
    ... calling verify-prereqs
    ... calling kube-up
    Using credentials: vagrant:vagrant
    Bringing machine 'master' up with 'virtualbox' provider...
    Bringing machine 'minion-1' up with 'virtualbox' provider...
    
    . . .
    
    Running: ./cluster/../cluster/vagrant/../../cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth create -f -
    skydns
    ... calling setup-logging
    TODO: setup logging
    Done

    Note that this command is invoked from the kubernetes directory, which has been compiled as explained in Build Kubernetes on Mac OS X.

    By default, the Vagrant setup will create one kubernetes-master and one kubernetes-minion. This involves creating the Fedora VMs, installing dependencies, creating the master and minion, setting up connectivity between them, and a lot of other things. As a result, this step can take a few minutes (~10 minutes on my machine).
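The bring-up steps above can be sketched as a single shell sequence. The kube-up.sh call is left commented out because it needs the compiled Kubernetes tree and VirtualBox; NUM_MINIONS (discussed later in this tip) is optional and shown here only as a tuning assumption:

```shell
# Select the Vagrant provider instead of the default Google Compute Engine.
export KUBERNETES_PROVIDER=vagrant

# Optional: number of minion VMs to create (default is 1).
export NUM_MINIONS=1

echo "provider=$KUBERNETES_PROVIDER minions=$NUM_MINIONS"

# Bring the cluster up (requires the compiled kubernetes tree):
# ./cluster/kube-up.sh
```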

Verify the Kubernetes Cluster

Now that the cluster has started, let's make sure we verify that it has done everything it is supposed to.

  1. Verify that your Vagrant boxes are correctly shown as:
    kubernetes> vagrant status
    Current machine states:
    
    master                    running (virtualbox)
    minion-1                  running (virtualbox)
    
    This environment represents multiple VMs. The VMs are all listed
    above with their current state. For more information about a specific
    VM, run `vagrant status NAME`.

    This can also be verified by checking the status in the VirtualBox console, as shown below:

    Kubernetes VMs in VirtualBox

    boot2docker-vm is the Boot2Docker VM. Then there are the Kubernetes master and minion VMs. Two other VMs are shown here as well, but they are not relevant to the sample.

  2. Log in to the master as:
    kubernetes> vagrant ssh master
    Last login: Fri Jan 30 21:35:34 2015 from 10.0.2.2
    [vagrant@kubernetes-master ~]$

    Check that the different Kubernetes components have started up correctly, starting with the Kubernetes API server:

    [vagrant@kubernetes-master ~]$ sudo systemctl status kube-apiserver
    kube-apiserver.service - Kubernetes API Server
       Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
       Active: active (running) since Fri 2015-01-30 21:34:25 UTC; 7min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 3506 (kube-apiserver)
       CGroup: /system.slice/kube-apiserver.service
               └─3506 /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd_servers=http://10.245.1.2:4001 --cloud_provider=vagrant --admission_c...
    
    . . .

    Then the Kube Controller Manager:

    [vagrant@kubernetes-master ~]$ sudo systemctl status kube-controller-manager
    kube-controller-manager.service - Kubernetes Controller Manager
       Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
       Active: active (running) since Fri 2015-01-30 21:34:27 UTC; 8min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 3566 (kube-controller)
       CGroup: /system.slice/kube-controller-manager.service
               └─3566 /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --minion_regexp=.* --cloud_provider=vagrant --v=2
    
    . . .

    Similarly, you can verify etcd and nginx as well.

    Docker and Kubelet run in the minions and can be verified by logging in to the minion and using systemctl scripts:

    kubernetes> vagrant ssh minion-1
    Last login: Fri Jan 30 21:37:05 2015 from 10.0.2.2
    [vagrant@kubernetes-minion-1 ~]$ sudo systemctl status docker
    docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
       Active: active (running) since Fri 2015-01-30 21:39:05 UTC; 8min ago
         Docs: http://docs.docker.com
     Main PID: 13056 (docker)
       CGroup: /system.slice/docker.service
               ├─13056 /usr/bin/docker -d -b=kbr0 --iptables=false --selinux-enabled
               └─13192 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 4194 -container-ip 10.246.0.3 -container-port 8080
    
    . . .
    [vagrant@kubernetes-minion-1 ~]$ sudo systemctl status kubelet
    kubelet.service - Kubernetes Kubelet Server
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
       Active: active (running) since Fri 2015-01-30 21:36:57 UTC; 10min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 3118 (kubelet)
       CGroup: /system.slice/kubelet.service
               └─3118 /usr/local/bin/kubelet --etcd_servers=http://10.245.1.2:4001 --api_servers=https://10.245.1.2:6443 --auth_path=/var/lib/kubele...
    
    . . .
  3. Check the minions as:
    kubernetes> ./cluster/kubectl.sh get minions
    Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get minions
    NAME                LABELS              STATUS
    10.245.1.3          <none>              Ready

    Only one minion is created by default. This can be manipulated by setting the environment variable NUM_MINIONS to an integer before invoking the kube-up.sh script.

    Finally, check the pods as:

    kubernetes> ./cluster/kubectl.sh get pods
    Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get pods
    POD                                    IP                  CONTAINER(S)        IMAGE(S)                           HOST                    LABELS              STATUS
    22d4a478-a8c8-11e4-a61e-0800279696e1   10.246.0.2          etcd                quay.io/coreos/etcd:latest         10.245.1.3/10.245.1.3   k8s-app=skydns      Running
                                                               kube2sky            kubernetes/kube2sky:1.0                                                        
                                                               skydns              kubernetes/skydns:2014-12-23-001

    This shows that one pod is created by default, with three containers running in it:

    • skydns: SkyDNS is a distributed service for announcement and discovery of services built on top of etcd. It utilizes DNS queries to discover available services.
    • etcd: A distributed, consistent key-value store for shared configuration and service discovery, with a focus on being simple, secure, fast, and reliable. This is used for storing Kubernetes' state.
    • kube2sky: A bridge between Kubernetes and SkyDNS. It watches the Kubernetes API for changes in Services and then publishes those changes to SkyDNS through etcd.

    No pods have been created for our application so far.
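The individual systemctl checks above can be collected into one small loop. This is a hypothetical helper, not part of the Kubernetes scripts, and since no cluster is available here it only prints what it would check; inside the VMs you would replace the echo with `sudo systemctl status "$svc"`:

```shell
# Services expected on the master after kube-up.sh; docker and kubelet
# are the minion-side equivalents.
MASTER_SERVICES="kube-apiserver kube-controller-manager etcd nginx"
MINION_SERVICES="docker kubelet"

for svc in $MASTER_SERVICES; do
  echo "master: would run 'sudo systemctl status $svc'"
done
for svc in $MINION_SERVICES; do
  echo "minion: would run 'sudo systemctl status $svc'"
done
```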

Start the WildFly and Java EE 7 Application Pod

A pod is created by using the kubectl script and providing the details in a JSON configuration file. The source code for our configuration file is available at github.com/arun-gupta/kubernetes-java-sample and looks like this:

{
  "id": "wildfly",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "wildfly",
      "containers": [{
        "name": "wildfly",
        "image": "arungupta/javaee7-hol",
        "cpu": 100,
        "ports": [{
          "containerPort": 8080,
          "hostPort": 8080
        },
        {
          "containerPort": 9090,
          "hostPort": 9090
        }]
      }]
    }
  },
  "labels": {
    "name": "wildfly"
  }
}
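Before handing the file to kubectl, a quick shell-level sanity check can catch obvious mistakes. This is a minimal sketch, not a schema validation; it recreates the pod definition shown above as a local javaee7-hol.json and greps for the fields this tip relies on:

```shell
# Recreate the pod definition shown above.
cat > javaee7-hol.json <<'EOF'
{
  "id": "wildfly",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "wildfly",
      "containers": [{
        "name": "wildfly",
        "image": "arungupta/javaee7-hol",
        "cpu": 100,
        "ports": [{
          "containerPort": 8080,
          "hostPort": 8080
        },
        {
          "containerPort": 9090,
          "hostPort": 9090
        }]
      }]
    }
  },
  "labels": {
    "name": "wildfly"
  }
}
EOF

# The pod must name the image and expose ports 8080 and 9090.
grep -q '"image": "arungupta/javaee7-hol"' javaee7-hol.json && echo "image ok"
grep -q '"containerPort": 8080' javaee7-hol.json && echo "port 8080 ok"
grep -q '"containerPort": 9090' javaee7-hol.json && echo "port 9090 ok"
```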

The exact payload and attributes of this configuration file are explained at kubernetes.io/third_party/swagger-ui/#!/v1beta1/createPod_0. Complete docs of all the possible APIs are available at kubernetes.io/third_party/swagger-ui/. The key attributes of this file in particular are:

  • A pod is created. The API allows other types to be created, such as "service", "replicationController", etc.
  • The version of the API is "v1beta1".
  • The Docker image used to run this container is arungupta/javaee7-hol.
  • Ports 8080 and 9090 are exposed because they were originally exposed in the base image's Dockerfile. This needs further debugging on how the ports list can be cleaned up.
  • The pod has the label "wildfly". This is not used much in this case, but will be more meaningful when services are created in a subsequent blog.

As mentioned earlier, this tech tip will spin up a single pod with one container. Our container will use a pre-built image (arungupta/javaee7-hol) that deploys a typical 3-tier Java EE 7 application to WildFly.

Start the WildFly pod as:

kubernetes> ./cluster/kubectl.sh create -f ../kubernetes-java-sample/javaee7-hol.json

Check the status of the created pod as:

kubernetes> ./cluster/kubectl.sh get pods
Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get pods
POD                                    IP                  CONTAINER(S)        IMAGE(S)                           HOST                    LABELS              STATUS
4c283aa1-ab47-11e4-b139-0800279696e1   10.246.0.2          etcd                quay.io/coreos/etcd:latest         10.245.1.3/10.245.1.3   k8s-app=skydns      Running
                                                           kube2sky            kubernetes/kube2sky:1.0                                                        
                                                           skydns              kubernetes/skydns:2014-12-23-001                                               
wildfly                                10.246.0.5          wildfly             arungupta/javaee7-hol              10.245.1.3/10.245.1.3   name=wildfly        Running

The WildFly pod is now created and shown in the list. The HOST column shows the IP address on which the application is accessible.

The following picture explains how all the components fit with each other:

Java EE 7/WildFly in Kubernetes on Mac OS X

Since only one minion is created by default, this pod will be created on that minion. A subsequent blog will show how to create multiple minions. Kubernetes, of course, picks the minion on which the pod is created.

Running the pod ensures that the Java EE 7 application is deployed to WildFly.

Access the Java EE 7 Application

kubectl.sh get pods输出中,“ HOST列显示可从外部访问应用程序的IP地址。 在我们的例子中,IP地址是10.245.1.3 。 因此,在浏览器中访问该应用程序以查看输出为:

Java EE 7 application on Kubernetes

This confirms that your Java EE 7 application is now accessible.
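The browser check can also be scripted from the host's command line. The IP below comes from the HOST column, and the /movieplex7 context root is the one registered in the WildFly deployment log later in this tip; the curl call itself is left commented out because it needs the cluster to be running:

```shell
# Application endpoint (assumed from the HOST column and the
# registered web context /movieplex7).
APP_URL="http://10.245.1.3:8080/movieplex7/"
echo "$APP_URL"

# With the cluster up, verify an HTTP 200 response:
# curl -s -o /dev/null -w "%{http_code}\n" "$APP_URL"
```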

Kubernetes Debugging Tips

Once the Kubernetes cluster is created, you'll need to debug it and see what's happening under the hood.

First of all, let's log in to the minion:

kubernetes> vagrant ssh minion-1
Last login: Tue Feb  3 01:52:22 2015 from 10.0.2.2
List of Docker Containers on the Minion

Let's look at all the Docker containers that are running on minion-1:

[vagrant@kubernetes-minion-1 ~]$ docker ps
CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                                            NAMES
3f7e174b82b1        arungupta/javaee7-hol:latest           "/opt/jboss/wildfly/   16 minutes ago      Up 16 minutes                                                        k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb                                               
1c464e71fb69        kubernetes/pause:go                    "/pause"               20 minutes ago      Up 20 minutes       0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->9090/tcp   k8s_net.7946daa4_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_e74d3d1d                                                  
7bdd763df691        kubernetes/skydns:2014-12-23-001       "/skydns -machines=h   21 minutes ago      Up 21 minutes                                                        k8s_skydns.394cd23c_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_3352f6bd                  
17a140aaabbe        google/cadvisor:0.7.1                  "/usr/bin/cadvisor"    22 minutes ago      Up 22 minutes                                                        k8s_cadvisor.68f5108e_cadvisor-agent.file-6bb810db-kubernetes-minion-1.file_65235067df34faf012fd8bb088de6b73_86e59309               
a5f8cf6463e9        kubernetes/kube2sky:1.0                "/kube2sky -domain=k   22 minutes ago      Up 22 minutes                                                        k8s_kube2sky.1cbba018_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_126d4d7a                
28e6d2e67a92        kubernetes/fluentd-elasticsearch:1.0   "/bin/sh -c '/usr/sb   23 minutes ago      Up 23 minutes                                                        k8s_fluentd-es.6361e00b_fluentd-to-elasticsearch.file-8cd71177-kubernetes-minion-1.file_a190cc221f7c0766163ed2a4ad6e32aa_a9a369d3   
5623edf7decc        quay.io/coreos/etcd:latest             "/etcd /etcd -bind-a   25 minutes ago      Up 25 minutes                                                        k8s_etcd.372da5db_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_8b658811                    
3575b562f23e        kubernetes/pause:go                    "/pause"               25 minutes ago      Up 25 minutes       0.0.0.0:4194->8080/tcp                           k8s_net.beddb979_cadvisor-agent.file-6bb810db-kubernetes-minion-1.file_65235067df34faf012fd8bb088de6b73_8376ce8e                    
094d76c83068        kubernetes/pause:go                    "/pause"               25 minutes ago      Up 25 minutes                                                        k8s_net.3e0f95f3_fluentd-to-elasticsearch.file-8cd71177-kubernetes-minion-1.file_a190cc221f7c0766163ed2a4ad6e32aa_6931ca22          
f8b9cd5af169        kubernetes/pause:go                    "/pause"               25 minutes ago      Up 25 minutes                                                        k8s_net.3d64b7f6_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_b0ebce5a

The first container is specific to our application; everything else is started by Kubernetes.

Details About Each Docker Container

More details about each container can be found by using its container ID as:

docker inspect <CONTAINER_ID>

In our case, the output is shown as:

[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1
[{
    "AppArmorProfile": "",
    "Args": [
        "-c",
        "standalone-full.xml",
        "-b",
        "0.0.0.0"
    ],
    "Config": {
        "AttachStderr": false,
        "AttachStdin": false,
        "AttachStdout": false,
        "Cmd": [
            "/opt/jboss/wildfly/bin/standalone.sh",
            "-c",
            "standalone-full.xml",
            "-b",
            "0.0.0.0"
        ],
        "CpuShares": 102,
        "Cpuset": "",
        "Domainname": "",
        "Entrypoint": null,
        "Env": [
            "KUBERNETES_PORT_443_TCP_PROTO=tcp",
            "KUBERNETES_RO_PORT_80_TCP=tcp://10.247.82.143:80",
            "SKYDNS_PORT_53_UDP=udp://10.247.0.10:53",
            "KUBERNETES_PORT_443_TCP=tcp://10.247.92.82:443",
            "KUBERNETES_PORT_443_TCP_PORT=443",
            "KUBERNETES_PORT_443_TCP_ADDR=10.247.92.82",
            "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
            "SKYDNS_PORT_53_UDP_PROTO=udp",
            "KUBERNETES_RO_PORT_80_TCP_ADDR=10.247.82.143",
            "SKYDNS_SERVICE_HOST=10.247.0.10",
            "SKYDNS_PORT_53_UDP_PORT=53",
            "SKYDNS_PORT_53_UDP_ADDR=10.247.0.10",
            "KUBERNETES_SERVICE_HOST=10.247.92.82",
            "KUBERNETES_RO_SERVICE_HOST=10.247.82.143",
            "KUBERNETES_RO_PORT_80_TCP_PORT=80",
            "SKYDNS_SERVICE_PORT=53",
            "SKYDNS_PORT=udp://10.247.0.10:53",
            "KUBERNETES_SERVICE_PORT=443",
            "KUBERNETES_PORT=tcp://10.247.92.82:443",
            "KUBERNETES_RO_SERVICE_PORT=80",
            "KUBERNETES_RO_PORT=tcp://10.247.82.143:80",
            "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "JAVA_HOME=/usr/lib/jvm/java",
            "WILDFLY_VERSION=8.2.0.Final",
            "JBOSS_HOME=/opt/jboss/wildfly"
        ],
        "ExposedPorts": {
            "8080/tcp": {},
            "9090/tcp": {},
            "9990/tcp": {}
        },
        "Hostname": "wildfly",
        "Image": "arungupta/javaee7-hol",
        "MacAddress": "",
        "Memory": 0,
        "MemorySwap": 0,
        "NetworkDisabled": false,
        "OnBuild": null,
        "OpenStdin": false,
        "PortSpecs": null,
        "StdinOnce": false,
        "Tty": false,
        "User": "jboss",
        "Volumes": null,
        "WorkingDir": "/opt/jboss"
    },
    "Created": "2015-02-03T02:03:54.882111127Z",
    "Driver": "devicemapper",
    "ExecDriver": "native-0.2",
    "HostConfig": {
        "Binds": null,
        "CapAdd": null,
        "CapDrop": null,
        "ContainerIDFile": "",
        "Devices": null,
        "Dns": [
            "10.247.0.10",
            "10.0.2.3"
        ],
        "DnsSearch": [
            "default.kubernetes.local",
            "kubernetes.local",
            "c.hospitality.swisscom.com"
        ],
        "ExtraHosts": null,
        "IpcMode": "",
        "Links": null,
        "LxcConf": null,
        "NetworkMode": "container:1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f",
        "PortBindings": {
            "8080/tcp": [
                {
                    "HostIp": "",
                    "HostPort": "8080"
                }
            ],
            "9090/tcp": [
                {
                    "HostIp": "",
                    "HostPort": "9090"
                }
            ]
        },
        "Privileged": false,
        "PublishAllPorts": false,
        "RestartPolicy": {
            "MaximumRetryCount": 0,
            "Name": ""
        },
        "SecurityOpt": null,
        "VolumesFrom": null
    },
    "HostnamePath": "",
    "HostsPath": "/var/lib/docker/containers/1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f/hosts",
    "Id": "3f7e174b82b1520abdc7f39f34ad4e4a9cb4d312466143b54935c43d4c258e3f",
    "Image": "a068decaf8928737340f8f08fbddf97d9b4f7838d154e88ed77fbcf9898a83f2",
    "MountLabel": "",
    "Name": "/k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb",
    "NetworkSettings": {
        "Bridge": "",
        "Gateway": "",
        "IPAddress": "",
        "IPPrefixLen": 0,
        "MacAddress": "",
        "PortMapping": null,
        "Ports": null
    },
    "Path": "/opt/jboss/wildfly/bin/standalone.sh",
    "ProcessLabel": "",
    "ResolvConfPath": "/var/lib/docker/containers/1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f/resolv.conf",
    "State": {
        "Error": "",
        "ExitCode": 0,
        "FinishedAt": "0001-01-01T00:00:00Z",
        "OOMKilled": false,
        "Paused": false,
        "Pid": 17920,
        "Restarting": false,
        "Running": true,
        "StartedAt": "2015-02-03T02:03:55.471392394Z"
    },
    "Volumes": {},
    "VolumesRW": {}
}
]
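The Env section of this output shows how Kubernetes publishes service endpoints into each container as environment variables. A script running inside the container can reconstruct an endpoint from them; the values below are copied from the inspect output above rather than read from a live container:

```shell
# Values as injected by Kubernetes (copied from the Env block above).
KUBERNETES_SERVICE_HOST=10.247.92.82
KUBERNETES_SERVICE_PORT=443

# Reassemble the API server endpoint the same way a client inside the
# container would.
API_SERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"
echo "$API_SERVER"   # prints https://10.247.92.82:443
```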
Logs from the Docker Container

Logs from the container can be seen using the command:

docker logs <CONTAINER_ID>

In our case, the output is shown as:

[vagrant@kubernetes-minion-1 ~]$ docker logs 3f7e174b82b1
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/jboss/wildfly

  JAVA: /usr/lib/jvm/java/bin/java

  JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true

=========================================================================

. . .

02:04:12,078 INFO  [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 1 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU'
02:04:12,128 INFO  [org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool -- 57) HHH000204: Processing PersistenceUnitInfo [
	name: movieplex7PU
	...]
02:04:12,154 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221007: Server is now live
02:04:12,155 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221001: HornetQ Server version 2.4.5.FINAL (Wild Hornet, 124) [f13dedbd-ab48-11e4-a924-615afe337134] 
02:04:12,175 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221003: trying to deploy queue jms.queue.ExpiryQueue
02:04:12,735 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 56) JBAS011601: Bound messaging object to jndi name java:/jms/queue/ExpiryQueue
02:04:12,736 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221003: trying to deploy queue jms.queue.DLQ
02:04:12,749 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 60) JBAS011601: Bound messaging object to jndi name java:/jms/queue/DLQ
02:04:12,792 INFO  [org.hibernate.Version] (ServerService Thread Pool -- 57) HHH000412: Hibernate Core {4.3.7.Final}
02:04:12,795 INFO  [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000206: hibernate.properties not found
02:04:12,801 INFO  [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000021: Bytecode provider name : javassist
02:04:12,820 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-1) JBAS010406: Registered connection factory java:/JmsXA
02:04:12,997 INFO  [org.hornetq.jms.server] (ServerService Thread Pool -- 59) HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to "wildfly". If this new address is incorrect please manually configure the connector to use the proper one.
02:04:13,021 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 59) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory
02:04:13,025 INFO  [org.jboss.as.messaging] (ServerService Thread Pool -- 58) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory
02:04:13,072 INFO  [org.hornetq.ra] (MSC service thread 1-1) HornetQ resource adaptor started
02:04:13,073 INFO  [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-1) IJ020002: Deployed: file://RaActivatorhornetq-ra
02:04:13,078 INFO  [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:jboss/DefaultJMSConnectionFactory
02:04:13,076 INFO  [org.jboss.as.connector.deployment] (MSC service thread 1-8) JBAS010401: Bound JCA ConnectionFactory 
02:04:13,487 INFO  [org.jboss.weld.deployer] (MSC service thread 1-2) JBAS016002: Processing weld deployment movieplex7-1.0-SNAPSHOT.war
02:04:13,694 INFO  [org.hibernate.validator.internal.util.Version] (MSC service thread 1-2) HV000001: Hibernate Validator 5.1.3.Final
02:04:13,838 INFO  [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named ShowTimingFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:

	java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST
	java:module/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST
	java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST
	java:module/ShowTimingFacadeREST

02:04:13,838 INFO  [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named TheaterFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:

	java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST
	java:module/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST
	java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST
	java:module/TheaterFacadeREST

02:04:13,839 INFO  [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named MovieFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:

	java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST
	java:module/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST
	java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST
	java:module/MovieFacadeREST

02:04:13,840 INFO  [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named SalesFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:

	java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST
	java:module/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST
	java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST
	java:module/SalesFacadeREST

02:04:13,840 INFO  [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named TimeslotFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:

	java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST
	java:module/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST
	java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST
	java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST
	java:module/TimeslotFacadeREST

02:04:14,802 INFO  [org.jboss.as.messaging] (MSC service thread 1-1) JBAS011601: Bound messaging object to jndi name java:global/jms/pointsQueue
02:04:14,931 INFO  [org.jboss.weld.deployer] (MSC service thread 1-2) JBAS016005: Starting Services for CDI deployment: movieplex7-1.0-SNAPSHOT.war
02:04:15,018 INFO  [org.jboss.weld.Version] (MSC service thread 1-2) WELD-000900: 2.2.6 (Final)
02:04:15,109 INFO  [org.hornetq.core.server] (ServerService Thread Pool -- 57) HQ221003: trying to deploy queue jms.queue.movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_java:global/jms/pointsQueue
02:04:15,110 INFO  [org.jboss.weld.deployer] (MSC service thread 1-6) JBAS016008: Starting weld service for deployment movieplex7-1.0-SNAPSHOT.war
02:04:15,787 INFO  [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 2 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU'
02:04:16,189 INFO  [org.hibernate.annotations.common.Version] (ServerService Thread Pool -- 57) HCANN000001: Hibernate Commons Annotations {4.0.4.Final}
02:04:17,174 INFO  [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
02:04:17,191 WARN  [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work
02:04:17,954 INFO  [org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory] (ServerService Thread Pool -- 57) HHH000397: Using ASTQueryTranslatorFactory
02:04:19,832 INFO  [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
02:04:19,833 WARN  [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work
02:04:19,854 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SALES]
02:04:19,855 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE POINTS]
02:04:19,855 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SHOW_TIMING]
02:04:19,855 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE MOVIE]
02:04:19,856 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE TIMESLOT]
02:04:19,857 WARN  [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE THEATER]
02:04:23,942 INFO  [io.undertow.websockets.jsr] (MSC service thread 1-5) UT026003: Adding annotated server endpoint class org.javaee7.movieplex7.chat.ChatServer for path /websocket
02:04:24,975 INFO  [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-5) Initializing Mojarra 2.2.8-jbossorg-1 20140822-1131 for context '/movieplex7'
02:04:26,377 INFO  [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-5) Monitoring file:/opt/jboss/wildfly/standalone/tmp/vfs/temp/temp1267e5586f39ea50/movieplex7-1.0-SNAPSHOT.war-ea3c92cddc1c81c/WEB-INF/faces-config.xml for modifications
02:04:30,216 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Deploying javax.ws.rs.core.Application: class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,247 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.TheaterFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,248 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.ShowTimingFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,248 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.MovieFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,248 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding provider class org.javaee7.movieplex7.json.MovieWriter from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,249 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding provider class org.javaee7.movieplex7.json.MovieReader from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,267 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.SalesFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:30,267 INFO  [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.TimeslotFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig
02:04:31,544 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-5) JBAS017534: Registered web context: /movieplex7
02:04:32,187 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 31) JBAS018559: Deployed "movieplex7-1.0-SNAPSHOT.war" (runtime-name : "movieplex7-1.0-SNAPSHOT.war")
02:04:34,800 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
02:04:34,859 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
02:04:34,859 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.2.0.Final "Tweek" started in 38558ms - Started 400 of 452 services (104 services are lazy, passive or on-demand)

This shows the WildFly startup log, including the application deployment.

Log in to the Docker Container

Log in to the container where WildFly is running. There are two ways to do this.

The first is to use the container's name and exec a Bash shell. For that, get the name of the container as:

docker inspect <CONTAINER_ID> | grep Name

In our case, the output is:

[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1 | grep Name
            "Name": ""
    "Name": "/k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb",

Log in to the container as:

[vagrant@kubernetes-minion-1 ~]$ docker exec -it k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb bash
[root@wildfly /]# pwd
/

Another, more classical, way is to get the process ID of the container as:

docker inspect <CONTAINER_ID> | grep Pid

In our case, the output is:

[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1 | grep Pid
        "Pid": 17920,

Log in to the container as:

[vagrant@kubernetes-minion-1 ~]$ sudo nsenter -m -u -n -i -p -t 17920 /bin/bash
[root@wildfly /]# pwd
/
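Both lookups above, the container name and the PID, can also be pulled out directly with docker inspect's --format flag instead of grepping the full JSON. The commands are only echoed here because no Docker daemon is assumed to be available:

```shell
CONTAINER_ID=3f7e174b82b1   # the WildFly container from 'docker ps'

# On the minion you would run these to get the same values as the greps:
echo "docker inspect --format '{{.Name}}' $CONTAINER_ID"
echo "docker inspect --format '{{.State.Pid}}' $CONTAINER_ID"
```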

The complete WildFly distribution can now be found in:

[root@wildfly /]# cd /opt/jboss/wildfly
[root@wildfly wildfly]# ls -la
total 424
drwxr-xr-x 10 jboss jboss   4096 Dec  5 22:22 .
drwx------  4 jboss jboss   4096 Dec  5 22:22 ..
drwxr-xr-x  3 jboss jboss   4096 Nov 20 22:43 appclient
drwxr-xr-x  5 jboss jboss   4096 Nov 20 22:43 bin
-rw-r--r--  1 jboss jboss   2451 Nov 20 22:43 copyright.txt
drwxr-xr-x  4 jboss jboss   4096 Nov 20 22:43 docs
drwxr-xr-x  5 jboss jboss   4096 Nov 20 22:43 domain
drwx------  2 jboss jboss   4096 Nov 20 22:43 .installation
-rw-r--r--  1 jboss jboss 354682 Nov 20 22:43 jboss-modules.jar
-rw-r--r--  1 jboss jboss  26530 Nov 20 22:43 LICENSE.txt
drwxr-xr-x  3 jboss jboss   4096 Nov 20 22:43 modules
-rw-r--r--  1 jboss jboss   2356 Nov 20 22:43 README.txt
drwxr-xr-x  8 jboss jboss   4096 Feb  3 02:03 standalone
drwxr-xr-x  2 jboss jboss   4096 Nov 20 22:43 welcome-content

Clean Up the Cluster

The entire Kubernetes cluster can be cleaned up either using the VirtualBox console or the command line, as:

kubernetes> vagrant halt
==> minion-1: Attempting graceful shutdown of VM...
==> minion-1: Forcing shutdown of VM...
==> master: Attempting graceful shutdown of VM...
==> master: Forcing shutdown of VM...
kubernetes> vagrant destroy
    minion-1: Are you sure you want to destroy the 'minion-1' VM? [y/N] y
==> minion-1: Destroying VM and associated drives...
==> minion-1: Running cleanup tasks for 'shell' provisioner...
    master: Are you sure you want to destroy the 'master' VM? [y/N] y
==> master: Destroying VM and associated drives...
==> master: Running cleanup tasks for 'shell' provisioner...

So we learned how to run a Java EE 7 application deployed in WildFly and hosted using Kubernetes and Docker.

Enjoy!

Translated from: https://www.javacodegeeks.com/2015/02/java-ee-7-wildfly-kubernetes-using-vagrant.html
