Install and run Kibana with Docker (single-node)

Kibana needs the address of an Elasticsearch instance in order to run. If you have not installed Elasticsearch yet, see my post "Install and run Elasticsearch with Docker (single-node)"; if it is already installed, skip this step.
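
If you only need a throwaway single-node Elasticsearch to follow along, a minimal sketch is shown below; it assumes the elasticsearch:6.5.0 image and the container name elasticsearch6.5.0 that appear later in this post (for the full setup, see the linked post):

# single-node Elasticsearch for testing; publishes the HTTP port 9200 only
docker run --restart=always -itd --name elasticsearch6.5.0 -p 9200:9200 -e "discovery.type=single-node" elasticsearch:6.5.0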

1. Pull the specified version of Kibana:

  • Command:
docker pull kibana:6.5.0
  • Example:
PS C:\WINDOWS\system32> docker pull kibana:6.5.0
6.5.0: Pulling from library/kibana
5040bd298390: Already exists
4ed8502e84f5: Pull complete
90ca2be203ee: Pull complete
4cd4e8631ffa: Pull complete
0d91dad24e26: Pull complete
e632de183972: Pull complete
2d00befc3e6e: Pull complete
252de58a1cc9: Pull complete
fe24e1e374ff: Pull complete
Digest: sha256:490bc9205f3956b4e35787adfa46df88aaca2f38d5e4dfdcedc374ad84ed264a
Status: Downloaded newer image for kibana:6.5.0
PS C:\WINDOWS\system32> 

2. Check the pulled Kibana image:

  • Command:
 docker image ls kibana
  • Example:
PS C:\WINDOWS\system32>  docker image ls kibana
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
kibana             6.5.0               2c2904814b68       1 years ago         327MB
PS C:\WINDOWS\system32>

3. Create the Kibana configuration file:

  • Command:
mkdir -p D:/SoftWare/data/DockerDataInfo/Kibana6.5.0/config/
  • Example:
PS C:\Users\54lxb\Desktop> mkdir -p D:/SoftWare/data/DockerDataInfo/Kibana6.5.0/config/

    Directory: D:\SoftWare\data\DockerDataInfo\Kibana6.5.0
    
    Mode              LastWriteTime         Length Name
    ----              -------------         ------  ----

    d-----            2019/8/6  10:19             config

PS C:\Users\54lxb\Desktop>

Open the D:/SoftWare/data/DockerDataInfo/Kibana6.5.0/config/ directory in File Explorer, create a new file named kibana.yml, open it, copy the content below into it, and save (a PowerShell alternative for creating the file is sketched after the listing);

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "kibana"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://127.0.0.1:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"
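
If you prefer the command line to File Explorer, the configuration file can also be created from PowerShell; a small sketch (same path as above; notepad is just an example editor):

# create an empty kibana.yml, then open it and paste in the settings listed above
New-Item -ItemType File -Path "D:\SoftWare\data\DockerDataInfo\Kibana6.5.0\config\kibana.yml" -Force
notepad "D:\SoftWare\data\DockerDataInfo\Kibana6.5.0\config\kibana.yml"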

4. Run Kibana and point it at the Elasticsearch address:

  • Command:
docker run --restart=always -itd --name kibana6.5.0 -p 5601:5601 -vD:/SoftWare/data/DockerDataInfo/Kibana6.5.0/config/kibana.yml:/etc/kibana/kibana.yml kibana:6.5.0
  • Example:
PS C:\WINDOWS\system32> docker run --restart=always -itd --name kibana6.5.0 -p 5601:5601 -vD:/SoftWare/data/DockerDataInfo/Kibana6.5.0/config/kibana.yml:/etc/kibana/kibana.yml kibana:6.5.0
1e91063598be8105c1c647b18428e965395546836dbbead9a12dbea29ef21864
PS C:\WINDOWS\system32>
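
Once the container is up, you can follow Kibana's startup output (this is also where the connection warnings shown in step 5 appear); assuming the container name used above:

docker logs -f kibana6.5.0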

5. Verify that Kibana started successfully:

  • Check the container status:
PS C:\WINDOWS\system32> docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                              NAMES
1e91063598be        kibana:6.5.0          "/docker-entrypoint.…"   45 seconds ago      Up 42 seconds       0.0.0.0:5601->5601/tcp          kibana6.5.0
1e0f242864f3        elasticsearch:6.5.0   "/docker-entrypoint.…"   37 minutes ago      Up 37 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch6.5.0
PS C:\WINDOWS\system32>

From the output above we can clearly see that both the Elasticsearch and Kibana containers are up and running.
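
Before opening Kibana, you can also quickly confirm that Elasticsearch itself answers on the published host port; a sketch in PowerShell (it should print the cluster/version info as JSON):

Invoke-RestMethod http://localhost:9200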

  • Open http://localhost:5601/ directly and the page shows: Kibana server is not ready yet. Check the Kibana container's log with docker logs -f <container ID/container name>, and you will see the following warnings repeated over and over:
  log   [05:48:20.001] [warning][admin][elasticsearch] Unable to revive connection: http://localhost:9200/
  log   [05:48:20.001] [warning][admin][elasticsearch] No living connections

6. Fixing the Kibana errors "Unable to revive connection" / "kibana elasticsearch plugin is red":

1. Get the IP addresses:

We can get the IP addresses of the running containers:

  • Command:
    docker inspect --format='{{.Name}} - {{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)

  • Example:

PS C:\WINDOWS\system32> docker inspect --format='{{.Name}} - {{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
/kibana6.5.0 - 172.17.0.3
/elasticsearch6.5.0 - 172.17.0.2
PS C:\WINDOWS\system32>

Alternatively, get the IP address of the physical host machine:

  • Command:
    ipconfig

  • Example:

PS C:\Users\54lxb\Desktop> ipconfig

Windows IP Configuration


Ethernet adapter vEthernet (DockerNAT):

   Connection-specific DNS Suffix  . :
   IPv4 Address. . . . . . . . . . . : 10.0.75.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . :

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::b4ec:e57d:268c:47af%16
   IPv4 Address. . . . . . . . . . . : 172.16.0.115
   Subnet Mask . . . . . . . . . . . : 255.255.252.0
   Default Gateway . . . . . . . . . : 172.16.0.1

Ethernet adapter vEthernet (Default Switch):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::c2b:db22:531a:6095%20
   IPv4 Address. . . . . . . . . . . : 192.168.229.113
   Subnet Mask . . . . . . . . . . . : 255.255.255.240
   Default Gateway . . . . . . . . . :
PS C:\Users\54lxb\Desktop>

2. Modify the Kibana configuration file and restart Kibana:
# Before
elasticsearch.url: "http://127.0.0.1:9200"
# After
elasticsearch.url: "http://172.17.0.2:9200"
# or (the host IP also works)
elasticsearch.url: "http://172.16.0.115:9200"
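
To apply the change, edit the kibana.yml on the host (it is bind-mounted into the container) and restart the container; assuming the container name from step 4:

docker restart kibana6.5.0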

After restarting the Kibana container, the result looks like this:
Kibana started successfully

Why do we have to change the Elasticsearch URL that Kibana points at? Because both Elasticsearch and Kibana were started as containers, and since no --net option was passed at startup they run in the default bridge mode. In that mode 127.0.0.1 refers to the Kibana container itself, not to the Elasticsearch container, so the URL must be changed to the Elasticsearch container's address (or the host's address); only then can the two communicate.
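
You can confirm this from inside the Kibana container; a sketch, assuming curl is available in the kibana:6.5.0 image and using the container IPs found above:

# fails: 127.0.0.1 is the Kibana container itself, nothing listens on 9200 there
docker exec kibana6.5.0 curl -s http://127.0.0.1:9200
# succeeds: this is the Elasticsearch container's bridge-network IP
docker exec kibana6.5.0 curl -s http://172.17.0.2:9200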

Of course, this is only one possible solution; there are others, for example: a. let the Elasticsearch and Kibana containers talk to each other directly; b. or change the container network mode relative to the host. A sketch of option (a) follows.
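
A sketch of option (a), assuming a hypothetical network name es-net: on a user-defined bridge network, containers can reach each other by container name, so kibana.yml can point at the Elasticsearch container by name instead of by IP:

# create a user-defined bridge network and attach both running containers to it
docker network create es-net
docker network connect es-net elasticsearch6.5.0
docker network connect es-net kibana6.5.0
# then set elasticsearch.url: "http://elasticsearch6.5.0:9200" in kibana.yml and restart Kibana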
