Big Data Component Installation Notes (10): Installing Kibana

Kibana Installation

  1. Extract the archive

tar -zxf kibana-7.4.2-linux-x86_64.tar.gz -C /usr/local/
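
A quick optional check that the archive extracted where expected (the path simply matches the version used in this note):

ls /usr/local/kibana-7.4.2-linux-x86_64/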

  2. Edit the configuration file
vi /usr/local/kibana-7.4.2-linux-x86_64/config/kibana.yml

Settings to change:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: 0.0.0.0

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
server.name: "master"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://master:9200","http://slave1:9200","http://slave2:9200","http://slave3:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
i18n.locale: "zh-CN"
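
After saving the file, an optional sanity check is to print just the settings changed above (the grep pattern below is illustrative, not required):

grep -E '^(server\.port|server\.host|server\.name|elasticsearch\.hosts|kibana\.index|i18n\.locale)' /usr/local/kibana-7.4.2-linux-x86_64/config/kibana.yml
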
  3. Configure environment variables
vi /etc/profile
export KIBANA_HOME=/usr/local/kibana-7.4.2-linux-x86_64
export PATH=$PATH:$KIBANA_HOME/bin
source /etc/profile
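
To verify the variables took effect, you can, for example, run:

echo $KIBANA_HOME
which kibana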
  4. Change the owning user and group
chown -R es:es /usr/local/kibana-7.4.2-linux-x86_64/
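
An optional check that the ownership change was applied:

ls -ld /usr/local/kibana-7.4.2-linux-x86_64/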
  5. Start Kibana
kibana > /tmp/kibana.log &
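
Since ownership was just handed to the es user, Kibana is typically started as that user (for example after su - es). Startup takes a short while; you can follow the log and, once it is up, probe the status API or open http://master:5601 in a browser (host name and port as configured above):

tail -f /tmp/kibana.log
curl -s http://master:5601/api/status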