Getting Started with Kibana
Now that you have Kibana installed, you can step through this tutorial to get quick hands-on experience with key Kibana functionality. By the end of this tutorial, you will have:
- Loaded a sample data set into your Elasticsearch installation
- Defined at least one index pattern
- Used the Discover functionality to explore your data
- Set up some visualizations to graphically represent your data
- Assembled visualizations into a Dashboard
The material in this section assumes you have a working Kibana install connected to a working Elasticsearch install.
Video tutorials are also available:
- High-level Kibana 4 introduction, pie charts
- Data discovery, bar charts, and line charts
- Tile maps
- Embedding Kibana 4 visualizations
Before You Start: Loading Sample Data
The tutorials in this section rely on the following data sets:
- The complete works of William Shakespeare, suitably parsed into fields. Download this data set by clicking here: shakespeare.json.
- A set of fictitious accounts with randomly generated data. Download this data set by clicking here: accounts.zip
- A set of randomly generated log files. Download this data set by clicking here: logs.jsonl.gz
Two of the data sets are compressed. Use the following commands to extract the files:
unzip accounts.zip
gunzip logs.jsonl.gz
The Shakespeare data set is organized in the following schema:
{ "line_id": INT, "play_name": "String", "speech_number": INT, "line_number": "String", "speaker": "String", "text_entry": "String", }
The accounts data set is organized in the following schema:
{ "account_number": INT, "balance": INT, "firstname": "String", "lastname": "String", "age": INT, "gender": "M or F", "address": "String", "employer": "String", "email": "String", "city": "String", "state": "String" }
The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:
{ "memory": INT, "geo.coordinates": "geo_point" "@timestamp": "date" }
Before we load the Shakespeare data set, we need to set up a mapping for the fields. Mapping divides the documents in the index into logical groups and specifies a field’s characteristics, such as the field’s searchability or whether or not it’s tokenized, or broken up into separate words.
Use the following command to set up a mapping for the Shakespeare data set:
curl -XPUT http://localhost:9200/shakespeare -d '
{
  "mappings": {
    "_default_": {
      "properties": {
        "speaker": { "type": "string", "index": "not_analyzed" },
        "play_name": { "type": "string", "index": "not_analyzed" },
        "line_id": { "type": "integer" },
        "speech_number": { "type": "integer" }
      }
    }
  }
}
';
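To double-check that Elasticsearch accepted the mapping, you can retrieve it with the get-mapping API; this verification step is our addition, not part of the original tutorial:

curl -XGET 'http://localhost:9200/shakespeare/_mapping?pretty'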
This mapping specifies the following qualities for the data set:
- The speaker field is a string that isn't analyzed. The string in this field is treated as a single unit, even if there are multiple words in the field (see the example query after this list).
- The same applies to the play_name field.
- The line_id and speech_number fields are integers.
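To make the effect of not_analyzed concrete: a term query against the speaker field has to match the stored value exactly, multiple words and all. The following query is our own illustration, not part of the original tutorial, and assumes speaker names are stored in uppercase as they are in this data set:

curl -XGET 'http://localhost:9200/shakespeare/_search?pretty' -d '
{
    "query": { "term": { "speaker": "HAMLET" } }
}';

Because the field is not analyzed, searching for the lowercase hamlet, or for only one word of a multi-word speaker name, would return no hits.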
The logs data set requires a mapping to label the latitude/longitude pairs in the logs as geographic locations by applying the geo_point type to those fields.
Use the following commands to establish a geo_point mapping for the logs:
curl -XPUT http://localhost:9200/logstash-2015.05.18 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": { "coordinates": { "type": "geo_point" } }
        }
      }
    }
  }
}
';

curl -XPUT http://localhost:9200/logstash-2015.05.19 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": { "coordinates": { "type": "geo_point" } }
        }
      }
    }
  }
}
';

curl -XPUT http://localhost:9200/logstash-2015.05.20 -d '
{
  "mappings": {
    "log": {
      "properties": {
        "geo": {
          "properties": { "coordinates": { "type": "geo_point" } }
        }
      }
    }
  }
}
';
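Since the three commands differ only in the index name, an equivalent shortcut is a small shell loop; this is a convenience sketch of ours, not part of the original tutorial:

for day in 18 19 20; do
  curl -XPUT "http://localhost:9200/logstash-2015.05.$day" -d '
  {
    "mappings": {
      "log": {
        "properties": {
          "geo": {
            "properties": { "coordinates": { "type": "geo_point" } }
          }
        }
      }
    }
  }';
done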
The accounts data set doesn't require any mappings, so at this point we're ready to use the Elasticsearch bulk API to load the data sets with the following commands:
curl -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl
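One caveat if you are on a newer Elasticsearch release: versions 6.0 and later reject requests that lack an explicit content type, so each bulk command would also need a header, for example:

curl -XPOST 'localhost:9200/shakespeare/_bulk?pretty' -H 'Content-Type: application/x-ndjson' --data-binary @shakespeare.json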
These commands may take some time to execute, depending on the computing resources available.
Verify successful loading with the following command:
curl 'localhost:9200/_cat/indices?v'
You should see output similar to the following:
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    418.2kb        418.2kb
yellow open   shakespeare           5   1     111396            0     17.6mb         17.6mb
yellow open   logstash-2015.05.18   5   1       4631            0     15.6mb         15.6mb
yellow open   logstash-2015.05.19   5   1       4624            0     15.7mb         15.7mb
yellow open   logstash-2015.05.20   5   1       4750            0     16.4mb         16.4mb
Note: This material comes from the official Elastic documentation:
https://www.elastic.co/guide/en/kibana/current/getting-started.html