Deploying Elasticsearch, Logstash, and Kibana on CentOS 7 (single node)

1. Download elasticsearch-6.4.0.tar.gz from:

Download Elasticsearch | Elastic

2. Raise system resource limits

vi /etc/security/limits.conf

# append the following lines

* soft nofile 65536

* hard nofile 131072

* soft nproc 65536

* hard nproc 131072

3. Set per-user resource limits

vi /etc/security/limits.d/20-nproc.conf

# set the limit for the elk user

* soft    nproc     65536

vi /etc/sysctl.conf

vm.max_map_count=655360

Apply the change: sysctl -p
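After `sysctl -p`, and after logging back in as the target user, the values can be read back to confirm they took effect (the expected numbers in the comments assume the settings above):

```shell
# Soft and hard open-file limits for the current session
ulimit -Sn    # should report 65536 after re-login
ulimit -Hn    # should report 131072 after re-login
# mmap count limit as seen by the running kernel
sysctl vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count 2>/dev/null || true
```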

4. Upload the tarball and extract it:

tar -zxvf elasticsearch-6.4.0.tar.gz -C /usr/local

cd /usr/local/elasticsearch-6.4.0

Start it in the foreground: ./bin/elasticsearch

Running it as root fails ("can not run elasticsearch as root").

A historical workaround: ./bin/elasticsearch -Des.insecure.allow.root=true

or edit bin/elasticsearch and add ES_JAVA_OPTS="-Des.insecure.allow.root=true"

On 6.x startup still fails this way, and running as root is unsafe anyway, so create a dedicated user:

groupadd elk        # create the elk group

useradd elk -g elk -p 123456 # create user elk in the elk group (note: -p expects an already-hashed password, so run passwd elk afterwards to set it properly)

chown -R elk:elk /usr/local/elasticsearch-6.4.0/

Start in the background (as the elk user): ./bin/elasticsearch -d
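Once the node has started (allow ~30 seconds), a quick sanity check from the same host, assuming the default port 9200 and no authentication:

```shell
ES_URL="http://localhost:9200"
# A healthy node answers with a JSON banner (cluster name, version, tagline).
curl -s --max-time 5 "$ES_URL" || echo "Elasticsearch not reachable at $ES_URL yet"
# Cluster health: on a single node, yellow is normal (replicas cannot be assigned).
curl -s --max-time 5 "$ES_URL/_cluster/health?pretty" || true
```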

5. Install the head plugin

Check prerequisites: Node.js and git must be installed.

Install Node.js:

1>. Add the yum repo: curl --silent --location https://rpm.nodesource.com/setup_8.x | bash -

2>. yum install -y nodejs

3>. node -v to check the version

Install git:

yum install -y git

git --version

Get head:

cd /usr/local

git clone git://github.com/mobz/elasticsearch-head.git

Wait for the clone to finish, then install the build tools:

npm install -g grunt-cli

npm install cnpm -g --registry=https://registry.npm.taobao.org

cd elasticsearch-head

cnpm install    # install head's dependencies through the mirror

6. Configure head

1>. Allow access from any IP

vi Gruntfile.js

Under connect -> server -> options add: hostname: '*',

Save and quit: :wq!

2>. Point head at the Elasticsearch server's IP

cd _site

vi app.js

Search for /localhost (around line 4374).

Change this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://localhost:9200"; replacing localhost with the address of the server Elasticsearch is installed on.

Save and quit: :wq!
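The app.js edit can also be scripted instead of done in vi; the path and IP below are examples matching this guide's layout — adjust them to your machine:

```shell
APP_JS=/usr/local/elasticsearch-head/_site/app.js
ES_HOST=192.168.239.143    # example: the server Elasticsearch runs on
# In-place substitution; head will then query the remote node instead of localhost.
if [ -f "$APP_JS" ]; then
  sed -i "s#http://localhost:9200#http://${ES_HOST}:9200#g" "$APP_JS"
fi
```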

3>. Enable cross-origin access in Elasticsearch

cd /usr/local/elasticsearch-6.4.0/config

vi elasticsearch.yml

Append to the end of the file:

http.cors.enabled: true
http.cors.allow-origin: '*'

To listen on an external interface, also add:

network.host: 192.168.239.143

4>. Stop the firewall:

systemctl status firewalld.service

systemctl stop firewalld.service

systemctl disable firewalld.service
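Disabling the firewall entirely is the bluntest option; on a machine that stays exposed you can instead open only the ports this stack uses (9200 Elasticsearch, 9100 head, 5601 Kibana, 5044 Beats). A sketch, assuming firewalld is installed and running:

```shell
# Open only the ELK ports instead of stopping firewalld.
PORTS="9200 9100 5601 5044"
if command -v firewall-cmd >/dev/null 2>&1; then
  for p in $PORTS; do
    firewall-cmd --permanent --add-port="${p}/tcp"
  done
  firewall-cmd --reload    # apply the permanent rules
fi
```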

5>. Start Elasticsearch and head

su elk

cd /usr/local/elasticsearch-6.4.0

./bin/elasticsearch -d

exit

Start head:

Option 1:

cd /usr/local/elasticsearch-head

cd node_modules/grunt/bin

./grunt server

Option 2:

cd /usr/local/elasticsearch-head

npm run start

Background start: nohup npm run start &

Verify: netstat -anltp | grep 9100

If Elasticsearch requires a username and password, enter http://douzi8:9200/?auth_user=XXX&auth_password=XXX in the connection box on the head page, then click Connect.

7. Install Kibana

1>. Download: Download Kibana Free | Get Started Now | Elastic

2>. Upload and extract: tar -zxvf kibana-6.4.0-linux-x86_64.tar.gz -C /usr/local

3>. cd /usr/local/

4>. mv kibana-6.4.0-linux-x86_64 kibana-6.4.0

5>. cd kibana-6.4.0/config

6>. vi kibana.yml and set server.host and elasticsearch.url to your server's IP address.

7>. Start Kibana: /usr/local/kibana-6.4.0/bin/kibana

     Background start: nohup /usr/local/kibana-6.4.0/bin/kibana &

     Verify: netstat -anltp | grep 5601 or ps -ef | grep node

To configure credentials, add to kibana.yml:

elasticsearch.username: "XXX"
elasticsearch.password: "XXX"

then restart Kibana.
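To confirm Kibana is up and can reach Elasticsearch, its status API can be queried (assuming the default port 5601 and no authentication):

```shell
KIBANA_URL="http://localhost:5601"
# /api/status returns JSON; an overall state of "green" means Kibana can reach Elasticsearch.
curl -s --max-time 5 "$KIBANA_URL/api/status" || echo "Kibana not reachable at $KIBANA_URL yet"
```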

8. Chinese word segmentation: elasticsearch-analysis-ik

Build from source:

Download: https://github.com/medcl/elasticsearch-analysis-ik (or the mirror: Gitee 极速下载/elasticsearch-analysis-ik)

Unzip into /usr/local/elasticsearch-analysis-ik

cd /usr/local/elasticsearch-analysis-ik

mvn clean install -Dmaven.test.skip=true

Then copy /usr/local/elasticsearch-analysis-ik/target/releases/elasticsearch-analysis-ik-6.2.4.zip into

/usr/local/elasticsearch-6.4.0/plugins/ik (create the ik directory yourself; note the plugin version must match your Elasticsearch version, so build from the matching tag)

Quick method:

./bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.0/elasticsearch-analysis-ik-6.4.0.zip (slow from some networks)

or download the zip via a proxy: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.0/elasticsearch-analysis-ik-6.4.0.zip
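After restarting Elasticsearch, a quick way to confirm the plugin loaded is an analysis request in the Kibana console; ik_max_word is one of the two analyzers the ik plugin registers (ik_smart is the coarser one), and the sample text is arbitrary:

```
GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "中华人民共和国国歌"
}
```

If the plugin is missing, this returns an error saying the analyzer cannot be found.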

9. Client request examples (Kibana Dev Tools console):

GET _search
{
  "query": {
    "match_all": {}
  }
}

PUT /website/blog/123
{
  "title": "My first blog entry",
  "text":  "Just trying this out...",
  "date":  "2014/01/01"
}

POST /website/blog/
{
  "title": "My second blog entry",
  "text":  "Still trying this out...",
  "date":  "2014/01/01"
}

GET /website/blog/123?pretty
GET /website/blog/124

GET /website/blog/123?_source=title,text
GET /website/blog/123/_source

PUT /website/blog/123
{
  "title": "My first blog entry",
  "text":  "I am starting to get the hang of this...",
  "date":  "2014/01/02"
}


PUT /website/blog/123?op_type=create
{
  "title": "My first blog entry1111",
  "text":  "I am starting to get the hang of this...",
  "date":  "2014/01/02"
}

PUT /website/blog/123/_create
{
  "title": "My first blog entry1111",
  "text":  "I am starting to get the hang of this...",
  "date":  "2014/01/02"
}

DELETE /website/blog/123


PUT /website/blog/1/_create
{
  "title": "My first blog entry",
  "text":  "Just trying this out..."
}

GET /website/blog/1

PUT /website/blog/1?version=1 
{
  "title": "My first blog entry",
  "text":  "Starting to get the hang of this..."
}

PUT /website/blog/2?version=5&version_type=external
{
  "title": "My first external blog entry",
  "text":  "Starting to get the hang of this..."
}

PUT /website/blog/2?version=10&version_type=external
{
  "title": "My first external blog entry",
  "text":  "This is a piece of cake..."
}

POST /website/blog/1/_update
{
   "doc" : {
      "tags" : [ "testing" ],
      "views": 0
   }
}

POST /website/blog/1/_update
{
   "script" : "ctx._source.views+=1"
}

POST /website/blog/1/_update
{
   "script":{
    "inline":"ctx._source.tags.add(params.new_tag)",
    "params":{
         "new_tag":"tag3"
    }
  }
}

GET /website/blog/2

GET /_mget
{
   "docs" : [
      {
         "_index" : "website",
         "_type" :  "blog",
         "_id" :    2
      },
      {
         "_index" : "website",
         "_type" :  "blog1",
         "_id" :    1,
         "_source": "views"
      }
   ]
}

GET /website/blog/_mget
{
   "docs" : [
      { "_id" : 2 },
      { "_id" : 1 }
   ]
}

GET /website/blog/_mget
{
   "ids" : [ "2", "1" ]
}




POST /_bulk
{ "delete": { "_index": "website", "_type": "blog", "_id": "123" }} 
{ "create": { "_index": "website", "_type": "blog", "_id": "123" }}
{ "title":    "My first blog post" }
{ "index":  { "_index": "website", "_type": "blog" }}
{ "title":    "My second blog post" }
{ "update": { "_index": "website", "_type": "blog", "_id": "123", "_retry_on_conflict" : 3} }
{ "doc" : {"title" : "My updated blog post"} } 

GET /website/blog/123?pretty

GET /product/book/1

// es7: anatomy of a full query
GET /ecg_r_wavees/_doc/_search
{
  "query": {
    "bool" : {
      "must": [
        {
          "term": {"recId" : "47dd44b0-4372-40c0-8787-24355037" }  # exact (non-analyzed) match
        },{
          "range":    # range query
            {
              "rL" : {
                "gt": "1",
                "lt": "800000"
              }
          }
        },{
          "fuzzy": { # fuzzy query: tolerates small typos in the value, e.g. searching 505 can also match 504 or 105; the allowed edit distance is set by "fuzziness", and "max_expansions" caps how many candidate terms are examined
            "abc": {
              "value": "505",
              "max_expansions": 1
            }
          }
        },{
            "wildcard": {  # wildcard (pattern) query
                "title": "*first*"
            }
        }, {
            "exists": {  # field must exist (filters out missing/null values)
                "field": "title"
            }
        }, {
            "match": {  # full-text match; only meaningful on analyzed text fields. The input is analyzed into terms (e.g. "这是" -> "这" + "是") and matched against the field's terms, so a document titled "这是一个标题" matches.
                "title": "这是"
            }
        }, {
          "match_phrase": { # phrase match: treats the input as one phrase, so a document matches only if its field contains these terms adjacent and in order
            "title": "标题啊"
          }
        }
      ],
      "must_not": [],
      "should": []
    }
  },
  "from": 0,
  "size": 10,
  "sort": [],
  "aggs": {}
}

# raise the maximum result window (allows deeper from+size paging)
PUT /ecg_r_wavees/_settings
{
  "max_result_window":200000000
}
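Raising max_result_window lets from+size page deeper, but every deep page still costs node memory. For exports or very deep paging, the scroll API is the intended tool; a minimal sketch against the same index (the scroll_id placeholder must be copied from the previous response):

```
POST /ecg_r_wavees/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}
```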


POST content_test/_doc/1
{
  "title":"douzi test",
  "channelRels":[{"id":1,"name":"douzi","deleted": 0}]
}

GET content_test/_doc/1

POST /content_test/_update/1/
{
   "script":{
    "lang": "painless",
    "inline":"if (ctx._source.containsKey('channelRels')) {if(ctx._source.channelRels.size() <= 0) {ctx._source.channelRels.add(params.doc)}else{boolean isHave = false;for (int i=0;i<ctx._source.channelRels.size();i++) {if (ctx._source.channelRels[i].id == params['doc'].id) {isHave = true;ctx._source.channelRels[i].putAll(params['doc'])}} if (isHave == false) { ctx._source.channelRels.add(params.doc) }}} else { ctx._source.channelRels = [params.doc] } ctx._source.channelRels.removeIf(it -> it.deleted == 1)",
    "params": {
        "doc": {
          "id": 1,
          "name": "douzi5",
          "deleted" : 0
        } 
    }
  }
}
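The one-line script above is hard to audit. Here is the same Painless code reindented only (no behavioral change): it merges the incoming channel into an existing entry by id, appends it if absent, then drops any entry marked deleted:

```
if (ctx._source.containsKey('channelRels')) {
  if (ctx._source.channelRels.size() <= 0) {
    ctx._source.channelRels.add(params.doc)
  } else {
    boolean isHave = false;
    for (int i = 0; i < ctx._source.channelRels.size(); i++) {
      if (ctx._source.channelRels[i].id == params['doc'].id) {
        isHave = true;
        ctx._source.channelRels[i].putAll(params['doc'])
      }
    }
    if (isHave == false) {
      ctx._source.channelRels.add(params.doc)
    }
  }
} else {
  ctx._source.channelRels = [params.doc]
}
ctx._source.channelRels.removeIf(it -> it.deleted == 1)
```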

10. Install and start logstash-6.4.0

   Download, extract, then edit the config file:

# config file: /usr/local/logstash-6.4.0/bin/logstash.conf
input{
    beats{
        codec => plain{charset => "UTF-8"}
        port => "5044"
    }
}
filter{
    mutate{
        remove_field => "@version"
        remove_field => "offset"
        remove_field => "input_type"
        remove_field => "beat"
        remove_field => "tags"
    }
    ruby{
        code => "event.timestamp.time.localtime"
    }
}
output{
    elasticsearch{
        codec => plain{charset => "UTF-8"}
        hosts => ["192.168.239.143:9200", "192.168.239.143:8200", "192.168.239.143:7200"]
    }
}

# Start in the foreground

bin/logstash -f logstash.conf

# Start in the background
nohup  /usr/local/logstash-6.4.0/bin/logstash -f /usr/local/logstash-6.4.0/bin/logstash.conf >> /dev/null 2>&1 &
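Before backgrounding Logstash it is worth validating the pipeline file first; Logstash ships a syntax check that parses the config and exits (paths as used in this guide):

```shell
LS_HOME=/usr/local/logstash-6.4.0
# --config.test_and_exit parses the pipeline and exits without starting it.
if [ -x "$LS_HOME/bin/logstash" ]; then
  "$LS_HOME/bin/logstash" -f "$LS_HOME/bin/logstash.conf" --config.test_and_exit
fi
```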

11. Filebeat

   Edit the config file:

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /tools/logs/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    level: info
  #  review: 1

   Start in the foreground:

     ./filebeat -e -c filebeat.yml -d "publish"

Start in the background:

nohup /usr/local/filebeat-6.4.0/filebeat -e -c /usr/local/filebeat-6.4.0/filebeat.yml >> /dev/null 2>&1 &
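Filebeat can likewise self-check before being backgrounded: `test config` validates the YAML, and `test output` tries to reach the configured Logstash/Elasticsearch endpoint (paths as used in this guide):

```shell
FB_HOME=/usr/local/filebeat-6.4.0
if [ -x "$FB_HOME/filebeat" ]; then
  "$FB_HOME/filebeat" test config -c "$FB_HOME/filebeat.yml"   # validate filebeat.yml
  "$FB_HOME/filebeat" test output -c "$FB_HOME/filebeat.yml"   # probe the configured output
fi
```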
