Elasticsearch 6.5.4 Cluster Deployment Notes and Monitoring

Elastic Stack version 6.5.4:

IP address     Hostname   Elasticsearch role  Hot/cold tier  Installed components
10.19.145.159  datanode3  master,data         hot            Elasticsearch
10.19.162.134  datanode2  master,data         hot            Elasticsearch
10.19.94.155   datanode1  data                hot            Elasticsearch
10.19.102.65   namenode2  data                cold           Elasticsearch, Kibana
10.19.18.192   namenode1  data                cold           Elasticsearch, Logstash
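The hot/cold split in the last column is implemented through the node.attr.box_type attribute shown in the node configurations below. As a sketch only (the index name is illustrative), an index can be pinned to the hot tier, and later moved to cold with the same setting:

# curl -XPUT 'http://10.19.145.159:9200/trace_20191212/_settings' -H 'Content-Type: application/json' -d '
{ "index.routing.allocation.require.box_type": "hot" }'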

datanode3 configuration:

[root@datanode3 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: es_pro_cluster
node.name: datanode3
node.master: true
node.data: true
node.attr.box_type: hot
path.data: /data5/elasticsearch/data/
path.logs: /data5/elasticsearch/log/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%

discovery.zen.ping.unicast.hosts: ["10.19.145.159", "10.19.162.134","10.19.94.155","10.19.18.192","10.19.102.65"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 4
xpack.security.enabled: false
action.auto_create_index: true
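A quick way to confirm that the box_type attribute and the disk watermarks took effect (a sketch; assumes the HTTP API is reachable on localhost:9200):

# curl -s 'http://localhost:9200/_cat/nodeattrs?v&h=node,attr,value'
# curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty' | grep watermark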

-- Additional configuration:
#xpack.security.transport.ssl.enabled: true
#action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
action.auto_create_index: true
# After X-Pack is enabled, elasticsearch-head can no longer connect; the following parameters are needed:
#http.cors.enabled: true
#http.cors.allow-origin: '*'
#http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
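Once the CORS parameters are enabled, they can be verified from the command line (a sketch; assumes elasticsearch-head is served from port 9100):

# curl -s -I -H 'Origin: http://localhost:9100' 'http://localhost:9200/' | grep -i access-control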

datanode2 configuration:
[root@datanode2 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: es_pro_cluster
node.name: datanode2
node.master: true
node.data: true
node.attr.box_type: hot
path.data: /data5/elasticsearch/data/
path.logs: /data5/elasticsearch/log/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
discovery.zen.ping.unicast.hosts: ["10.19.145.159", "10.19.162.134","10.19.94.155","10.19.18.192","10.19.102.65"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 3
xpack.security.enabled: false
action.auto_create_index: true

datanode1 configuration:
[root@datanode1 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: es_pro_cluster
node.name: datanode1
node.data: true
node.attr.box_type: hot
path.data: /data5/elasticsearch/data/
path.logs: /data5/elasticsearch/log/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
discovery.zen.ping.unicast.hosts: ["10.19.145.159", "10.19.162.134","10.19.94.155","10.19.18.192","10.19.102.65"]
discovery.zen.minimum_master_nodes: 2
gateway.recover_after_nodes: 3
xpack.security.enabled: false
action.auto_create_index: true

namenode2 and namenode1 configuration (node.name differs per host):
#  cat /etc/elasticsearch/elasticsearch.yml | grep -v ^#
cluster.name: es_pro_cluster
node.name: namenode2
node.data: true
node.attr.box_type: cold
path.data: /data5/elasticsearch/data/
path.logs: /data5/elasticsearch/log/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
transport.tcp.compress: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
indices.fielddata.cache.size: 10%
indices.breaker.fielddata.limit: 30%
discovery.zen.ping.unicast.hosts: ["10.19.145.159", "10.19.162.134","10.19.94.155","10.19.18.192","10.19.102.65"]
gateway.recover_after_nodes: 3
xpack.security.enabled: false
action.auto_create_index: true


JVM settings on all nodes:
# cat /etc/elasticsearch/jvm.options
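The file contents were not captured in these notes. For reference only, a 6.5.x jvm.options normally sets equal minimum and maximum heap sizes, no larger than half of physical RAM and below ~31 GB; the values below are illustrative, not the ones actually used on this cluster:

-Xms8g
-Xmx8g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly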

-- Logstash configuration:
# cat logstash.yml  | grep -v ^#
path.data: /data5/logstash/data

pipeline.workers: 10
pipeline.batch.size: 1000
pipeline.batch.delay: 10


http.host: "10.19.67.225"
path.logs: /data5/logstash/log

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://10.19.145.159:9200", "http://10.19.162.134:9200","http://10.19.94.155:9200","http://10.19.18.192:9200","http://10.19.102.65:9200"]

xpack.monitoring.collection.interval: 5s
xpack.monitoring.collection.pipeline.details.enabled: true
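With the HTTP API bound as above, pipeline throughput can also be inspected directly (a sketch; assumes the default monitoring API port 9600):

# curl -s 'http://10.19.67.225:9600/_node/stats/pipelines?pretty'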

-- Logstash pipeline definition:
# cd /etc/logstash/conf.d
# cat logstash.conf 
input {
    beats {
        port => 5044
        codec => "json"
    }
}
filter {

    ruby {
        path => "/etc/logstash/conf.d/test.rb"
    }

    json {
        source => "business"
        remove_field => ["@version","path","host","tags","header","body","business"]
    }

}
output {
    # stdout {}
    webhdfs {
        host => "master1"              # (required)
        port => 14000                      # (optional, default: 50070)
        path => "/user/hive/warehouse/yjp_trace.db/yjp_ods_trace/day=%{op_day}/logstash-%{op_hour}.log"  # (required)
        user => "hdfs"                       # (required)
        # compression => "snappy"
        # snappy_format => "stream"
        codec => line {
                     format => "%{message}"
                 }
    }
    elasticsearch {
        hosts => ["datanode1:9200","datanode2:9200","datanode3:9200"]
        index => "trace_%{op_day}"
        #template => "/data1/cloud/logstash-5.5.1/filebeat-template.json"
        #template_name => "my_index"
        #template_overwrite => true
    }

}
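Before (re)starting the service, the pipeline syntax can be validated first (a sketch; the binary path assumes a standard RPM install):

# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit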

# cat test.rb 
# the value of `params` is the value of the hash passed to `script_params`
# in the logstash configuration
def register(params)
    # params holds whatever was passed in via script_params in the Logstash config
    @message = params["message"]
end

# the filter method receives an event and must return a list of events.
# Dropping an event means not including it in the return array,
# while creating new ones only requires you to add a new instance of
# LogStash::Event to the returned array
# every field produced by the input stage can be read from the event argument
def filter(event)
    application_id = event.get('[header][meta][applicationId]')
    ip = event.get('[header][runtime][ip]')
    os_ver = event.get('[header][runtime][osVer]')
    device_id = event.get('[header][runtime][deviceID]')
    app_type =  event.get('[header][runtime][appType]') 
    browser_ver =  event.get('[header][runtime][browserVer]')
    browser_type =  event.get('[header][runtime][browserType]') 
    device_type =  event.get('[header][runtime][deviceType]') 
    os =  event.get('[header][runtime][os]') 
    device_brand =  event.get('[header][runtime][deviceBrand]') 
    app_ver =  event.get('[header][runtime][appVer]')
    page_id =  event.get('[body][meta][pageID]') 
    event_id =  event.get('[body][meta][eventID]') 
    business =  event.get('[body][business]')
    city_id =  event.get('[body][runtime][cityID]') 
    url_request_Seq_num =  event.get('[body][runtime][urlRequestSeqNum]')     
    session_id =  event.get('[body][runtime][sessionID]') 
    event_time =  event.get('[body][runtime][eventTime]')    
    log_type =  event.get('[body][runtime][logType]')    
    page_url =  event.get('[body][runtime][pageUrl]')  
    ref_page_id =  event.get('[body][runtime][refPageID]')
    uuid =  event.get('[body][runtime][uuid]')       
    first_request_flag =  event.get('[body][runtime][firstRequestFlag]')   
    last_request_flag =  event.get('[body][runtime][lastRequestFlag]')   
    uid =  event.get('[body][runtime][uid]')  
    page_duration =  event.get('[body][runtime][pageDuration]')      
    request_time =  event.get('[body][runtime][requestTime]') 

    if application_id.nil?
        application_id = ""
    end
    if ip.nil?
        ip = ""
    end
    if os_ver.nil?
        os_ver = ""
    end
    if device_id.nil?
        device_id = ""
    end
    if app_type.nil?
        app_type = ""
    end
    if browser_ver.nil?
        browser_ver = ""
    end
    if browser_type.nil?
        browser_type = ""
    end
    if device_type.nil?
        device_type = ""
    end
    if os.nil?
        os = "null"
    end
    if device_brand.nil?
        device_brand = ""
    end
    if app_ver.nil?
        app_ver = ""
    end
    if page_id.nil?
        page_id = ""
    end
    if event_id.nil?
        event_id = ""
    end
    if business.nil?
        business = "{}"
    else
        business=business.to_json
    end
    if city_id.nil?
        city_id = "-1"
    end
    if url_request_Seq_num.nil?
        url_request_Seq_num = 0
    end
    if session_id.nil?
        session_id = ""
    end
    if event_time.nil?
        event_time = ""
    end
    if log_type.nil?
        log_type = ""
    end
    if page_url.nil?
        page_url = ""
    end
    if ref_page_id.nil?
        ref_page_id = ""
    end
    if uuid.nil?
        uuid = ""
    end
    if first_request_flag.nil?
        first_request_flag = false
    end
    if first_request_flag==""
        first_request_flag = false
    end

    if last_request_flag.nil?
        last_request_flag = false
    end
    if last_request_flag==""
        last_request_flag = false
    end
    if uid.nil?
        uid = uuid
    end
    if page_duration.nil?
        page_duration = 0
    end

    begin
        Integer(page_duration)
    rescue
        page_duration = 0
    end

    if request_time.nil?
        request_time = 0
    end
    begin
        Integer(request_time)
    rescue
        request_time = 0
    end
    message = application_id + "\001" + ip + "\001" + os_ver + "\001" + device_id + "\001" + app_type + "\001"  + browser_ver + "\001"  + \
        browser_type.to_s + "\001"  + device_type.to_s + "\001"  + os.to_s + "\001"  + device_brand.to_s+ "\001"  + app_ver.to_s+ "\001"  +  \
        page_id.to_s + "\001"  + event_id.to_s + "\001" +  \
        business.to_s + "\001" + city_id.to_s + "\001" + url_request_Seq_num.to_s + "\001"  + session_id.to_s+ "\001"  + event_time.to_s + "\001"  + \
        log_type.to_s + "\001"  + page_url.to_s+ "\001"  + ref_page_id.to_s + "\001"  + uuid.to_s + "\001"  + first_request_flag.to_s + "\001" +  \
        last_request_flag.to_s + "\001" + uid.to_s + "\001" + page_duration.to_s + "\001" + request_time.to_s
    event.set('message',message)
    event.set('business',business)
    
    event.set('application_id',application_id)
    event.set('ip',ip)
    event.set('os_ver',os_ver)
    event.set('device_id',device_id)
    event.set('app_type',app_type)
    event.set('browser_ver',browser_ver)
    event.set('browser_type',browser_type)
    event.set('device_type',device_type)
    event.set('os',os)
    event.set('device_brand',device_brand)
    event.set('app_ver',app_ver)
    event.set('page_code',page_id)
    event.set('event_code',event_id)
    event.set('city_id',city_id)
    event.set('url_request_Seq_num',url_request_Seq_num)
    event.set('session_id',session_id)
    event.set('event_time',event_time)
    event.set('log_type',log_type)
    event.set('page_url',page_url)
    event.set('ref_page_id',ref_page_id)
    event.set('uuid',uuid)
    event.set('first_request_flag',first_request_flag)
    event.set('last_request_flag',last_request_flag)
    event.set('uid',uid)
    event.set('page_duration',Integer(page_duration))
    event.set('request_time',Integer(request_time))
    # add the current system time, used for the day/hour partition fields
    today = Time.new
    op_day = today.strftime("%Y%m%d")
    op_hour = today.strftime("%H")
    event.set("op_day",op_day)
    event.set("op_hour",op_hour)
    return [event]
end

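The long run of nil checks in filter() could be collapsed with a small helper; the snippet below is only a sketch of that idea (get_or is a hypothetical name, not part of the original script):

# Hypothetical helper: return the field value, or a default when it is missing.
def get_or(event, field, default = "")
    v = event.get(field)
    v.nil? ? default : v
end

# Example: application_id = get_or(event, '[header][meta][applicationId]')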
-- Start and check Logstash:
# systemctl status logstash
# systemctl start logstash
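To follow the pipeline while it starts (a sketch; the file name assumes the default logstash-plain.log under the configured path.logs):

# journalctl -u logstash -f
# tail -f /data5/logstash/log/logstash-plain.log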


-- Kibana configuration:
# cat /etc/kibana/kibana.yml | grep -v ^#
server.port: 5601
server.host: "10.19.145.159"
elasticsearch.url: "http://10.19.145.159:9200"
kibana.index: ".kibana"
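Once Kibana is up, its status endpoint gives a quick health check (a sketch; uses the host and port configured above):

# curl -s 'http://10.19.145.159:5601/api/status'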

-- Filebeat + Logstash configuration:
-- Filebeat:
## cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/trace/json/*/*.json
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

  # Buffer size used when harvesting a single file; default 16384 (16 KB)
  harvester_buffer_size: 104857600
  # Maximum size of a single event; default 10485760 (10 MB)
  max_bytes: 104857600

  # If a file has not been updated within this duration, Filebeat closes the file handle
  close_inactive: 3s

  # Filebeat gives every harvester a predefined lifetime
  close_timeout: 1m
  # Close the file as soon as EOF is reached; useful when files are written once and never updated afterwards.
  close_eof: true
  # Close the harvester immediately when the file is removed.
  close_removed: true
  # Remove a file's state after the given period of inactivity has passed.
  clean_inactive: 48h

  ignore_older: 1h

  scan_frequency: 10s

  # How long Filebeat waits before checking a file again after reaching EOF
  backoff: 5s
  # Maximum time Filebeat waits before checking a file again after EOF: backoff <= max_backoff <= scan_frequency
  max_backoff: 10s


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
# output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.16.4.53:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
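The configuration and the connection to Logstash can be verified before enabling the service (a sketch; the test sub-commands ship with the standard 6.x Filebeat package):

# filebeat test config -c /etc/filebeat/filebeat.yml
# filebeat test output -c /etc/filebeat/filebeat.yml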


-- Logstash (this variant reads the JSON files directly instead of receiving them from Filebeat):
## cat /etc/logstash/conf.d/logstash.conf 
input {
    #beats {
    #    port => 5044
    #    codec => "json"
    #}

    file {
           file_completed_action => "delete"
           path => "/data/trace/json/*/*.json"
           mode => "read"
           # Do not process files older than 120 days (the default is one day)
           #ignore_older => "10368000"
           codec => "json"
           # Read at most 10 chunks from a single file per pass
           file_chunk_count => "10"
           # Chunk size: 1 MB
           file_chunk_size => "1048576"

           sincedb_clean_after => "2"

           #max_open_files => "200"

    }
}
filter {

    ruby {
        path => "/etc/logstash/conf.d/test.rb"
    }

    json {
        source => "business"
        remove_field => ["@version","path","host","tags","header","body","business"]
    }

}
output {
    # stdout {}
    webhdfs {
       host => "10.19.94.240"              # (required)
       port => 14000                      # (optional, default: 50070)
       path => "/user/hive/warehouse/yjp_trace.db/yjp_ods_trace/day=%{op_day}/logstash-%{op_hour}.log"  # (required)
       user => "hdfs"                       # (required)
       # compression => "snappy"
       # snappy_format => "stream"
       codec => line {
                    format => "%{message}"
                }
    }
    elasticsearch {
        hosts => ["172.16.4.51:9200","172.16.4.52:9200","172.16.4.53:9200"]
        index => "trace_%{op_day}"
        #template => "/data1/cloud/logstash-5.5.1/filebeat-template.json"
        #template_name => "my_index"
        #template_overwrite => true
    }

}
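After the pipeline has run, the webhdfs output can be spot-checked on HDFS (a sketch; the partition value below is illustrative):

# hdfs dfs -ls /user/hive/warehouse/yjp_trace.db/yjp_ods_trace/day=20191212/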




-- Cluster check:
http://10.19.102.65:9200/_cat/nodes
10.19.102.65  49 83 12 6.18 6.41 6.57 mdi - namenode2
10.19.162.134 53 81 16 6.42 6.54 6.66 mdi - datanode2
10.19.145.159 52 99 22 5.92 5.98 6.15 mdi * datanode3
10.19.94.155  64 75 22 7.04 7.23 7.53 mdi - datanode1
10.19.18.192  46 81 20 6.38 6.37 6.48 mdi - namenode1
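A few more _cat endpoints that are handy for the same kind of spot check (a sketch):

# curl -s 'http://10.19.102.65:9200/_cat/health?v'
# curl -s 'http://10.19.102.65:9200/_cat/indices?v'
# curl -s 'http://10.19.102.65:9200/_cat/allocation?v'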

Monitoring overview (screenshot omitted):

Data view (screenshot omitted):

Monitoring of another cluster (screenshot omitted):

Indices (screenshot omitted):

Logstash monitoring (screenshot omitted):

Analysis of some startup errors:

# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Thu 2019-12-12 11:45:02 CST; 6s ago
     Docs: http://www.elastic.co
  Process: 6666 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=1/FAILURE)
 Main PID: 6666 (code=exited, status=1/FAILURE)

Dec 12 11:45:02 hadoop103 systemd[1]: Started Elasticsearch.
Dec 12 11:45:02 hadoop103 elasticsearch[6666]: which: no java in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
Dec 12 11:45:02 hadoop103 elasticsearch[6666]: warning: Falling back to java on path. This behavior is deprecated. Specify JAVA_HOME
Dec 12 11:45:02 hadoop103 elasticsearch[6666]: could not find java; set JAVA_HOME
Dec 12 11:45:02 hadoop103 systemd[1]: elasticsearch.service: main process exited, code=exited, status=1/FAILURE
Dec 12 11:45:02 hadoop103 systemd[1]: Unit elasticsearch.service entered failed state.
Dec 12 11:45:02 hadoop103 systemd[1]: elasticsearch.service failed.

Explanation: Java was installed in a custom location, so the binary is not at the default /usr/local/bin/java.

The startup script therefore cannot find the java binary; creating a symlink is enough:
# ln -s /opt/module/jdk1.8.0_221/bin/java /usr/local/bin/java 
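An alternative (a sketch, not what was done here) is to set JAVA_HOME for the service in /etc/sysconfig/elasticsearch, which the RPM unit file reads:

JAVA_HOME=/opt/module/jdk1.8.0_221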

Then restart the service:
# systemctl restart elasticsearch
# systemctl status elasticsearch  
