ELK: Syncing MySQL Data with Logstash

This article shows how to sync real-time data from MySQL into the ELK (Elasticsearch, Logstash, Kibana) stack by configuring Logstash's jdbc input plugin for incremental synchronization.

The previous article got Logstash up and running, receiving TCP log messages from the microservices.

This article focuses on syncing real-time data from MySQL.

First, make a copy of the config file logstash-7.3.0\config\logstash-sample.conf named logstash-mysql.conf and edit it as follows:

# Logstash configuration for a simple
# MySQL -> Logstash -> Elasticsearch pipeline.

input {
  #beats {
  #  port => 5044
  #}
  jdbc {
      # Database connection: database "lip", table "viewlogs"
      jdbc_connection_string => "jdbc:mysql://localhost:14416/lip?characterEncoding=UTF-8"
      # Credentials
      jdbc_user => "root"
      jdbc_password => "root"
      # Path to the MySQL JDBC driver jar
      jdbc_driver_library => "E:/mysql-connector-java-5.1.40.jar"
      # MySQL driver class
      jdbc_driver_class => "com.mysql.jdbc.Driver"
      jdbc_paging_enabled => "true"
      jdbc_page_size => "100"
      statement_filepath => "E:\logstash-7.3.0\config\viewlogs.sql"
      #statement => "select * from viewlogs"
      # Cron-style schedule: run the query once every minute
      schedule => "* * * * *"

      # Record the last run. When true, the last value of tracking_column
      # seen by the previous run is saved to the file given by
      # last_run_metadata_path.
      record_last_run => true

      # When record_last_run is true, set this to true to track a column of
      # your choosing instead of the default, which is the run timestamp.
      use_column_value => true

      # Required when use_column_value is true: the database column to track.
      # It must be monotonically increasing, e.g. an auto-increment id.
      tracking_column => "id"

      # File that records the last value of tracking_column.
      # If the table had 10000 rows on the previous run, this file will
      # contain 10000, and the next run can resume from row 10001. The SQL
      # statement only needs WHERE id > :sql_last_value, where
      # :sql_last_value is substituted with the value stored in this file.
      last_run_metadata_path => "E:\logstash-7.3.0\config\viewlogs"

      # Whether to clear last_run_metadata_path. When true, every run starts
      # from scratch and re-reads all rows in the table.
      clean_run => false

      # Whether to lowercase column names
      #lowercase_column_names => false
    }
}
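The record_last_run / tracking_column mechanics above boil down to: remember the highest id seen so far, fetch only rows beyond it on the next scheduled run, then persist the new maximum. A minimal Python sketch of that loop (the state-file name and row data are made up for illustration; Logstash's actual metadata file uses a YAML-ish format):

```python
import json
import os

STATE_FILE = "viewlogs.last_run"  # stands in for last_run_metadata_path

def load_last_value(path):
    # Returns the last tracked id, or 0 if no state has been recorded yet.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["last_value"]
    return 0

def fetch_incremental(rows, last_value):
    # Equivalent of: SELECT * FROM viewlogs WHERE id > :sql_last_value
    return [r for r in rows if r["id"] > last_value]

def save_last_value(path, rows, last_value):
    # Persist the highest id seen so the next run resumes after it.
    new_last = max([r["id"] for r in rows], default=last_value)
    with open(path, "w") as f:
        json.dump({"last_value": new_last}, f)
    return new_last

# Simulated table contents across two scheduled runs.
table = [{"id": i, "nums": i * 10} for i in range(1, 6)]

last = load_last_value(STATE_FILE)
batch1 = fetch_incremental(table, last)           # first run: all 5 rows
last = save_last_value(STATE_FILE, batch1, last)

table.append({"id": 6, "nums": 60})               # a new row arrives
batch2 = fetch_incremental(table, last)           # next run: only the new row
os.remove(STATE_FILE)                             # clean up the demo state file
print(len(batch1), len(batch2), batch2[0]["id"])  # 5 1 6
```

This is also why clean_run => false matters: with clean_run => true the stored value is discarded on restart and every row is re-read from the beginning.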

output {

  elasticsearch {
    hosts => ["http://localhost:9200"]
    # Per-minute index
    #index => "mysql-%{+YYYY.MM.dd.HH.mm}"
    # Per-hour index
    index => "mysql-%{+YYYY.MM.dd.HH}"
    #index => "%{[servicename]}"
    #index => "logstash-%{[fields][document_type]}-%{+YYYY.MM.dd}"
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
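The index => "mysql-%{+YYYY.MM.dd.HH}" pattern above rolls over to a new index every hour (Logstash expands Joda-style date tokens against the event's @timestamp, in UTC). For reference, the equivalent formatting in Python:

```python
from datetime import datetime

def index_name(ts, per_hour=True):
    # Joda "YYYY.MM.dd.HH" roughly maps to strftime "%Y.%m.%d.%H".
    # Note: Logstash formats @timestamp in UTC, not local time.
    fmt = "mysql-%Y.%m.%d.%H" if per_hour else "mysql-%Y.%m.%d.%H.%M"
    return ts.strftime(fmt)

ts = datetime(2019, 8, 6, 16, 4)
print(index_name(ts))                  # mysql-2019.08.06.16    (per hour)
print(index_name(ts, per_hour=False))  # mysql-2019.08.06.16.04 (per minute)
```

Per-minute indices are handy for watching the sync work during testing, but per-hour (or coarser) is a more reasonable default, since every index carries shard overhead in Elasticsearch.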

The contents of viewlogs.sql:

select * from viewlogs where id > :sql_last_value

The viewlogs table is defined as:

CREATE TABLE `viewlogs` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `nums` int(11) DEFAULT NULL,
  `rank` int(11) DEFAULT NULL,
  `date` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=26 DEFAULT CHARSET=utf8mb4;
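To see the incremental query in action without a live MySQL instance, the same schema and WHERE clause can be tried against an in-memory sqlite3 database (column types simplified, and the row values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE viewlogs (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        nums INTEGER,
        rank INTEGER,
        date TEXT
    )
""")
conn.executemany(
    "INSERT INTO viewlogs (nums, rank, date) VALUES (?, ?, ?)",
    [(100, 1, "2019-08-06 16:00:00"),
     (200, 2, "2019-08-06 16:01:00"),
     (300, 3, "2019-08-06 16:02:00")],
)

# First run: sql_last_value starts at 0, so every row matches.
sql_last_value = 0
rows = conn.execute(
    "SELECT * FROM viewlogs WHERE id > ?", (sql_last_value,)
).fetchall()
sql_last_value = max(r[0] for r in rows)  # Logstash tracks the id column

# A new row arrives before the next scheduled run.
conn.execute(
    "INSERT INTO viewlogs (nums, rank, date) VALUES (400, 4, '2019-08-06 16:03:00')"
)

# Next run: only rows beyond the tracked id are returned.
new_rows = conn.execute(
    "SELECT * FROM viewlogs WHERE id > ?", (sql_last_value,)
).fetchall()
print(len(rows), len(new_rows), new_rows[0][0])  # 3 1 4
```

The key property is that the query is cheap on every run: it touches only rows appended since the last tracked id, which is what makes a once-a-minute schedule sustainable.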


With the configuration in place, stop Logstash and start it again, pointing it at the new config file:

PS C:\windows\system32> cd E:\logstash-7.3.0\bin
PS E:\logstash-7.3.0\bin> .\logstash.bat -f ..\config\logstash-mysql.conf

Startup log:

PS E:\logstash-7.3.0\bin> .\logstash.bat -f ..\config\logstash-mysql.conf
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to E:/logstash-7.3.0/logs which is now configured via log4j2.properties
[2019-08-06T16:04:49,711][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-08-06T16:04:49,732][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-08-06T16:04:51,955][INFO ][org.reflections.Reflections] Reflections took 60 ms to scan 1 urls, producing 19 keys and 39 values
[2019-08-06T16:04:53,123][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-08-06T16:04:53,398][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-08-06T16:04:53,478][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-08-06T16:04:53,484][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-08-06T16:04:53,519][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-08-06T16:04:53,599][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-08-06T16:04:53,652][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-08-06T16:04:53,659][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x4d52bedc run>"}
[2019-08-06T16:04:53,707][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{