MySQL + Logstash Sync Example


7 Syncing Elasticsearch with MySQL

7.1 Logstash
7.1.1 Introduction

Logstash is a lightweight log collection and processing framework. It makes it easy to gather scattered, heterogeneous logs, apply custom processing to them, and ship the results to a destination of your choice, such as a server or a file.

Logstash and Elasticsearch are a classic pairing in enterprise deployments.
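Conceptually, every Logstash pipeline is declared in three stages: inputs read events, filters transform them, and outputs ship them. A minimal sketch, not tied to this chapter's setup (stdin, mutate, and stdout are stock plugins; the added field is purely illustrative):

input {
  stdin {}                                          # read events typed on the console
}
filter {
  mutate { add_field => { "source" => "demo" } }    # tag each event with an extra field
}
output {
  stdout { codec => rubydebug }                     # pretty-print each processed event
}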

7.1.2 Installing and Testing Logstash

Unzip the Logstash archive, then switch to its bin directory and run a quick smoke test. The console session below confirms the installation works.

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

D:\elasticsearch\logstash-5.6.8\bin>logstash -e 'input {stdin {}} output {stdout {}}'
Sending Logstash's logs to D:/elasticsearch/logstash-5.6.8/logs which is now configured via log4j2.properties
[2019-08-20T10:27:34,224][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/elasticsearch/logstash-5.6.8/modules/fb_apache/configuration"}
[2019-08-20T10:27:34,228][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/elasticsearch/logstash-5.6.8/modules/netflow/configuration"}
[2019-08-20T10:27:34,232][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"D:/elasticsearch/logstash-5.6.8/data/queue"}
[2019-08-20T10:27:34,234][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"D:/elasticsearch/logstash-5.6.8/data/dead_letter_queue"}
[2019-08-20T10:27:34,245][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"7e6b95b2-3318-4a77-a02a-dcedf0028914", :path=>"D:/elasticsearch/logstash-5.6.8/data/uuid"}
[2019-08-20T10:27:34,357][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2019-08-20T10:27:34,378][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2019-08-20T10:27:34,429][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
ing
2019-08-20T02:27:41.127Z 7DXOSPRVGV8O21O ing
out pag
2019-08-20T02:27:46.199Z 7DXOSPRVGV8O21O out pag
hellowrld
2019-08-20T02:27:55.814Z 7DXOSPRVGV8O21O hellowrld



7.1.3 Command-line options

-e runs a pipeline definition passed inline on the command line, as in the test above.

-f runs a pipeline definition from a file; the file contains the configuration to execute, and this is the form most production environments use. Both are shown below.

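For example, both forms as they are used in this chapter:

D:\elasticsearch\logstash-5.6.8\bin>logstash -e 'input {stdin {}} output {stdout {}}'
D:\elasticsearch\logstash-5.6.8\bin>logstash -f ..\mysqletc\mysql.conf

The first starts the inline echo pipeline from section 7.1.2; the second starts the MySQL sync pipeline defined in section 7.1.5.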
7.1.4 Creating the directory and preparing the JDBC driver

Create a mysqletc directory under the Logstash installation directory and put two files in it: the MySQL JDBC driver jar and the pipeline configuration file described in the next section. (The screenshot from the original post is no longer available.)
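Based on the paths used in the configuration and the start command below, the layout is:

D:\elasticsearch\logstash-5.6.8\
    bin\                                   (logstash startup scripts)
    mysqletc\
        mysql-connector-java-5.1.46.jar    (MySQL JDBC driver)
        mysql.conf                         (sync pipeline, section 7.1.5)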

7.1.5 Configuration file

Save the following as mysql.conf in the mysqletc directory:

input {
  jdbc {
    # MySQL JDBC connection string to the source database
    jdbc_connection_string => "jdbc:mysql://192.168.85.198:3306/tensquare_article?characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai"
    # the user to execute the statement as
    jdbc_user => "root"
    jdbc_password => "123456"
    # path to the downloaded JDBC driver
    jdbc_driver_library => "D:\elasticsearch\logstash-5.6.8\mysqletc\mysql-connector-java-5.1.46.jar"
    # driver class name for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50"
    # alternatively, point statement_filepath at a file holding the SQL to run
    #statement_filepath => ""
    statement => "SELECT id ,title,content,state from tb_article"
    # cron-style schedule; fields left to right are minute, hour, day of month, month, day of week.
    # "* * * * *" runs the query every minute.
    schedule => "* * * * *"
  }
}



output {
  elasticsearch {
    # Elasticsearch host and port
    hosts => "127.0.0.1:9200"
    # index name (your own choice)
    index => "search"
    # use the id column from the query as the document _id, so repeated
    # scheduled runs update existing documents instead of duplicating them
    document_id => "%{id}"
    document_type => "article"
  }
  stdout {
    # also print each event to the console as a JSON line
    codec => json_lines
  }
}
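Before starting the pipeline, you can check the file for syntax errors with Logstash's -t (--config.test_and_exit) flag, which parses the configuration and exits without running it:

D:\elasticsearch\logstash-5.6.8\bin>logstash -f ..\mysqletc\mysql.conf -t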




7.1.6 Starting the sync
D:\elasticsearch\logstash-5.6.8\bin>logstash -f ..\mysqletc\mysql.conf
Sending Logstash's logs to D:/elasticsearch/logstash-5.6.8/logs which is now configured via log4j2.properties
[2019-08-20T10:47:29,496][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"D:/elasticsearch/logstash-5.6.8/modules/fb_apache
[2019-08-20T10:47:29,499][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"D:/elasticsearch/logstash-5.6.8/modules/netflow/con
[2019-08-20T10:47:30,344][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2019-08-20T10:47:30,346][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1
[2019-08-20T10:47:30,447][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-08-20T10:47:30,530][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-08-20T10:47:30,537][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"
message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_t
properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>
}}}}}}
[2019-08-20T10:47:30,549][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2019-08-20T10:47:30,672][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2019-08-20T10:47:30,674][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5,
[2019-08-20T10:47:31,001][INFO ][logstash.pipeline        ] Pipeline main started
[2019-08-20T10:47:31,119][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-20T10:48:00,957][INFO ][logstash.inputs.jdbc     ] (0.009000s) SELECT version()
[2019-08-20T10:48:00,966][INFO ][logstash.inputs.jdbc     ] (0.004000s) SELECT count(*) AS `count` FROM (SELECT id ,title,content,state from tb_article) AS `t1` LIMIT 1
[2019-08-20T10:48:00,971][INFO ][logstash.inputs.jdbc     ] (0.003000s) SELECT * FROM (SELECT id ,title,content,state from tb_article) AS `t1` LIMIT 50 OFFSET 0
{"@version":"1","id":"1","state":"1","@timestamp":"2019-08-20T02:48:00.985Z","title":"中国","content":"我们都是中国人"}
{"@version":"1","id":"2","state":"1","@timestamp":"2019-08-20T02:48:00.989Z","title":"中国人很好","content":"滚滚滚只能是中古偶人"}
[2019-08-20T10:49:00,144][INFO ][logstash.inputs.jdbc     ] (0.001000s) SELECT version()
[2019-08-20T10:49:00,147][INFO ][logstash.inputs.jdbc     ] (0.002000s) SELECT count(*) AS `count` FROM (SELECT id ,title,content,state from tb_article) AS `t1` LIMIT 1
[2019-08-20T10:49:00,153][INFO ][logstash.inputs.jdbc     ] (0.003000s) SELECT * FROM (SELECT id ,title,content,state from tb_article) AS `t1` LIMIT 50 OFFSET 0
{"@version":"1","id":"1","state":"1","@timestamp":"2019-08-20T02:49:00.155Z","title":"中国","content":"我们都是中国人"}
{"@version":"1","id":"2","state":"1","@timestamp":"2019-08-20T02:49:00.156Z","title":"中国人很好","content":"滚滚滚只能是中古偶人"}


7.1.7 Fixing the 8-hour offset in synced timestamps

Logstash stores @timestamp in UTC, which trails Beijing time by 8 hours (compare the 10:48 log clock with the 02:48 @timestamp values above). One workaround is the filter below, which shifts @timestamp with a bit of ruby. It still feels somewhat fragile; I will revisit it later.

filter {
  # shift @timestamp forward 8 hours (UTC -> Beijing time)
  # step 1: store the shifted time in a temporary field
  ruby {
    code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
  }
  # step 2: write the shifted time back as @timestamp
  ruby {
    code => "event.set('@timestamp', event.get('timestamp'))"
  }
  # step 3: drop the temporary field
  mutate {
    remove_field => ["timestamp"]
  }
}
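Place this filter block between the input and output sections of mysql.conf. With it in place, the @timestamp on the events from section 7.1.6 would shift forward 8 hours, for example:

before: "@timestamp":"2019-08-20T02:49:00.155Z"
after:  "@timestamp":"2019-08-20T10:49:00.155Z"

Note the shifted value still carries the Z (UTC) suffix even though it now holds local time, which is part of why this workaround feels fragile.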

