Memo: deploying ELK on a virtual machine

I installed CentOS 7 in a virtual machine to use as a practice server for ELK.

Download the ELK archives from the official site; I used version 6.8.0.

I created an elksofts folder under the home directory, put all the downloaded archives in it, and then extracted them.
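For reference, the extraction step can be sketched like this (the archive names assume the standard 6.8.0 downloads; adjust if yours differ):

```shell
# Extract all three archives in place under /home/elksofts
cd /home/elksofts
tar -zxvf elasticsearch-6.8.0.tar.gz
tar -zxvf kibana-6.8.0-linux-x86_64.tar.gz
tar -zxvf logstash-6.8.0.tar.gz
```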

Errors you may run into when starting Elasticsearch:

1. First, create a dedicated user. If you just extract the archive and try to start Elasticsearch as root without any further configuration, it refuses to run, with an error roughly like this:

----------------

[root@localhost bin]# ./elasticsearch
warning: Falling back to java on path. This behavior is deprecated. Specify JAVA_HOME
[2020-07-18T17:56:30,467][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [talent_node_1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.8.0.jar:6.8.0]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.8.0.jar:6.8.0]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:103) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-6.8.0.jar:6.8.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-6.8.0.jar:6.8.0]
        ... 6 more
[root@localhost bin]#

-------------------------------

2. For the detailed steps to add a user, you can refer to:
https://www.cnblogs.com/gcgc/p/10297563.html

After creating the user, you must also give it ownership of the Elasticsearch directory; otherwise startup still fails, with a 权限不够 (permission denied) error:

[esuser@localhost bin]$ ./elasticsearch -d
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[esuser@localhost bin]$ 2020-07-17 08:41:42,817 main ERROR RollingFileManager (/home/elksofts/elasticsearch-6.8.0/logs/elasticsearch.log) java.io.FileNotFoundException: /home/elksofts/elasticsearch-6.8.0/logs/elasticsearch.log (权限不够) java.io.FileNotFoundException: /home/elksofts/elasticsearch-6.8.0/logs/elasticsearch.log (权限不够)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:640)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager$RollingFileManagerFactory.createManager(RollingFileManager.java:608)
        at org.apache.logging.log4j.core.appender.AbstractManager.getManager(AbstractManager.java:113)
        at org.apache.logging.log4j.core.appender.OutputStreamManager.getManager(OutputStreamManager.java:114)
        at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.getFileManager(RollingFileManager.java:188)
        at org.apache.logging.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:145)
        at org.apache.logging.log4j.core.appender.RollingFileAppender$Builder.build(RollingFileAppender.java:61)
        at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:123)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:959)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:899)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:891)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:514)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:238)
        at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:250)
        at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:547)
        at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:263)
        at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:234)
        at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:127)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:302)
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
        at org.elasticsearch.cli.Command.main(Command.java:90)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93)
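The fix for both errors above can be sketched as follows (run as root; the user name and paths match the layout described earlier, adjust to your own):

```shell
# Create a dedicated user -- Elasticsearch refuses to start as root
useradd esuser
# Hand the whole Elasticsearch directory (including logs/ and data/)
# over to that user so it can write its log files
chown -R esuser:esuser /home/elksofts/elasticsearch-6.8.0
# Switch to the new user and start Elasticsearch in the foreground
su - esuser
cd /home/elksofts/elasticsearch-6.8.0/bin
./elasticsearch
```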

 

 

To start Elasticsearch in the background instead, pass the -d flag:

[esuser@localhost bin]$ ./elasticsearch -d

---------------------------------------------------------------------------------------------------------

That should cover starting Elasticsearch. These notes are mainly a memo for myself; if anything here is wrong, corrections are welcome.
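Once it is running, a quick way to confirm the node is up is to query its root endpoint (the IP matches the VM used in these notes; adjust for your setup):

```shell
# A healthy node answers with a JSON blob containing the
# cluster name and the version number (6.8.0)
curl http://192.168.0.101:9200/
```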

In the extracted Kibana directory, find the kibana.yml configuration file under config; only a few settings need to change to make Kibana work with the Elasticsearch instance started above.
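A minimal sketch of the kind of changes involved, assuming the Elasticsearch node from above is reachable at 192.168.0.101:9200 (illustrative, not a copy of my exact settings):

```yaml
# Listen on all interfaces so the VM's Kibana is reachable from the host
server.host: "0.0.0.0"
server.port: 5601
# Point Kibana at the Elasticsearch instance started earlier
elasticsearch.hosts: ["http://192.168.0.101:9200"]
```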

Once configured, switch to Kibana's bin directory and start it directly:

[root@localhost bin]# ./kibana

After it starts, you can open it in a browser: http://192.168.0.101:5601

Barring surprises, there should not be any problems.

---------------------------------------------------------------------------------------------------------

 

Syncing data from MySQL to Elasticsearch with Logstash

I created a mysql folder under Logstash's bin directory; it holds the sync configuration files and the MySQL JDBC driver jar.

 

Contents of tid.conf:

--------------------------------------------------------------------------------------------------------------------------------

input {
    stdin { }
    jdbc {
        type => "custom_my"

        # MySQL connection string; ztwbxwtest is the database name
        jdbc_connection_string => "jdbc:mysql://192.168.0.103:3306/ztwbxwtest?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"

        # Credentials
        jdbc_user => "root"
        jdbc_password => "123456"

        # Path to the MySQL JDBC driver jar
        jdbc_driver_library => "/home/elksofts/logstash-6.8.0/bin/mysql/mysql-connector-java-5.1.37.jar"

        # Driver class name
        jdbc_driver_class => "com.mysql.jdbc.Driver"

        # Path (directory + name) of the SQL file to execute
        statement_filepath => "/home/elksofts/logstash-6.8.0/bin/mysql/tid.sql"

        # Record the position of the last run
        record_last_run => "true"
        # Track a specific column; when true, tracking_column must be set (the default is the timestamp)
        use_column_value => "true"
        # The column to track
        tracking_column => "f_custom_id"
        # Type of the tracked column: only numeric and timestamp are supported; numeric is the default
        #tracking_column_type => "timestamp"

        # Time zone
        #jdbc_default_timezone =>"Asia/Shanghai"

        # Clear the contents of last_run_metadata_path on every start
        clean_run => "true"

        # This file stores :sql_last_value (defaults: 1970-01-01 for timestamps, 0 for numbers).
        # In the SQL, just write WHERE my_id > :sql_last_value; :sql_last_value is read from this file.
        last_run_metadata_path => "/home/elksofts/logstash-6.8.0/bin/mysql/tid.txt"

        # How often to sync (cron syntax: every minute)
        schedule => "* * * * *"
        # Pagination
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
    }

    jdbc {
        type => "project_my"

        # MySQL connection string; ztwbxwtest is the database name
        jdbc_connection_string => "jdbc:mysql://192.168.0.103:3306/ztwbxwtest?characterEncoding=utf8&useSSL=false&serverTimezone=UTC&rewriteBatchedStatements=true"

        # Credentials
        jdbc_user => "root"
        jdbc_password => "123456"

        # Path to the MySQL JDBC driver jar
        jdbc_driver_library => "/home/elksofts/logstash-6.8.0/bin/mysql/mysql-connector-java-5.1.37.jar"

        # Driver class name
        jdbc_driver_class => "com.mysql.jdbc.Driver"

        # Path (directory + name) of the SQL file to execute
        statement_filepath => "/home/elksofts/logstash-6.8.0/bin/mysql/projectv2.sql"

        # Record the position of the last run
        record_last_run => "true"
        # Track a specific column; when true, tracking_column must be set (the default is the timestamp)
        use_column_value => "true"
        # The column to track
        tracking_column => "f_project_id"
        # Type of the tracked column: only numeric and timestamp are supported; numeric is the default
        #tracking_column_type => "timestamp"

        # Time zone
        #jdbc_default_timezone =>"Asia/Shanghai"

        # Clear the contents of last_run_metadata_path on every start
        clean_run => "true"

        # This file stores :sql_last_value (defaults: 1970-01-01 for timestamps, 0 for numbers).
        # In the SQL, just write WHERE my_id > :sql_last_value; :sql_last_value is read from this file.
        last_run_metadata_path => "/home/elksofts/logstash-6.8.0/bin/mysql/tidv2.txt"

        # How often to sync (cron syntax: every minute)
        schedule => "* * * * *"
        # Pagination
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
    }
}
filter {
    json {
        source => "message"
        remove_field => ["message"]
    }
    mutate {
        # Fields to drop, if any
        #remove_field => "@version"
        #remove_field => "@timestamp"
    }
}
output {
    stdout {
        codec => rubydebug
    }

    if [type] == "custom_my" {
        elasticsearch {
            hosts => ["http://192.168.0.101:9200/"]
            index => "talent20200716"
            #action => "update"
            #action => "delete"
            upsert => "update"
            #doc_as_upsert => true
            document_id => "%{f_custom_id}"
        }
    }
    if [type] == "project_my" {
        elasticsearch {
            hosts => ["http://192.168.0.101:9200/"]
            index => "project20200716"
            #action => "update"
            #action => "delete"
            upsert => "update"
            #doc_as_upsert => true
            document_id => "%{f_project_id}"
        }
    }
}

--------------------------------------------------------------------------------------------------------------------------------

 

Contents of tid.sql:

SELECT * FROM book_my where id > :sql_last_value
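With use_column_value enabled, Logstash remembers the largest tracked value it has seen in the last_run_metadata_path file and substitutes it for :sql_last_value on the next run. In my experience the file holds a single YAML-serialized value; an illustrative example (the number is made up):

```
--- 1024
```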

That is roughly all of the configuration. Switch to Logstash's bin directory and start it with the config file:

[root@localhost bin]# ./logstash -f mysql/tid.conf

After the steps above, the MySQL data gets synced into Elasticsearch, and at that point you can see it in Kibana.
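A quick way to check that documents actually arrived, without going through Kibana (host and index name as configured above):

```shell
# Ask Elasticsearch how many documents the index holds
curl http://192.168.0.101:9200/talent20200716/_count
```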

 

 

=======

Logstash fails to start with the following error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after

This is likely a config-file encoding problem; I fixed it by following this article:
https://blog.csdn.net/Crazy_T_B/article/details/79422602?utm_source=blogxgwz0

(The config file was saved as UTF-8 with a BOM, meaning there are invisible bytes at the very start of the first line. The fix is to re-save the file as plain UTF-8 without a BOM.)
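Removing the BOM can also be done from the command line; a minimal sketch (the demo file name is made up, point this at your real config instead):

```shell
# Simulate a config file saved with a UTF-8 BOM (bytes EF BB BF, octal 357 273 277)
printf '\357\273\277input { stdin { } }\n' > tid_demo.conf
# If the first three bytes are a BOM, rewrite the file without them
if [ "$(head -c 3 tid_demo.conf)" = "$(printf '\357\273\277')" ]; then
    tail -c +4 tid_demo.conf > tid_demo.tmp && mv tid_demo.tmp tid_demo.conf
fi
head -c 5 tid_demo.conf
```

After this, the file starts directly with the `input` keyword and Logstash can parse it.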

 

[2020-07-18T17:39:00,546][ERROR][logstash.agent           ] Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError",
:message=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after ",
:backtrace=>["/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'",
"org/jruby/RubyArray.java:2577:in `map'", "/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/compiler.rb:10:in
`compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/pipeline_action/create.rb:43:in `block in execute'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/agent.rb:96:in `block in exclusive'",
"org/jruby/ext/thread/Mutex.java:165:in `synchronize'", "/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/agent.rb:96:in `exclusive'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/pipeline_action/create.rb:39:in `execute'",
"/home/elksofts/logstash-6.8.0/logstash-core/lib/logstash/agent.rb:334:in `block in converge_state'"]}