Persisting Alibaba Druid slow SQL monitoring data to logs or Elasticsearch

Preface: this article is based on Spring Boot 2.0.3.RELEASE and Druid 1.0.28. The Spring Boot + Druid project setup is omitted here.

It covers two ways to persist slow SQL monitoring data.

1. Collecting the SQL logs with Logback

Step 1: Make sure the Druid configuration contains the following properties:

spring.datasource.filters: stat,wall,slf4j
spring.datasource.connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000

Step 2: Configure DruidConfig

Add a log filter bean:

@Bean
public Slf4jLogFilter logFilter() {
    Slf4jLogFilter filter = new Slf4jLogFilter();
    // Disable the log events we do not need; the remaining events
    // are written to the druid.sql.* loggers at DEBUG level.
    filter.setResultSetLogEnabled(false);
    filter.setConnectionLogEnabled(false);
    filter.setStatementParameterClearLogEnable(false);
    filter.setStatementCreateAfterLogEnabled(false);
    filter.setStatementCloseAfterLogEnabled(false);
    filter.setStatementParameterSetLogEnabled(false);
    filter.setStatementPrepareAfterLogEnabled(false);
    return filter;
}
Register this filter with the DruidDataSource:
List<Filter> filters = new ArrayList<>();
filters.add(logFilter());
datasource.setProxyFilters(filters);
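
For reference, here is a minimal sketch of a complete DruidConfig class showing where this wiring lives. The property placeholders (spring.datasource.url and friends) are assumptions of this sketch, and the usual pool tuning is omitted; adapt it to your own configuration class.

import java.util.Collections;

import javax.sql.DataSource;

import com.alibaba.druid.filter.logging.Slf4jLogFilter;
import com.alibaba.druid.pool.DruidDataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DruidConfig {

    @Bean
    public Slf4jLogFilter logFilter() {
        // Same filter bean as shown above (abbreviated here).
        Slf4jLogFilter filter = new Slf4jLogFilter();
        filter.setResultSetLogEnabled(false);
        filter.setConnectionLogEnabled(false);
        return filter;
    }

    @Bean
    public DataSource dataSource(@Value("${spring.datasource.url}") String url,
                                 @Value("${spring.datasource.username}") String username,
                                 @Value("${spring.datasource.password}") String password) {
        DruidDataSource datasource = new DruidDataSource();
        datasource.setUrl(url);
        datasource.setUsername(username);
        datasource.setPassword(password);
        // Attach the SLF4J log filter so SQL events go to the druid.sql.* loggers.
        datasource.setProxyFilters(Collections.singletonList(logFilter()));
        return datasource;
    }
}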

Step 3: Add the Logback configuration

<appender name="druidlog" class="ch.qos.logback.core.rolling.RollingFileAppender">
     <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <FileNamePattern>
                 druidlog/%d{yyyy-MM-dd}.log
          </FileNamePattern>
          <MaxHistory>30</MaxHistory>
      </rollingPolicy>
      <encoder>
          <pattern>%msg%n</pattern>
      </encoder>
</appender>
<logger name="druid" level="DEBUG">
        <appender-ref ref="DruidFILE"/>
</logger>

Step 4: Run the project and generate some traffic

The resulting log looks like this:

{conn-10001} connected
{conn-10001} pool-connect
{conn-10001, pstmt-20000} created. select * from ACT_GE_PROPERTY where NAME_ = ?
{conn-10001, pstmt-20000} Parameters : [schema.version]
{conn-10001, pstmt-20000} Types : [VARCHAR]
{conn-10001, pstmt-20000} executed. 43.613649 millis. select * from ACT_GE_PROPERTY where NAME_ = ?
{conn-10001, pstmt-20000, rs-50000} open
{conn-10001, pstmt-20000, rs-50000} Header: [NAME_, VALUE_, REV_]
{conn-10001, pstmt-20000, rs-50000} Result: [schema.version, 5.22.0.0, 1]
{conn-10001, pstmt-20000, rs-50000} closed
{conn-10001, pstmt-20000} clearParameters. 
{conn-10001} pool-recycle
{conn-10001} pool-connect
{conn-10001} pool-recycle
{conn-10001} pool-connect
{conn-10001, pstmt-20001} created. select `id`,`cron_expression`,`method_name`,`is_concurrent`,`description`,`update_by`,`bean_class`,`create_date`,`job_status`,`job_group`,`update_date`,`create_by`,`spring_bean`,`job_name` from sys_task order by id desc
{conn-10001, pstmt-20001} Parameters : []
{conn-10001, pstmt-20001} Types : []
{conn-10001, pstmt-20001} executed. 6.238774 millis. select `id`,`cron_expression`,`method_name`,`is_concurrent`,`description`,`update_by`,`bean_class`,`create_date`,`job_status`,`job_group`,`update_date`,`create_by`,`spring_bean`,`job_name` from sys_task order by id desc
{conn-10001, pstmt-20001, rs-50001} open
{conn-10001, pstmt-20001, rs-50001} Header: [id, cron_expression, method_name, is_concurrent, description, update_by, bean_class, create_date, job_status, job_group, update_date, create_by, spring_bean, job_name]
{conn-10001, pstmt-20001, rs-50001} Result: [2, 0/10 * * * * ?, run1, 1, , 4028ea815a3d2a8c015a3d2f8d2a0002, com.bootdo.common.task.WelcomeJob, 2017-05-19 18:30:56.0, 0, group1, 2017-05-19 18:31:07.0, null, , welcomJob]
{conn-10001, pstmt-20001, rs-50001} closed
{conn-10001, pstmt-20001} clearParameters. 
{conn-10001} pool-recycle
{conn-10001} pool-connect
{conn-10001, pstmt-20002} created. select `cid`,`title`,`slug`,`created`,`modified`,`type`,`tags`,`categories`,`hits`,`comments_num`,`allow_comment`,`allow_ping`,`allow_feed`,`status`,`author`,`gtm_create`,`gtm_modified` from blog_content WHERE type = ? order by cid desc limit 0, 10
{conn-10001, pstmt-20002} Parameters : [article]

2. Persisting the monitoring records with a custom StatLogger and shipping them to ES for analysis

Druid keeps monitoring records for DruidDataSource, Spring, web requests and so on; among these, DruidDataSource provides an API for saving its own monitoring records.

Step 1: Enable periodic output of the DruidDataSource monitoring records

DruidDataSource has a timeBetweenLogStatsMillis property. When timeBetweenLogStatsMillis > 0, DruidDataSource periodically writes its monitoring data to the log.

Modify DruidConfig:

datasource.setTimeBetweenLogStatsMillis(3000);

Or specify it via a JVM startup parameter, for example:

-Ddruid.timeBetweenLogStatsMillis=300000

Step 2: Implement a custom StatLogger

DruidDataSourceStatValue carries many monitoring metrics. Since I mainly care about slow SQL here, I only keep the few fields below; take whichever fields your needs require.

import java.time.LocalDateTime;
import java.util.LinkedHashMap;
import java.util.Map;

import com.alibaba.druid.pool.DruidDataSourceStatLogger;
import com.alibaba.druid.pool.DruidDataSourceStatLoggerAdapter;
import com.alibaba.druid.pool.DruidDataSourceStatValue;
import com.alibaba.druid.stat.JdbcSqlStatValue;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class MyStatLogger extends DruidDataSourceStatLoggerAdapter implements DruidDataSourceStatLogger {

    // Matches the "druidLogger" logger declared in logback.xml below.
    private static final Logger logger = LoggerFactory.getLogger("druidLogger");

    @Override
    public void log(DruidDataSourceStatValue statValue) {
        if (statValue.getSqlList().size() > 0) {
            for (JdbcSqlStatValue sqlStat : statValue.getSqlList()) {
                Map<String, Object> sqlStatMap = new LinkedHashMap<String, Object>();
                // Strip tabs and newlines so every record stays on a single log line.
                String sql = sqlStat.getSql().replace("\t", "");
                sql = sql.replace("\n", "");
                sqlStatMap.put("sql", sql);
                if (sqlStat.getExecuteCount() > 0) {
                    sqlStatMap.put("executeCount", sqlStat.getExecuteCount());
                    sqlStatMap.put("executeMillisMax", sqlStat.getExecuteMillisMax());
                    sqlStatMap.put("executeMillisTotal", sqlStat.getExecuteMillisTotal());
                    sqlStatMap.put("createtime", LocalDateTime.now());
                    sqlStatMap.put("systemName", "SMRZ");
                }
                // ERROR level guarantees the record is emitted whatever level is configured.
                logger.error(sqlStatMap.toString());
            }
        }
    }
}
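
Declaring the class as a @Component does not by itself attach it to Druid; the data source still has to be told to use it. Below is a minimal sketch of that wiring, placed inside the DruidConfig data source bean from part 1 (the rest of the bean definition is abbreviated here):

@Bean
public DataSource dataSource(MyStatLogger myStatLogger) {
    DruidDataSource datasource = new DruidDataSource();
    // ... url / username / password as in the earlier DruidConfig sketch ...
    // Flush the collected statistics every 3 seconds.
    datasource.setTimeBetweenLogStatsMillis(3000);
    // Route the periodic stats through our custom logger instead of Druid's default one.
    datasource.setStatLogger(myStatLogger);
    return datasource;
}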

Step 3: Define the Logback configuration

<appender name="druidlog" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <FileNamePattern>
              druidlog/%d{yyyy-MM-dd}.log
          </FileNamePattern>
          <MaxHistory>30</MaxHistory>
     </rollingPolicy>
     <encoder>
          <pattern>%msg%n</pattern>
     </encoder>
</appender>

<logger name="druidLogger" additivity="false">
      <appender-ref ref="druidlog"/>
</logger>

Step 4: Sample of the custom log output

Each monitoring record is written as a single JSON-like key-value line, which is easy to feed into ES:

{sql=select * from ACT_GE_PROPERTY where NAME_ = ?, executeCount=1, executeMillisMax=45, executeMillisTotal=45, createtime=2019-08-02T11:42:44.106, systemName=SMRZ}
{sql=select `id`,`cron_expression`,`method_name`,`is_concurrent`,`description`,`update_by`,`bean_class`,`create_date`,`job_status`,`job_group`,`update_date`,`create_by`,`spring_bean`,`job_name` from sys_task                                   order by id desc, executeCount=1, executeMillisMax=6, executeMillisTotal=6, createtime=2019-08-02T11:42:50.107, systemName=SMRZ}
{sql=select `cid`,`title`,`slug`,`created`,`modified`,`type`,`tags`,`categories`,`hits`,`comments_num`,`allow_comment`,`allow_ping`,`allow_feed`,`status`,`author`,`gtm_create`,`gtm_modified` from blog_content         WHERE  type = ?                          order by cid desc  limit 0, 10, executeCount=1, executeMillisMax=40, executeMillisTotal=40, createtime=2019-08-02T11:43:05.110, systemName=SMRZ}
{sql=select count(*) from blog_content  WHERE  type = ?, executeCount=1, executeMillisMax=5, executeMillisTotal=5, createtime=2019-08-02T11:43:05.114, systemName=SMRZ}
{sql=select        `user_id`,`username`,`name`,`password`,`dept_id`,`email`,`mobile`,`status`,`user_id_create`,`gmt_create`,`gmt_modified`,`sex`,`birth`,`pic_id`,`live_address`,`hobby`,`province`,`city`,`district`        from sys_user         WHERE  username = ?                          order by user_id desc, executeCount=1, executeMillisMax=69, executeMillisTotal=69, createtime=2019-08-02T11:43:08.114, systemName=SMRZ}
{sql=insert into sys_log(`user_id`, `username`, `operation`, `time`, `method`, `params`, `ip`, `gmt_create`)values(?, ?, ?, ?, ?, ?, ?, ?), executeCount=1, executeMillisMax=300, executeMillisTotal=300, createtime=2019-08-02T11:43:08.114, systemName=SMRZ}
{sql=select distinctm.menu_id , parent_id, name, url,perms,`type`,icon,order_num,gmt_create, gmt_modifiedfrom sys_menu mleftjoin sys_role_menu rm on m.menu_id = rm.menu_id left joinsys_user_role uron rm.role_id =ur.role_id where ur.user_id = ?andm.type in(0,1)order bym.order_num}
{sql=insert into sys_log(`user_id`, `username`, `operation`, `time`, `method`, `params`, `ip`, `gmt_create`)values(?, ?, ?, ?, ?, ?, ?, ?), executeCount=1, executeMillisMax=42, executeMillisTotal=42, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select distinctm.menu_id , parent_id, name, url,perms,`type`,icon,order_num,gmt_create, gmt_modifiedfrom sys_menu mleftjoin sys_role_menu rm on m.menu_id = rm.menu_id left joinsys_user_role uron rm.role_id =ur.role_id where ur.user_id = ?andm.type in(0,1)order bym.order_num, executeCount=1, executeMillisMax=278, executeMillisTotal=278, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select `id`,`type`,`url`,`create_date` from sys_file where id = ?, executeCount=1, executeMillisMax=28, executeMillisTotal=28, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select DISTINCTn.id ,`type`,`title`,`content`,`files`,r.is_read,`status`,`create_by`,`create_date`,`update_by`,`update_date`,`remarks`,`del_flag`from oa_notify_record r right JOIN oa_notify n on r.notify_id = n.id WHERE  r.is_read = ?  and r.user_id = ? order by is_read ASC, update_date DESC limit ?, ?, executeCount=1, executeMillisMax=41, executeMillisTotal=41, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select count(*)fromoa_notify_record r right JOIN oa_notify n on r.notify_id= n.id wherer.user_id =? andr.is_read = ?, executeCount=1, executeMillisMax=7, executeMillisTotal=7, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select table_name tableName, engine, table_comment tableComment, create_time createTime from information_schema.tables where table_schema = (select database()), executeCount=1, executeMillisMax=7, executeMillisTotal=7, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
{sql=select `id`,`cron_expression`,`method_name`,`is_concurrent`,`description`,`update_by`,`bean_class`,`create_date`,`job_status`,`job_group`,`update_date`,`create_by`,`spring_bean`,`job_name` from sys_task                                   order by id desc  limit ?, ?, executeCount=1, executeMillisMax=5, executeMillisTotal=5, createtime=2019-08-02T11:43:11.115, systemName=SMRZ}
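
Strictly speaking, the lines above are the Map's toString() output rather than real JSON. If the downstream pipeline expects valid JSON, one option is to serialize the map before logging it. The following is only a sketch of additions to MyStatLogger; it assumes Jackson and its jackson-datatype-jsr310 module are on the classpath (both ship with the Spring Boot web starter):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

// Additions inside MyStatLogger: one shared mapper plus a small helper.
private static final ObjectMapper MAPPER =
        new ObjectMapper().registerModule(new JavaTimeModule()); // JavaTimeModule handles LocalDateTime

private static String toJson(Map<String, Object> sqlStatMap) {
    try {
        return MAPPER.writeValueAsString(sqlStatMap);
    } catch (JsonProcessingException e) {
        // Fall back to the plain toString() form if serialization ever fails.
        return sqlStatMap.toString();
    }
}

// and in log(): logger.error(toJson(sqlStatMap));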

Step 5: Shipping the log into Elasticsearch

Two options:

1. Integrate Logstash into the project and send the records directly to Elasticsearch.

2. Write log files and ship them with Filebeat ----> Logstash ----> Elasticsearch (recommended).

The detailed setup for this step is not covered here.

Step 6: Viewing the data in Elasticsearch

Example:

With the data indexed, you can now use Kibana to analyze it.

Summary:

Both approaches can persist the slow SQL data.

The first approach only produces raw log files; the entries are not in a structured format, so fine-grained analysis is difficult.

I prefer the second approach: the log format is fully customizable, and the records can be written to whatever store you need, such as MySQL, MongoDB, or Elasticsearch.

Reference:

https://github.com/alibaba/druid
