How Druid Implements Monitoring and Other Plugins, and the Filter Pattern

Druid's feature set needs little introduction: its core job is to be a database connection pool, but it also offers rich monitoring, logging, and firewall (SQL wall) capabilities. All of these extra features exist as plugins and can be customized freely.

This article focuses on how the monitoring, logging, and similar plugins are implemented, and on how they are wired into Druid.

1. Using Druid

Let's walk through a typical workflow for using the Druid connection pool. First, configure the pool:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
		http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
		http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
	<!-- Spring configuration used by the web application -->
	<bean id="propertyConfigurer"
		class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
		<property name="locations">
			<list>
				<value>/WEB-INF/classes/dbconfig.properties</value>
			</list>
		</property>
	</bean>
	<bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
		destroy-method="close">
		<property name="url" value="${url}" />
		<property name="username" value="${username}" />
		<property name="password" value="${password}" />
		<property name="driverClassName" value="${driverClassName}" />
		<property name="filters" value="${filters}" />
 
		<property name="maxActive" value="${maxActive}" />
		<property name="initialSize" value="${initialSize}" />
		<property name="maxWait" value="${maxWait}" />
		<property name="minIdle" value="${minIdle}" />
 
		<property name="timeBetweenEvictionRunsMillis" value="${timeBetweenEvictionRunsMillis}" />
		<property name="minEvictableIdleTimeMillis" value="${minEvictableIdleTimeMillis}" />
 
		<property name="validationQuery" value="${validationQuery}" />
		<property name="testWhileIdle" value="${testWhileIdle}" />
		<property name="testOnBorrow" value="${testOnBorrow}" />
		<property name="testOnReturn" value="${testOnReturn}" />
		<property name="maxOpenPreparedStatements"
			value="${maxOpenPreparedStatements}" />
		<property name="removeAbandoned" value="${removeAbandoned}" /> <!-- enable the removeAbandoned feature -->
	    <property name="removeAbandonedTimeout" value="${removeAbandonedTimeout}" /> <!-- 1800 seconds, i.e. 30 minutes -->
	    <property name="logAbandoned" value="${logAbandoned}" /> <!-- log an error when an abandoned connection is closed -->
	</bean>
	
	
	<!-- jdbcTemplate -->
	<bean id="jdbc" class="org.springframework.jdbc.core.JdbcTemplate">
		<property name="dataSource">
			<ref bean="dataSource" />
		</property>
	</bean>
	
</beans>
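For reference, the ${...} placeholders above are resolved from /WEB-INF/classes/dbconfig.properties. A minimal example might look like the following (the values are illustrative only, not recommendations); the filters property is what enables the plugins this article discusses, using Druid's built-in aliases such as stat (monitoring), wall (SQL firewall), and slf4j (logging):

url=jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=utf8
username=root
password=secret
driverClassName=com.mysql.jdbc.Driver
# comma-separated filter aliases; this is what turns the monitoring/logging/firewall plugins on
filters=stat,wall,slf4j

maxActive=20
initialSize=1
maxWait=60000
minIdle=1
timeBetweenEvictionRunsMillis=60000
minEvictableIdleTimeMillis=300000
validationQuery=SELECT 1
testWhileIdle=true
testOnBorrow=false
testOnReturn=false
maxOpenPreparedStatements=20
removeAbandoned=true
removeAbandonedTimeout=1800
logAbandoned=true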

Then obtain a connection and execute SQL:

// Obtain a database connection from the Druid data source
Connection conn = dataSource.getConnection();
// Create a prepared statement (note: JDBC parameter indexes start at 1)
PreparedStatement stmt = conn.prepareStatement("select * from table_a where id = ?");
stmt.setInt(1, 1);
// Execute
stmt.execute();
stmt.close();
conn.close();

2. How the Monitoring Features Are Implemented

We know Druid provides rich monitoring features. How are they implemented?

The short answer first: it comes down to two things.

1. The proxy pattern controls access to the JDBC objects. In Druid, Statement, PreparedStatement, Connection, and related objects are always accessed through proxies; the corresponding proxy classes are StatementProxyImpl, PreparedStatementProxyImpl, and ConnectionProxyImpl.

2. A filter chain layers the monitoring and logging functionality on top. The following sections show how both pieces fit together.

 

2.1 Creating a connection

Let's look at the data source initialization code, located in com.alibaba.druid.pool.DruidDataSource#init.

The key parts are excerpted below, with comments added where relevant. The connection is obtained through the PhysicalConnectionInfo class, and DruidConnectionHolder wraps the underlying connection.

public void init() throws SQLException {
        if (inited) {
            return;
        }

        // bug fixed for dead lock, for issue #2980
        DruidDriver.getInstance();
        // acquire the lock before initializing
        final ReentrantLock lock = this.lock;
        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            throw new SQLException("interrupt", e);
        }

        boolean init = false;
        try {
            if (inited) {
                return;
            }

            initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());

            this.id = DruidDriver.createDataSourceId();
            if (this.id > 1) {
                long delta = (this.id - 1) * 100000;
                this.connectionIdSeedUpdater.addAndGet(this, delta);
                this.statementIdSeedUpdater.addAndGet(this, delta);
                this.resultSetIdSeedUpdater.addAndGet(this, delta);
                this.transactionIdSeedUpdater.addAndGet(this, delta);
            }

            if (this.jdbcUrl != null) {
                this.jdbcUrl = this.jdbcUrl.trim();
                initFromWrapDriverUrl();
            }

            // initialize the filters
            for (Filter filter : filters) {
                filter.init(this);
            }

            if (this.dbType == null || this.dbType.length() == 0) {
                this.dbType = JdbcUtils.getDbType(jdbcUrl, null);
            }

            if (JdbcConstants.MYSQL.equals(this.dbType)
                    || JdbcConstants.MARIADB.equals(this.dbType)
                    || JdbcConstants.ALIYUN_ADS.equals(this.dbType)) {
                boolean cacheServerConfigurationSet = false;
                if (this.connectProperties.containsKey("cacheServerConfiguration")) {
                    cacheServerConfigurationSet = true;
                } else if (this.jdbcUrl.indexOf("cacheServerConfiguration") != -1) {
                    cacheServerConfigurationSet = true;
                }
                if (cacheServerConfigurationSet) {
                    this.connectProperties.put("cacheServerConfiguration", "true");
                }
            }

            if (maxActive <= 0) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (maxActive < minIdle) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (getInitialSize() > maxActive) {
                throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive " + maxActive);
            }

            if (timeBetweenLogStatsMillis > 0 && useGlobalDataSourceStat) {
                throw new IllegalArgumentException("timeBetweenLogStatsMillis not support useGlobalDataSourceStat=true");
            }

            if (maxEvictableIdleTimeMillis < minEvictableIdleTimeMillis) {
                throw new SQLException("maxEvictableIdleTimeMillis must be grater than minEvictableIdleTimeMillis");
            }

            if (this.driverClass != null) {
                this.driverClass = driverClass.trim();
            }
            // load additional filters via the SPI ServiceLoader
            initFromSPIServiceLoader();

            if (this.driver == null) {
                if (this.driverClass == null || this.driverClass.isEmpty()) {
                    this.driverClass = JdbcUtils.getDriverClassName(this.jdbcUrl);
                }

                if (MockDriver.class.getName().equals(driverClass)) {
                    driver = MockDriver.instance;
                } else {
                    if (jdbcUrl == null && (driverClass == null || driverClass.length() == 0)) {
                        throw new SQLException("url not set");
                    }
                    driver = JdbcUtils.createDriver(driverClassLoader, driverClass);
                }
            } else {
                if (this.driverClass == null) {
                    this.driverClass = driver.getClass().getName();
                }
            }

            initCheck();

            initExceptionSorter();
            // set up the connection validity checker, i.e. whether the database can be pinged
            initValidConnectionChecker();
            validationQueryCheck();

            if (isUseGlobalDataSourceStat()) {
                dataSourceStat = JdbcDataSourceStat.getGlobal();
                if (dataSourceStat == null) {
                    dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbType);
                    JdbcDataSourceStat.setGlobal(dataSourceStat);
                }
                if (dataSourceStat.getDbType() == null) {
                    dataSourceStat.setDbType(this.dbType);
                }
            } else {
                dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbType, this.connectProperties);
            }
            dataSourceStat.setResetStatEnable(this.resetStatEnable);
            // declare the pool arrays; DruidConnectionHolder wraps the underlying connection
            connections = new DruidConnectionHolder[maxActive];
            evictConnections = new DruidConnectionHolder[maxActive];
            keepAliveConnections = new DruidConnectionHolder[maxActive];

            SQLException connectError = null;

            if (createScheduler != null && asyncInit) {
                for (int i = 0; i < initialSize; ++i) {
                    submitCreateTask(true);
                }
            } else if (!asyncInit) {
                // init connections: synchronous initialization; all initial connections are created at application startup
                while (poolingCount < initialSize) {
                    try {
                        // obtain a database connection using the user-configured settings; PhysicalConnectionInfo wraps the actual connection (a ConnectionProxy when filters are configured)
                        PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                        // build the holder; DruidConnectionHolder wraps the connection for Druid's pool, so it is worth a closer look
                        DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                        connections[poolingCount++] = holder;
                    } catch (SQLException ex) {
                        LOG.error("init datasource error, url: " + this.getUrl(), ex);
                        if (initExceptionThrow) {
                            connectError = ex;
                            break;
                        } else {
                            Thread.sleep(3000);
                        }
                    }
                }

                if (poolingCount > 0) {
                    poolingPeak = poolingCount;
                    poolingPeakTime = System.currentTimeMillis();
                }
            }

            createAndLogThread();
            createAndStartCreatorThread();
            createAndStartDestroyThread();

            initedLatch.await();
            init = true;

            initedTime = new Date();
            registerMbean();

            if (connectError != null && poolingCount == 0) {
                throw connectError;
            }

            if (keepAlive) {
                // async fill to minIdle
                if (createScheduler != null) {
                    for (int i = 0; i < minIdle; ++i) {
                        submitCreateTask(true);
                    }
                } else {
                    this.emptySignal();
                }
            }

        } catch (SQLException e) {
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (InterruptedException e) {
            throw new SQLException(e.getMessage(), e);
        } catch (RuntimeException e){
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (Error e){
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;

        } finally {
            inited = true;
            lock.unlock();

            if (init && LOG.isInfoEnabled()) {
                String msg = "{dataSource-" + this.getID();

                if (this.name != null && !this.name.isEmpty()) {
                    msg += ",";
                    msg += this.name;
                }

                msg += "} inited";

                LOG.info(msg);
            }
        }
    }

Sequence diagram for obtaining a database connection (figure omitted).

 

Obtaining the connection in DruidAbstractDataSource.java:

public PhysicalConnectionInfo createPhysicalConnection() throws SQLException {
        String url = this.getUrl();
        Properties connectProperties = getConnectProperties();
        Properties physicalConnectProperties = new Properties();
        Connection conn = null;
        // NOTE: this excerpt omits several local variable declarations from the original method
        // (e.g. connectStartNanos, connectedNanos, initedNanos, validatedNanos, variables, globalVariables)

        createStartNanosUpdater.set(this, connectStartNanos);
        creatingCountUpdater.incrementAndGet(this);
        try {
            // the connection is obtained through this method (shown below)
            conn = createPhysicalConnection(url, physicalConnectProperties);

            if (conn == null) {
                throw new SQLException("connect error, url " + url + ", driverClass " + this.driverClass);
            }

            initPhysicalConnection(conn, variables, globalVariables);
            initedNanos = System.nanoTime();

            validateConnection(conn);
            validatedNanos = System.nanoTime();

            setFailContinuous(false);
            setCreateError(null);
        } catch (SQLException ex) {
            setCreateError(ex);
            JdbcUtils.close(conn);
            throw ex;
        } catch (RuntimeException ex) {
            setCreateError(ex);
            JdbcUtils.close(conn);
            throw ex;
        } catch (Error ex) {
            createErrorCountUpdater.incrementAndGet(this);
            setCreateError(ex);
            JdbcUtils.close(conn);
            throw ex;
        } finally {
            long nano = System.nanoTime() - connectStartNanos;
            createTimespan += nano;
            creatingCountUpdater.decrementAndGet(this);
        }

        return new PhysicalConnectionInfo(conn, connectStartNanos, connectedNanos, initedNanos, validatedNanos, variables, globalVariables);
    }


// The code above calls the following method to obtain the connection.
public Connection createPhysicalConnection(String url, Properties info) throws SQLException {
        Connection conn;
        if (getProxyFilters().size() == 0) {
            conn = getDriver().connect(url, info);
        } else {
            // if filters are configured, the connections need to be proxied; otherwise the raw database connection is exposed directly
            conn = new FilterChainImpl(this).connection_connect(info);
        }

        createCountUpdater.incrementAndGet(this);

        return conn;
    }

 

Now let's focus on the filter chain implementation, FilterChainImpl.java.

This class provides the default implementation for each JDBC operation; the default simply works with the raw JDBC classes, without any of the enhanced behavior.

public ConnectionProxy connection_connect(Properties info) throws SQLException {
        // invoke the filters in the chain; if a filter has nothing specific to do it passes the call down, eventually reaching the default implementation below
        if (this.pos < filterSize) {
            return nextFilter()
                    .connection_connect(this, info);
        }
        // the default implementation; only certain filters need to enhance the Connection
        Driver driver = dataSource.getRawDriver();
        String url = dataSource.getRawJdbcUrl();

        Connection nativeConnection = driver.connect(url, info);

        if (nativeConnection == null) {
            return null;
        }

        return new ConnectionProxyImpl(dataSource, nativeConnection, info, dataSource.createConnectionId());
    }

The Filter class hierarchy and its responsibilities

  • The Filter interface defines methods for every JDBC-related concept (connection, statement, result set).
  • FilterAdapter provides the default Filter implementation, which simply delegates back to the methods in FilterChainImpl.
  • FilterEventAdapter extends FilterAdapter and adds extension points; for example, around connection_connect you can run custom logic before and after the connection is obtained.

Some of Druid's filters extend FilterAdapter, while others extend FilterEventAdapter to implement extended functionality. The class diagram only includes the typical filters, not all of them. A sketch of a custom filter built on these extension points is shown below.
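As a quick illustration of those extension points, here is a sketch of a hypothetical custom filter built on FilterEventAdapter. The hook names (connection_connectBefore/connection_connectAfter and statementExecuteBefore/statementExecuteAfter) follow the Druid versions I have looked at and may differ slightly in yours; the class itself and what it logs are made up for the example:

import java.util.Properties;

import com.alibaba.druid.filter.FilterChain;
import com.alibaba.druid.filter.FilterEventAdapter;
import com.alibaba.druid.proxy.jdbc.ConnectionProxy;
import com.alibaba.druid.proxy.jdbc.StatementProxy;

// Hypothetical example filter: times physical connection creation and logs executed SQL.
public class TimingLogFilter extends FilterEventAdapter {
    private final ThreadLocal<Long> connectStart = new ThreadLocal<Long>();

    @Override
    public void connection_connectBefore(FilterChain chain, Properties info) {
        // runs before the chain falls through to the raw driver
        connectStart.set(System.nanoTime());
    }

    @Override
    public void connection_connectAfter(ConnectionProxy connection) {
        // runs after the physical connection has been created and wrapped in a proxy
        Long start = connectStart.get();
        connectStart.remove();
        if (start != null) {
            long costMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("physical connection created in " + costMs + " ms");
        }
    }

    @Override
    public void statementExecuteBefore(StatementProxy statement, String sql) {
        System.out.println("about to execute: " + sql);
    }

    @Override
    public void statementExecuteAfter(StatementProxy statement, String sql, boolean firstResult) {
        System.out.println("finished executing: " + sql);
    }
}

A filter like this would typically be registered through the data source's proxyFilters property or discovered via SPI (initFromSPIServiceLoader in the init code above).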

 

The ConnectionProxyImpl class diagram

 

2.2 Creating a PreparedStatement

Let's look at the sequence diagram first. It is very similar to the connection-creation flow above; once you understand that flow, this one follows naturally. The other JDBC operations work the same way, and that is Druid's overall approach.

 

Under the hood, FilterChainImpl and the Filters are again used to produce the proxy class for the PreparedStatement, and the proxy is where custom enhancements can be added. A rough sketch of the chain method is shown below.
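In simplified form, the chain method that creates the proxy has the same two-part shape as connection_connect above. This is a paraphrase to show the structure, not the exact Druid source; the wrapping details (constructor arguments, helper methods) vary by version:

// simplified paraphrase of FilterChainImpl.connection_prepareStatement, not the exact source
public PreparedStatementProxy connection_prepareStatement(ConnectionProxy connection, String sql)
        throws SQLException {
    // part 1: let the remaining filters intercept (and possibly enhance) the call
    if (this.pos < filterSize) {
        return nextFilter().connection_prepareStatement(this, connection, sql);
    }

    // part 2: the default implementation prepares on the raw connection, then wraps it in a proxy
    PreparedStatement rawStatement = connection.getRawObject().prepareStatement(sql);
    return new PreparedStatementProxyImpl(connection, rawStatement, sql, dataSource.createStatementId());
}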

 

2.3 Executing the PreparedStatement

When business code calls preparedStatement.execute(), it is actually calling the execute method of PreparedStatementProxyImpl.

PreparedStatementProxyImpl in turn calls FilterChainImpl.preparedStatement_execute to do the actual work.

As you can see, the idea is the same as in the two flows above: the proxy pattern controls access to the object, and the filter pattern attaches the extra functionality. A simplified view of the proxy's execute method follows.
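Stripped of bookkeeping, the proxy side looks roughly like this. It is a paraphrase: the real method also records the last executed SQL, the execute type, and timing fields before delegating, and the chain-building helper (createChain() in the versions I have looked at) may differ in yours:

// paraphrased sketch of PreparedStatementProxyImpl.execute
@Override
public boolean execute() throws SQLException {
    // build a FilterChainImpl over the configured filters and push the call through it;
    // when no filter intercepts, the chain's default implementation calls the raw PreparedStatement
    return createChain().preparedStatement_execute(this);
}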

 

3. FilterChain

As mentioned above, the Filter interface defines the JDBC-related methods; FilterChain defines the corresponding methods plus some extensions. FilterChain is Druid's entry point for all JDBC-related operations.

FilterChainImpl provides the default implementation.

Every method in FilterChainImpl has the same two-part structure: the first part looks for an implementation among the concrete Filters, and the second part provides the default implementation.

If the application has not configured any Filters, or the configured Filters do not enhance a particular operation, execution falls through to the second part, as in the sketch below.
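For example, the execute call from section 2.3 lands in a chain method with exactly this two-part structure (again a simplified paraphrase of the Druid source):

// simplified paraphrase of FilterChainImpl.preparedStatement_execute
public boolean preparedStatement_execute(PreparedStatementProxy statement) throws SQLException {
    // part 1: give the remaining filters (stat, log, wall, ...) a chance to intercept
    if (this.pos < filterSize) {
        return nextFilter().preparedStatement_execute(this, statement);
    }
    // part 2: the default implementation invokes the raw JDBC PreparedStatement
    return statement.getRawObject().execute();
}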

 

4. The filter pattern

Another classic application of the filter pattern is javax.servlet.FilterChain, commonly used in web development to execute a configured series of Filters.

In Tomcat, the org.apache.catalina.core.ApplicationFilterChain class implements the FilterChain interface and executes the filters. When a request comes in, the matching filters and the target servlet are placed into the filter chain.

Here is the implementation:

@Override
    public void doFilter(ServletRequest request, ServletResponse response)
        throws IOException, ServletException {

        if( Globals.IS_SECURITY_ENABLED ) {
            final ServletRequest req = request;
            final ServletResponse res = response;
            try {
                java.security.AccessController.doPrivileged(
                    new java.security.PrivilegedExceptionAction<Void>() {
                        @Override
                        public Void run()
                            throws ServletException, IOException {
                            internalDoFilter(req,res);
                            return null;
                        }
                    }
                );
            } catch( PrivilegedActionException pe) {
                Exception e = pe.getException();
                if (e instanceof ServletException)
                    throw (ServletException) e;
                else if (e instanceof IOException)
                    throw (IOException) e;
                else if (e instanceof RuntimeException)
                    throw (RuntimeException) e;
                else
                    throw new ServletException(e.getMessage(), e);
            }
        } else {
            internalDoFilter(request,response);
        }
    }

    // this is where the filters are actually executed
    private void internalDoFilter(ServletRequest request,
                                  ServletResponse response)
        throws IOException, ServletException {

        // Call the next filter if there is one
        // if there are still filters left to run; pos is the index into the filter array
        if (pos < n) {
            // the filters matching this request were placed into the array beforehand
            ApplicationFilterConfig filterConfig = filters[pos++];
            try {
                Filter filter = filterConfig.getFilter();

                if (request.isAsyncSupported() && "false".equalsIgnoreCase(
                        filterConfig.getFilterDef().getAsyncSupported())) {
                    request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR, Boolean.FALSE);
                }
                if( Globals.IS_SECURITY_ENABLED ) {
                    final ServletRequest req = request;
                    final ServletResponse res = response;
                    Principal principal =
                        ((HttpServletRequest) req).getUserPrincipal();

                    Object[] args = new Object[]{req, res, this};
                    SecurityUtil.doAsPrivilege ("doFilter", filter, classType, args, principal);
                } else {
                    // invoke the filter, passing the chain itself along so it can trigger the next filter
                    filter.doFilter(request, response, this);
                }
            } catch (IOException | ServletException | RuntimeException e) {
                throw e;
            } catch (Throwable e) {
                e = ExceptionUtils.unwrapInvocationTargetException(e);
                ExceptionUtils.handleThrowable(e);
                throw new ServletException(sm.getString("filterChain.filter"), e);
            }
            return;
        }

        // We fell off the end of the chain -- call the servlet instance
        // once all filters have run, call the servlet's service method; because this call is embedded in the filter chain, a filter can control access to the servlet and can also do work after the servlet has executed
        try {
            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
                lastServicedRequest.set(request);
                lastServicedResponse.set(response);
            }

            if (request.isAsyncSupported() && !servletSupportsAsync) {
                request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR,
                        Boolean.FALSE);
            }
            // Use potentially wrapped request from this point
            if ((request instanceof HttpServletRequest) &&
                    (response instanceof HttpServletResponse) &&
                    Globals.IS_SECURITY_ENABLED ) {
                final ServletRequest req = request;
                final ServletResponse res = response;
                Principal principal =
                    ((HttpServletRequest) req).getUserPrincipal();
                Object[] args = new Object[]{req, res};
                SecurityUtil.doAsPrivilege("service",
                                           servlet,
                                           classTypeUsedInService,
                                           args,
                                           principal);
            } else {
                // execute the servlet
                servlet.service(request, response);
            }
        } catch (IOException | ServletException | RuntimeException e) {
            throw e;
        } catch (Throwable e) {
            e = ExceptionUtils.unwrapInvocationTargetException(e);
            ExceptionUtils.handleThrowable(e);
            throw new ServletException(sm.getString("filterChain.servlet"), e);
        } finally {
            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
                lastServicedRequest.set(null);
                lastServicedResponse.set(null);
            }
        }
    }

To sum up, the filter pattern needs two components: a chain executor (FilterChain) and the filters themselves (Filter).

The FilterChain executes the filters in order and provides a common pattern or framework. It can supply a default implementation, and it can do extra work before or after all the filters have run.

A Filter can decide whether the chain continues, which means it can short-circuit the chain, and it can also do work before passing the call further down. The minimal sketch below shows both components working together.
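To make the pattern concrete outside any framework, here is a minimal, self-contained sketch; all the names are made up for illustration. The chain walks an ordered list of filters and falls through to a default target, and each filter decides whether to call chain.proceed():

import java.util.List;

// A minimal illustration of the filter pattern; all names here are hypothetical.
interface Handler {
    String handle(String request);
}

interface Filter {
    // a filter may do work before/after the rest of the chain, or skip chain.proceed() to short-circuit
    String doFilter(String request, FilterChain chain);
}

final class FilterChain {
    private final List<Filter> filters;
    private final Handler target; // the default implementation at the end of the chain
    private int pos = 0;          // index of the next filter to run

    FilterChain(List<Filter> filters, Handler target) {
        this.filters = filters;
        this.target = target;
    }

    String proceed(String request) {
        if (pos < filters.size()) {
            // part 1: hand the call to the next filter, passing the chain itself along
            return filters.get(pos++).doFilter(request, this);
        }
        // part 2: every filter has run (or passed the call through), so invoke the real target
        return target.handle(request);
    }
}

public class FilterPatternDemo {
    public static void main(String[] args) {
        Filter firewall = (request, chain) ->
                request.contains("drop table") ? "blocked" : chain.proceed(request); // may short-circuit
        Filter logging = (request, chain) -> {
            System.out.println("before: " + request);  // work before passing the call down
            String response = chain.proceed(request);  // continue the chain
            System.out.println("after: " + response);  // work after the target has run
            return response;
        };

        FilterChain chain = new FilterChain(List.of(firewall, logging), request -> "handled " + request);
        System.out.println(chain.proceed("select 1"));
    }
}

Druid's FilterChainImpl and Tomcat's ApplicationFilterChain are both elaborations of this same skeleton: an index into the filter list, a hand-off that passes the chain along, and a default action at the end.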
