The previous post covered Druid's initialization. This post continues with the source code for acquiring a connection, based on version 1.1.6.
public DruidPooledConnection getConnection(long maxWaitMillis) throws SQLException {
    init(); // initialize first; skipped if already initialized
    if (filters.size() > 0) { // if there is a filter chain, go through it; it ultimately calls getConnectionDirect as well
        FilterChainImpl filterChain = new FilterChainImpl(this);
        return filterChain.dataSource_connect(this, maxWaitMillis);
    } else {
        return getConnectionDirect(maxWaitMillis);
    }
}
It checks for a filter chain: if one is configured it is used, otherwise getConnectionDirect is called straight away.
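The dispatch through FilterChainImpl is the classic chain-of-responsibility pattern: each filter may wrap the call, and the end of the chain falls through to the data source itself. Here is a minimal, self-contained sketch of that pattern; the MiniFilter/MiniChain names are invented for illustration and are not Druid APIs.

```java
import java.util.List;

interface MiniFilter {
    String connect(MiniChain chain);
}

class MiniChain {
    private final List<MiniFilter> filters;
    private int pos = 0;

    MiniChain(List<MiniFilter> filters) {
        this.filters = filters;
    }

    String connect() {
        if (pos < filters.size()) {
            // hand control to the next filter; it calls chain.connect() to continue
            return filters.get(pos++).connect(this);
        }
        return "direct"; // end of chain: analogous to getConnectionDirect()
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        MiniFilter logging = chain -> "log(" + chain.connect() + ")";
        System.out.println(new MiniChain(List.of(logging)).connect()); // log(direct)
        System.out.println(new MiniChain(List.of()).connect());        // direct
    }
}
```

With no filters the call goes straight to the terminal action, which mirrors the `filters.size() > 0` branch in getConnection above.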
public DruidPooledConnection getConnectionDirect(long maxWaitMillis) throws SQLException {
    int notFullTimeoutRetryCnt = 0;
    for (;;) {
        // handle notFullTimeoutRetry
        DruidPooledConnection poolableConnection;
        try {
            poolableConnection = getConnectionInternal(maxWaitMillis);
        } catch (GetConnectionTimeoutException ex) {
            if (notFullTimeoutRetryCnt <= this.notFullTimeoutRetryCount && !isFull()) {
                notFullTimeoutRetryCnt++;
                if (LOG.isWarnEnabled()) {
                    LOG.warn("get connection timeout retry : " + notFullTimeoutRetryCnt);
                }
                continue;
            }
            throw ex;
        }
        if (testOnBorrow) { // validate the connection on borrow; idle validation is covered separately by testWhileIdle
            // ... (validation logic elided)
        }
        if (removeAbandoned) { // detect and remove abandoned connections
            // ... (elided)
        }
        // ... (remainder elided)
    }
}
The core method is getConnectionInternal; the branches after it are validity checks driven by configuration, which makes it clear how the configured parameters actually take effect.
private DruidPooledConnection getConnectionInternal(long maxWait) throws SQLException {
    // parameter/state validation elided
    // ...
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        connectErrorCountUpdater.incrementAndGet(this);
        throw new SQLException("interrupt", e);
    }
    try {
        if (maxWaitThreadCount > 0
                && notEmptyWaitThreadCount >= maxWaitThreadCount) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw new SQLException("maxWaitThreadCount " + maxWaitThreadCount + ", current wait Thread count "
                    + lock.getQueueLength());
        }
        /* exception-throwing branches elided ... */
        connectCount++;
        if (maxWait > 0) {
            holder = pollLast(nanos); // nanos is derived from maxWait in the elided code
        } else {
            holder = takeLast();
        }
        // ...
        holder.incrementUseCount();
        DruidPooledConnection poolalbeConnection = new DruidPooledConnection(holder);
        return poolalbeConnection;
    }
    // finally { lock.unlock(); } and the rest elided
}
The code is trimmed down to the main logic: first a DruidConnectionHolder is taken from the pool via takeLast() (or pollLast() when a wait timeout is set), then it is wrapped into a DruidPooledConnection object and returned. Now look at the method that takes the holder:
DruidConnectionHolder takeLast() throws InterruptedException, SQLException {
    try {
        while (poolingCount == 0) {
            emptySignal(); // send signal to CreateThread create connection
            if (failFast && failContinuous.get()) {
                throw new DataSourceNotAvailableException(createError);
            }
            notEmptyWaitThreadCount++;
            if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
            }
            try {
                notEmpty.await(); // signal by recycle or creator
            } finally {
                notEmptyWaitThreadCount--;
            }
            notEmptyWaitCount++;
            if (!enable) {
                connectErrorCountUpdater.incrementAndGet(this);
                throw new DataSourceDisableException();
            }
        }
    } catch (InterruptedException ie) {
        notEmpty.signal(); // propagate to non-interrupted thread
        notEmptySignalCount++;
        throw ie;
    }
    decrementPoolingCount();
    DruidConnectionHolder last = connections[poolingCount];
    connections[poolingCount] = null;
    return last;
}
The Condition objects used here are declared in DruidAbstractDataSource:
protected Condition notEmpty;
protected Condition empty;
The rough logic: first check the pooled-connection count. If it has dropped to 0, the current thread has to park: it fires the empty signal (asking the creator thread for a new connection) and then waits for the notEmpty signal via notEmpty.await(). If a connection is available, the last element of the array is taken and poolingCount is decremented.
To fill in the other side of that wait: notEmpty is signalled when CreateConnectionTask puts a connection into the data source's pool:
private boolean put(DruidConnectionHolder holder) {
    lock.lock();
    try {
        if (poolingCount >= maxActive) {
            return false;
        }
        connections[poolingCount] = holder;
        incrementPoolingCount();
        if (poolingCount > poolingPeak) {
            poolingPeak = poolingCount;
            poolingPeakTime = System.currentTimeMillis();
        }
        notEmpty.signal();
        notEmptySignalCount++;
        if (createScheduler != null) {
            createTaskCount--;
            if (poolingCount + createTaskCount < notEmptyWaitThreadCount //
                    && activeCount + poolingCount + createTaskCount < maxActive) {
                emptySignal();
            }
        }
    } finally {
        lock.unlock();
    }
    return true;
}
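The empty/notEmpty handshake between takeLast() and put() boils down to a small runnable model. This is a deliberately simplified sketch, assuming a fixed-size array and none of Druid's counters, timeouts, fail-fast checks, or creator-thread scheduling:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TinyPool {
    private final Object[] connections = new Object[8];
    private int poolingCount = 0;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition empty = lock.newCondition();

    Object takeLast() throws InterruptedException {
        lock.lock();
        try {
            while (poolingCount == 0) {
                empty.signal();   // ask the creator side for a new connection
                notEmpty.await(); // park until put() signals notEmpty
            }
            --poolingCount;
            Object last = connections[poolingCount];
            connections[poolingCount] = null; // take the last slot, LIFO
            return last;
        } finally {
            lock.unlock();
        }
    }

    boolean put(Object holder) {
        lock.lock();
        try {
            if (poolingCount >= connections.length) {
                return false;     // pool full, like poolingCount >= maxActive
            }
            connections[poolingCount++] = holder;
            notEmpty.signal();    // wake one waiting borrower
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

A thread blocked inside takeLast() resumes as soon as another thread calls put(); the while loop (rather than an if) re-checks poolingCount after waking, which is the standard guard against spurious wakeups with Condition.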
That basically covers how the Druid pool hands out connections. A few class relationships are worth spelling out.
DruidDataSource holds the pool array:
private volatile DruidConnectionHolder[] connections;
createStatement, commit, rollback and the like all go through DruidPooledConnection, which holds a holder and the owner thread:
protected volatile DruidConnectionHolder holder;
protected final Thread ownerThread;
The holder in turn holds the physical connection and the data source:
protected final Connection conn;
protected final DruidAbstractDataSource dataSource;
**************************
Of Druid's two big parts, this only looked at the pool; the SQL parser remains unread. The pool leans heavily on low-level java.util.concurrent primitives. How the filter chain is implemented is also still to be examined.