Android Source Code Analysis: A Deep Dive into the OkHttp Source

1 - OkHttp 3.7 Source Code Analysis (Part 1): Overall Architecture

Introduction: OkHttp is an open-source project for handling network requests and the most popular lightweight networking framework on Android. It was contributed by Square, the mobile payments company, as a replacement for HttpUrlConnection and Apache HttpClient. As OkHttp has matured, more and more Android developers have adopted it as their networking framework.

The articles in this OkHttp 3.7 source code analysis series are:

· OkHttp Source Code Analysis: Overall Architecture

· OkHttp Source Code Analysis: Interceptors

· OkHttp Source Code Analysis: Task Queue

· OkHttp Source Code Analysis: Caching Strategy

· OkHttp Source Code Analysis: Multiplexing

OkHttp has won the favor of so many developers mainly thanks to the following features:

· Supports HTTPS, HTTP/2, and WebSocket (OkHttp 3.7 dropped SPDY support in favor of HTTP/2)

· Maintains an internal task queue and thread pool, giving friendly support for concurrent access

· Maintains an internal connection pool with multiplexing support, reducing connection setup overhead

· Socket creation supports best-route selection

· Provides an interceptor chain (Interceptor.Chain) for layered processing of requests and responses (e.g., transparent GZIP compression, logging)

To see how OkHttp achieves the features above, I studied the framework's source code repeatedly; this series of posts shares that analysis. Today we start with the overall architecture.

Basic Usage

First, a look at basic usage. OkHttp offers two calling styles:

· Synchronous calls

· Asynchronous calls
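
Before diving into the internals, here is a minimal usage sketch of both styles (the URL is illustrative; execute, enqueue, and Callback are the standard OkHttp 3.x API):

OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
    .url("https://example.com/")  // illustrative URL
    .build();

// Synchronous: blocks the calling thread until the response arrives.
try (Response response = client.newCall(request).execute()) {
  System.out.println(response.code());
}

// Asynchronous: enqueued on the dispatcher; callbacks run on a pool thread.
client.newCall(request).enqueue(new Callback() {
  @Override public void onFailure(Call call, IOException e) { e.printStackTrace(); }
  @Override public void onResponse(Call call, Response response) throws IOException {
    response.close();
  }
});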

Synchronous calls
@Override public Response execute() throws IOException {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  try {
    client.dispatcher().executed(this);
    Response result = getResponseWithInterceptorChain(false);
    if (result == null) throw new IOException("Canceled");
    return result;
  } finally {
    client.dispatcher().finished(this);
  }
}

The call first takes a lock and sets the executed flag, then uses the dispatcher's executed method to add itself to the synchronous queue, then calls getResponseWithInterceptorChain (analyzed later) to perform the HTTP request, and finally calls finished to remove itself from the synchronous queue.

Asynchronous calls
void enqueue(Callback responseCallback, boolean forWebSocket) {
  synchronized (this) {
    if (executed) throw new IllegalStateException("Already Executed");
    executed = true;
  }
  client.dispatcher().enqueue(new AsyncCall(responseCallback, forWebSocket));
}

Again the executed flag is set under a lock, and the work is then wrapped and placed on the asynchronous queue. This introduces a new class, AsyncCall, which extends NamedRunnable, an implementation of Runnable. NamedRunnable names the current thread and, using the template method pattern, puts the thread's work into an execute method.

Interceptors

Beyond synchronous and asynchronous calls, OkHttp also offers the concept of interceptors. Interceptors expose hooks for intercepting requests and server responses. OkHttp composes individual interceptors into an interceptor chain, so different interception logic can run at different layers, somewhat in the spirit of AOP. For details on using interceptors, see the Okhttp-wiki article on Interceptors.
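
As a quick, hedged illustration (the class name LoggingInterceptor is mine; Interceptor, chain.proceed, and the addInterceptor/addNetworkInterceptor builder methods are the real OkHttp API), a trivial interceptor and its registration look like this:

class LoggingInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
    long startNs = System.nanoTime();
    Response response = chain.proceed(request);  // hand off to the rest of the chain
    long tookMs = (System.nanoTime() - startNs) / 1_000_000;
    System.out.println(request.url() + " took " + tookMs + " ms");
    return response;
  }
}

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(new LoggingInterceptor())          // application interceptor
    // .addNetworkInterceptor(new LoggingInterceptor()) // or hook in at the network layer
    .build();

Application interceptors run once per call, while network interceptors run once per network request, so the latter also observe redirects and retries.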

Overall Architecture

[Figure: OkHttp overall architecture]

The figure above shows OkHttp's overall architecture, which divides roughly into the following layers:

· Interface layer: accepts network requests

· Protocol layer: handles protocol logic

· Connection layer: manages network connections, sends requests, and receives server responses

· Cache layer: manages the local cache

· I/O layer: performs the actual data reads and writes

· Interceptor layer: intercepts network access and inserts interception logic

Interface layer

The interface layer receives the user's network requests (synchronous or asynchronous) and initiates the actual network access. OkHttpClient is the client of the OkHttp framework, or more precisely a user-facing facade: all configuration and all requests go through it. Each OkHttpClient maintains its own task queue, connection pool, cache, interceptors, and so on, so an application using OkHttp should share a single global OkHttpClient instance (a sketch of this pattern follows).
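
A hedged sketch of that recommendation (the holder class AppHttp is mine; newBuilder() is the real API and the derived client shares the dispatcher, connection pool, and cache with the original):

public final class AppHttp {
  // One client for the whole app: it owns the task queue, connection pool, and cache.
  public static final OkHttpClient CLIENT = new OkHttpClient();
  private AppHttp() {}
}

// Per-call variations derive from the shared instance instead of creating a new client:
OkHttpClient shortTimeout = AppHttp.CLIENT.newBuilder()
    .readTimeout(1, TimeUnit.SECONDS)
    .build();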

A Call describes an actual request; every user request is a Call instance. Call itself is only an interface defining the call's methods; at runtime OkHttp creates a RealCall for each request, and each RealCall contains an AsyncCall:

  final class AsyncCall extends NamedRunnable {
    private final Callback responseCallback;
 
    AsyncCall(Callback responseCallback) {
      super("OkHttp %s", redactedUrl());
      this.responseCallback = responseCallback;
    }
 
    String host() {
      return originalRequest.url().host();
    }
 
    Request request() {
      return originalRequest;
    }
 
    RealCall get() {
      return RealCall.this;
    }
 
    @Override protected void execute() {
      ...
    }
    ...
}

NamedRunnable, which AsyncCall extends, implements the Runnable interface:

public abstract class NamedRunnable implements Runnable {
  protected final String name;
 
  public NamedRunnable(String format, Object... args) {
    this.name = Util.format(format, args);
  }
 
  @Override public final void run() {
    String oldName = Thread.currentThread().getName();
    Thread.currentThread().setName(name);
    try {
      execute();
    } finally {
      Thread.currentThread().setName(oldName);
    }
  }
 
  protected abstract void execute();
}

So each asynchronous call is ultimately a Runnable run on a worker thread, and executing a Call is the process of running its execute method.

Dispatcher is OkHttp's task queue. It maintains an internal thread pool; when a Call comes in, the Dispatcher finds an idle thread in the pool and runs the call's execute method on it. This part gets a post of its own; see the task-queue article later in this series.

Protocol layer: handles protocol logic

The protocol layer handles protocol logic. OkHttp supports HTTP/1.x, HTTP/2, and WebSocket; version 3.7 dropped support for SPDY and encourages developers to use HTTP/2.

Connection layer: manages network connections, sends requests, and receives server responses

The connection layer, as the name suggests, handles network connections. It contains a connection pool that manages all socket connections: when a new request arrives, OkHttp first checks the pool for a connection meeting the requirements; if one exists the request is sent over it, otherwise a new connection is created.

RealConnection describes a physical socket connection, and the pool holds multiple RealConnection instances. Because HTTP/2 supports multiplexing, one RealConnection can serve multiple requests, so OkHttp introduces StreamAllocation to describe the cost of an actual request. (Logically, one stream corresponds to one Call, but in practice a Call often involves several requests, e.g., redirects or authentication challenges; strictly speaking, one stream corresponds to one request, and one Call corresponds to a group of logically related streams.) A RealConnection is associated with one or more StreamAllocations, so StreamAllocations act as a reference count on the RealConnection: when a connection's reference count drops to zero and it is not reused by another request for some time, it is released.

The connection layer is the core of OkHttp. It too gets its own post; see the related article in this series.

Cache layer: manages the local cache

The cache layer maintains the request cache: when a request already has a suitable local cached response, OkHttp returns the result straight from the cache, saving the network round trip. This part is also covered in its own post later in this series.

I/O layer: performs the actual data reads and writes

The I/O layer performs the actual data reads and writes. Another of OkHttp's strengths is efficient I/O, which comes from its I/O library, Okio.

I plan to cover Okio in a separate series; see the related posts on this blog.

Interceptor layer: intercepts network access and inserts interception logic

The interceptor layer provides an AOP-like interface, letting users hook into each layer and run their own logic on intercepted network traffic. In the next post I walk through an actual network request to explain how interceptors work.

2 - OkHttp 3.7 Source Code Analysis (Part 2): Interceptors & the Implementation of an Actual Request

Introduction: The previous post covered OkHttp's overall architecture; this one walks through a concrete request to show how OkHttp performs network access. Since this is closely tied to OkHttp's interceptor concept, the two topics are covered together.

The articles in this OkHttp 3.7 source code analysis series are:

· OkHttp Source Code Analysis: Overall Architecture

· OkHttp Source Code Analysis: Interceptors

· OkHttp Source Code Analysis: Task Queue

· OkHttp Source Code Analysis: Caching Strategy

· OkHttp Source Code Analysis: Multiplexing

1. Building a Demo

First, build a simple asynchronous request demo:

OkHttpClient client = new OkHttpClient();        
Request request = new Request.Builder()
  .url("http://publicobject.com/helloworld.txt")
  .build();
 
client.newCall(request).enqueue(new Callback() {
  @Override
  public void onFailure(Call call, IOException e) {
    Log.d("OkHttp", "Call Failed:" + e.getMessage());
  }
 
  @Override
  public void onResponse(Call call, Response response) throws IOException {
    Log.d("OkHttp", "Call succeeded:" + response.message());
  }
});

2. Initiating the Request

OkHttpClient.newCall actually creates a RealCall instance:

  /**
   * Prepares the {@code request} to be executed at some point in the future.
   */
  @Override public Call newCall(Request request) {
    return new RealCall(this, request, false /* for web socket */);
  }

RealCall.enqueue simply puts the RealCall onto the task queue, to be executed when the time is right:

  @Override public void enqueue(Callback responseCallback) {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    client.dispatcher().enqueue(new AsyncCall(responseCallback));
  }

As the code shows, the RealCall ends up wrapped in an AsyncCall and placed on the task queue. The queue's dispatch logic is deferred to the task-queue post in this series; for now it is enough to know that the AsyncCall's execute method will eventually run:

[RealCall.java]
    @Override protected void execute() {
      boolean signalledCallback = false;
      try {
        Response response = getResponseWithInterceptorChain();
        if (retryAndFollowUpInterceptor.isCanceled()) {
          signalledCallback = true;
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          signalledCallback = true;
          responseCallback.onResponse(RealCall.this, response);
        }
      } catch (IOException e) {
        if (signalledCallback) {
          // Do not signal the callback twice!
          Platform.get().log(INFO, "Callback failure for " + toLoggableString(), e);
        } else {
          responseCallback.onFailure(RealCall.this, e);
        }
      } finally {
        client.dispatcher().finished(this);
      }
    }

The logic of execute is not complicated. In short it:

· Calls getResponseWithInterceptorChain to obtain the server's response

· Notifies the task dispatcher (client.dispatcher) that the task has finished

getResponseWithInterceptorChain builds an interceptor chain and obtains the server's response by running each interceptor in the chain in turn.

3. Building the Interceptor Chain

First, the implementation of getResponseWithInterceptorChain:

[RealCall.java]
  Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));
 
    Interceptor.Chain chain = new RealInterceptorChain(
        interceptors, null, null, null, 0, originalRequest);
    return chain.proceed(originalRequest);
  }

Its logic falls into two parts:

· Create the interceptors and collect them in a list; these include both user-defined interceptors and the framework's internal ones

· Create an interceptor chain, RealInterceptorChain, and run its proceed method

Next, the implementation of RealInterceptorChain:

[RealInterceptorChain.java]
public final class RealInterceptorChain implements Interceptor.Chain {
  private final List<Interceptor> interceptors;
  private final StreamAllocation streamAllocation;
  private final HttpCodec httpCodec;
  private final RealConnection connection;
  private final int index;
  private final Request request;
  private int calls;
 
  public RealInterceptorChain(List<Interceptor> interceptors, StreamAllocation streamAllocation,
                              HttpCodec httpCodec, RealConnection connection, int index, Request request) {
    this.interceptors = interceptors;
    this.connection = connection;
    this.streamAllocation = streamAllocation;
    this.httpCodec = httpCodec;
    this.index = index;
    this.request = request;
  }
 
  @Override public Connection connection() {
    return connection;
  }
 
  public StreamAllocation streamAllocation() {
    return streamAllocation;
  }
 
  public HttpCodec httpStream() {
    return httpCodec;
  }
 
  @Override public Request request() {
    return request;
  }
 
  @Override public Response proceed(Request request) throws IOException {
    return proceed(request, streamAllocation, httpCodec, connection);
  }
 
  public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
      RealConnection connection) throws IOException {
 
    ......
    // Call the next interceptor in the chain.
    RealInterceptorChain next = new RealInterceptorChain(
        interceptors, streamAllocation, httpCodec, connection, index + 1, request);
    Interceptor interceptor = interceptors.get(index);
    Response response = interceptor.intercept(next);
    
    ...... 
 
    return response;
  }
}

From the core of proceed you can see that it, too, does two things:

· Creates the next interceptor chain, passing index + 1 so that the next chain can only start from the next interceptor

· Runs the intercept method of the interceptor at index, passing it the next chain

Now look at the intercept method of the first interceptor, RetryAndFollowUpInterceptor:

[RetryAndFollowUpInterceptor.java]
public final class RetryAndFollowUpInterceptor implements Interceptor {
    @Override public Response intercept(Chain chain) throws IOException {
    Request request = chain.request();
 
    streamAllocation = new StreamAllocation(
        client.connectionPool(), createAddress(request.url()), callStackTrace);
 
    int followUpCount = 0;
    Response priorResponse = null;
    while (true) {
      if (canceled) {
        streamAllocation.release();
        throw new IOException("Canceled");
      }
 
      Response response = null;
      boolean releaseConnection = true;
      try {
       // run the next interceptor chain's proceed method
        response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);
        releaseConnection = false;
      } catch (RouteException e) {
        // The attempt to connect via a route failed. The request will not have been sent.
        if (!recover(e.getLastConnectException(), false, request)) {
          throw e.getLastConnectException();
        }
        releaseConnection = false;
        continue;
      } catch (IOException e) {
        // An attempt to communicate with a server failed. The request may have been sent.
        boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
        if (!recover(e, requestSendStarted, request)) throw e;
        releaseConnection = false;
        continue;
      } finally {
        // We're throwing an unchecked exception. Release any resources.
        if (releaseConnection) {
          streamAllocation.streamFailed(null);
          streamAllocation.release();
        }
      }
 
      // Attach the prior response if it exists. Such responses never have a body.
      ......
 
      Request followUp = followUpRequest(response);
 
 
      closeQuietly(response.body());
      ...
    }
  }
}

The most important line in this code is:

response = ((RealInterceptorChain) chain).proceed(request, streamAllocation, null, null);

This line runs the proceed method of the next interceptor chain, and we know the next chain will in turn run the next interceptor's intercept method. So execution alternates between interceptors and chains until every interceptor has run; this is the chained execution logic of OkHttp's interceptors. The work an interceptor does in its intercept method falls roughly into three parts:

· Process the request before it is sent

· Call the next interceptor to obtain the response

· Process the response and return it to the previous interceptor (see the skeleton below)
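
A hedged skeleton of this three-phase pattern (the X-Demo-* header names are illustrative; everything else is the standard Interceptor API):

class ThreePhaseInterceptor implements Interceptor {
  @Override public Response intercept(Chain chain) throws IOException {
    // Phase 1: process the request before it is sent.
    Request request = chain.request().newBuilder()
        .header("X-Demo-Trace", "1")        // illustrative header
        .build();

    // Phase 2: hand off to the next interceptor and wait for the response.
    Response response = chain.proceed(request);

    // Phase 3: process the response before handing it back upstream.
    return response.newBuilder()
        .header("X-Demo-Handled", "true")   // illustrative header
        .build();
  }
}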

This is the core logic of OkHttp's interceptor mechanism: a network request is really a series of interceptors each running its intercept method. Besides any user-defined interceptors, a handful of core interceptors implement the actual network access. In order, they are:

· RetryAndFollowUpInterceptor

· BridgeInterceptor

· CacheInterceptor

· ConnectInterceptor

· CallServerInterceptor

4. RetryAndFollowUpInterceptor

As the code above shows, RetryAndFollowUpInterceptor is responsible for two things:

· Retrying after a failed network request

· When the server asks for a redirect, issuing the new request directly, reusing the current connection when conditions allow

5. BridgeInterceptor

public final class BridgeInterceptor implements Interceptor {
  private final CookieJar cookieJar;
 
  public BridgeInterceptor(CookieJar cookieJar) {
    this.cookieJar = cookieJar;
  }
 
  @Override public Response intercept(Chain chain) throws IOException {
    Request userRequest = chain.request();
    Request.Builder requestBuilder = userRequest.newBuilder();
 
    RequestBody body = userRequest.body();
    if (body != null) {
      MediaType contentType = body.contentType();
      if (contentType != null) {
        requestBuilder.header("Content-Type", contentType.toString());
      }
 
      long contentLength = body.contentLength();
      if (contentLength != -1) {
        requestBuilder.header("Content-Length", Long.toString(contentLength));
        requestBuilder.removeHeader("Transfer-Encoding");
      } else {
        requestBuilder.header("Transfer-Encoding", "chunked");
        requestBuilder.removeHeader("Content-Length");
      }
    }
 
    if (userRequest.header("Host") == null) {
      requestBuilder.header("Host", hostHeader(userRequest.url(), false));
    }
 
    if (userRequest.header("Connection") == null) {
      requestBuilder.header("Connection", "Keep-Alive");
    }
 
    // If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
    // the transfer stream.
    boolean transparentGzip = false;
    if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
      transparentGzip = true;
      requestBuilder.header("Accept-Encoding", "gzip");
    }
 
    List<Cookie> cookies = cookieJar.loadForRequest(userRequest.url());
    if (!cookies.isEmpty()) {
      requestBuilder.header("Cookie", cookieHeader(cookies));
    }
 
    if (userRequest.header("User-Agent") == null) {
      requestBuilder.header("User-Agent", Version.userAgent());
    }
 
    Response networkResponse = chain.proceed(requestBuilder.build());
 
    HttpHeaders.receiveHeaders(cookieJar, userRequest.url(), networkResponse.headers());
 
    Response.Builder responseBuilder = networkResponse.newBuilder()
        .request(userRequest);
 
    if (transparentGzip
        && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
        && HttpHeaders.hasBody(networkResponse)) {
      GzipSource responseBody = new GzipSource(networkResponse.body().source());
      Headers strippedHeaders = networkResponse.headers().newBuilder()
          .removeAll("Content-Encoding")
          .removeAll("Content-Length")
          .build();
      responseBuilder.headers(strippedHeaders);
      responseBuilder.body(new RealResponseBody(strippedHeaders, Okio.buffer(responseBody)));
    }
 
    return responseBuilder.build();
  }
}

BridgeInterceptor is mainly responsible for:

· Setting the content length and content encoding

· Setting up gzip compression and decompressing the received content, sparing the application layer the trouble of handling decompression itself

· Adding cookies

· Setting other headers such as User-Agent, Host, and Connection: Keep-Alive; Keep-Alive is a prerequisite for connection reuse

6. CacheInterceptor

[CacheInterceptor.intercept]
 
  @Override public Response intercept(Chain chain) throws IOException {
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;
 
    long now = System.currentTimeMillis();
 
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;
 
    if (cache != null) {
      cache.trackResponse(strategy);
    }
 
    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }
 
    // If we're forbidden from using the network and the cache is insufficient, fail.
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }
 
    // If we don't need the network, we're done.
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }
 
    Response networkResponse = null;
    try {
      networkResponse = chain.proceed(networkRequest);
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }
 
    // If we have a cache response too, then we're doing a conditional get.
    if (cacheResponse != null) {
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();
 
        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response);
        return response;
      } else {
        closeQuietly(cacheResponse.body());
      }
    }
 
    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();
 
    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // Offer this request to the cache.
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }
 
      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }
 
    return response;
  }

CacheInterceptor's job is clear: it manages the cache

· When a request has a suitable cached response, return it directly

· When the server reports changed content, update the current cache

· When a cache entry becomes invalid, delete it

7. ConnectInterceptor

[ConnectInterceptor.java]
public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;
 
  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }
 
  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();
 
    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();
 
    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

The key work in ConnectInterceptor's intercept method happens in these two lines:

HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
RealConnection connection = streamAllocation.connection();

newStream finds a suitable connection for the current request, possibly reusing one from the pool and possibly creating a new one (the connection pool makes that decision), and returns the codec for the new stream; connection() then simply returns the connection that was just allocated.

8. CallServerInterceptor

[CallServerInterceptor.java]
  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    HttpCodec httpCodec = realChain.httpStream();
    StreamAllocation streamAllocation = realChain.streamAllocation();
    RealConnection connection = (RealConnection) realChain.connection();
    Request request = realChain.request();
 
    long sentRequestMillis = System.currentTimeMillis();
    httpCodec.writeRequestHeaders(request);
 
    Response.Builder responseBuilder = null;
      ......
 
    httpCodec.finishRequest();
 
    if (responseBuilder == null) {
      responseBuilder = httpCodec.readResponseHeaders(false);
    }
 
    Response response = responseBuilder
        .request(request)
        .handshake(streamAllocation.connection().handshake())
        .sentRequestAtMillis(sentRequestMillis)
        .receivedResponseAtMillis(System.currentTimeMillis())
        .build();
 
    int code = response.code();
    if (forWebSocket && code == 101) {
      // Connection is upgrading, but we need to ensure interceptors see a non-null response body.
      response = response.newBuilder()
          .body(Util.EMPTY_RESPONSE)
          .build();
    } else {
      response = response.newBuilder()
          .body(httpCodec.openResponseBody(response))
          .build();
    }
 
    if ("close".equalsIgnoreCase(response.request().header("Connection"))
        || "close".equalsIgnoreCase(response.header("Connection"))) {
      streamAllocation.noNewStreams();
    }
 
    if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
      throw new ProtocolException(
          "HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
    }
 
    return response;
  }

CallServerInterceptor issues the real request to the server and, once the server replies, reads the response and returns it.

9. Overall Flow

Those are the core steps of an entire network request, summarized in the figure below:
[Figure: the request's path through the interceptor chain]

3 - OkHttp 3.7 Source Code Analysis (Part 3): Task Queue

Introduction: OkHttp's task queue maintains an internal thread pool for executing the actual network requests. The biggest benefit of a thread pool is that thread reuse cuts the overhead of non-core work.

The articles in this OkHttp 3.7 source code analysis series are:

· OkHttp Source Code Analysis: Overall Architecture

· OkHttp Source Code Analysis: Interceptors

· OkHttp Source Code Analysis: Task Queue

· OkHttp Source Code Analysis: Caching Strategy

· OkHttp Source Code Analysis: Multiplexing

As mentioned in earlier posts, part of OkHttp's efficiency comes from maintaining an internal thread pool, which makes asynchronous requests convenient and cheap to run. This post describes OkHttp's task queue mechanism in detail.

1. Advantages of a Thread Pool

OkHttp's task queue maintains an internal thread pool for executing the actual network requests; the pool's biggest benefit is that thread reuse cuts the overhead of non-core work.

Multithreading lets multiple threads run on a processing unit; it can significantly reduce processor idle time and increase throughput. Used badly, however, it can increase the time a single task takes. A simple example:

Suppose the time to complete one task on a server is T, where:

T1 = time to create the thread
T2 = time to execute the task on the thread, including time spent on inter-thread synchronization
T3 = time to destroy the thread

Clearly T = T1 + T2 + T3. (This is a greatly simplified model.) T1 and T3 are pure threading overhead (in Java, threads map onto native pthreads, created through system calls). We want to shrink T1 and T3 to reduce T, yet plenty of code creates and destroys threads so frequently that T1 and T3 take a significant share of T, highlighting threading's weakness (T1, T3) rather than its strength (concurrency).

Thread-pool technology is precisely about reducing or amortizing T1 and T3 to improve server performance:

1. Caching threads removes the cost of repeatedly creating and destroying them

2. Capping the number of threads avoids both CPU idling when threads are too few (e.g., all stuck on I/O) and the JVM memory pressure and context-switch system-call cost when threads are too many

Similar techniques include socket connection pools, DB connection pools, and generic object pools such as CommonPool (used by Jedis, for example).

2. OkHttp's Task Queue

OkHttp's task queue consists of two main parts:

· The task dispatcher (Dispatcher), which finds a suitable worker thread for each task

· The thread pool that executes the request tasks

public final class Dispatcher {
  private int maxRequests = 64;
  private int maxRequestsPerHost = 5;
  private Runnable idleCallback;
 
  /** Executes calls. Created lazily. */
  private ExecutorService executorService;
 
  /** Ready async calls in the order they'll be run. */
  private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();
 
  /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();
 
  /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
 
  public Dispatcher(ExecutorService executorService) {
    this.executorService = executorService;
  }
 
  public Dispatcher() {
  }
 
  public synchronized ExecutorService executorService() {
    if (executorService == null) {
      executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
          new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp Dispatcher", false));
    }
    return executorService;
  }
  
  ...
}

The fields are:

· readyAsyncCalls: asynchronous tasks waiting to run

· runningAsyncCalls: asynchronous tasks currently running

· runningSyncCalls: synchronous tasks currently running

· executorService: the task queue's thread pool:

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             threadFactory, defaultHandler);
    }

· int corePoolSize: the minimum number of threads to keep (idle or active); if 0, all threads are destroyed after a period of idleness
· int maximumPoolSize: the maximum number of threads the pool may grow to; beyond that, the rejection policy decides what happens to new tasks
· long keepAliveTime: how long an idle thread beyond corePoolSize may live, similar to HTTP Keep-Alive
· TimeUnit unit: the time unit, usually seconds
· BlockingQueue workQueue: the FIFO work queue; note that, unlike Picasso, OkHttp does not use a priority queue
· ThreadFactory threadFactory: the factory for individual threads, where you can add logging, mark threads as daemons (so they end when the JVM exits), and so on

So OkHttp builds a thread pool with bounds [0, Integer.MAX_VALUE]: it keeps no minimum number of threads and creates new ones freely, idle threads live only 60 seconds, the work queue is a SynchronousQueue that holds no elements (a direct handoff), and threads come from a factory named "OkHttp Dispatcher".

In practice this means that when 10 concurrent requests arrive, the pool creates ten threads; once the work is done, the pool shuts them all down after 60 seconds of idleness.
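
If the defaults don't fit, the Dispatcher(ExecutorService) constructor shown above lets you swap in your own pool. A hedged configuration sketch (the sizing numbers are only examples; setMaxRequests, setMaxRequestsPerHost, and the dispatcher builder method are real OkHttp 3.x APIs):

// A bounded pool instead of the default unbounded one (sizing is illustrative).
ExecutorService pool = Executors.newFixedThreadPool(16);

Dispatcher dispatcher = new Dispatcher(pool);
dispatcher.setMaxRequests(32);        // default is 64
dispatcher.setMaxRequestsPerHost(4);  // default is 5

OkHttpClient client = new OkHttpClient.Builder()
    .dispatcher(dispatcher)
    .build();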

3. The Dispatcher

The dispatcher works a little like Nginx's reverse proxying: it routes tasks to suitable idle threads, achieving non-blocking, highly available, highly concurrent connections.

1. Synchronous requests

A synchronous request with OkHttp is typically constructed like this:

OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
    .url("http://publicobject.com/helloworld.txt")
    .build();
Response response = client.newCall(request).execute();

Now look at RealCall.execute:

  @Override public Response execute() throws IOException {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    try {
      client.dispatcher().executed(this);
      Response result = getResponseWithInterceptorChain();
      if (result == null) throw new IOException("Canceled");
      return result;
    } finally {
      client.dispatcher().finished(this);
    }
  }

The execution logic of a synchronous call is:

· Add the task to the dispatcher

· Run the task

· On completion, notify the dispatcher that the task is done, so it is removed from the queue

2. Asynchronous requests

An asynchronous request is typically constructed like this:

        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("http://publicobject.com/helloworld.txt")
                .build();
 
        client.newCall(request).enqueue(new Callback() {
            @Override
            public void onFailure(Call call, IOException e) {
                Log.d("OkHttp", "Call Failed:" + e.getMessage());
            }
 
            @Override
            public void onResponse(Call call, Response response) throws IOException {
                Log.d("OkHttp", "Call succeeded:" + response.message());
            }
        });

When the client's request is enqueued, the code shows that the enqueue is actually performed by the Dispatcher:

synchronized void enqueue(AsyncCall call) {
  if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      // add to the running queue
    runningAsyncCalls.add(call);
       // hand it to the thread pool for execution
    executorService().execute(call);
  } else {
      // otherwise put it on the ready queue to wait
    readyAsyncCalls.add(call);
  }
}

If both of these conditions hold:

· the number of running requests is below the maximum (64)

· the number of requests to the same host is below the per-host limit (5)

the task is inserted into the running queue and executed right away; otherwise it is placed on the ready queue.

Next, AsyncCall.execute:

@Override protected void execute() {
  boolean signalledCallback = false;
  try {
      // perform the slow I/O work
    Response response = getResponseWithInterceptorChain(forWebSocket);
    if (canceled) {
      signalledCallback = true;
      // callback; note it runs on a pool thread, not (as one might assume) on the main thread
      responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
    } else {
      signalledCallback = true;
      // callback; same as above
      responseCallback.onResponse(RealCall.this, response);
    }
  } catch (IOException e) {
    if (signalledCallback) {
      // Do not signal the callback twice!
      logger.log(Level.INFO, "Callback failure for " + toLoggableString(), e);
    } else {
      responseCallback.onFailure(RealCall.this, e);
    }
  } finally {
      // the most important line
    client.dispatcher().finished(this);
  }
}

When the task completes, successfully or not, dispatcher.finished is called to tell the dispatcher the task is over:

  private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    Runnable idleCallback;
    synchronized (this) {
      if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
      if (promoteCalls) promoteCalls();
      runningCallsCount = runningCallsCount();
      idleCallback = this.idleCallback;
    }
 
    if (runningCallsCount == 0 && idleCallback != null) {
      idleCallback.run();
    }
  }

· A slot has been freed, so promoteCalls is called to promote waiting tasks

· If the whole thread pool has gone idle, the idle notification callback (idleCallback) is run

Now promoteCalls:

  private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // Already running max capacity.
    if (readyAsyncCalls.isEmpty()) return; // No ready calls to promote.
 
    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall call = i.next();
 
      if (runningCallsForHost(call) < maxRequestsPerHost) {
        i.remove();
        runningAsyncCalls.add(call);
        executorService().execute(call);
      }
 
      if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
    }
  }

promoteCalls is also simple: scan the ready queue, move eligible tasks onto the running queue, and execute them.

4. Summary

Those are the implementation details of the task queue. Its main characteristics:

· OkHttp's Dispatcher, somewhat like Nginx, works with the thread pool to achieve highly concurrent, low-blocking execution

· OkHttp uses Deques as its queues, processed first-in, first-out

· A highlight is the call to finished inside try/finally, which advances the waiting queue proactively instead of relying on locks or wait/notify, greatly reducing coding complexity

4 - OkHttp 3.7 Source Code Analysis (Part 4): Caching Strategy

Introduction: Using the local cache well can significantly cut network overhead and response latency. HTTP defines a number of header fields for cache control. This post covers the implementation details of caching in OkHttp.

The articles in this OkHttp 3.7 source code analysis series are:

· OkHttp Source Code Analysis: Overall Architecture

· OkHttp Source Code Analysis: Interceptors

· OkHttp Source Code Analysis: Task Queue

· OkHttp Source Code Analysis: Caching Strategy

· OkHttp Source Code Analysis: Multiplexing

1. HTTP Caching Strategy

First, the cache-related fields in the HTTP protocol.

1.1 Expires

An expiry time, usually sent in the server's response headers to tell the client when the resource expires. When the client needs the same resource again it first checks the expiry time: if the cached copy has not yet expired it is returned directly, otherwise a fresh request is made.

1.2 Cache-Control

A relative value, in seconds, giving how long the resource stays valid. Cache-Control has higher priority than Expires:

Cache-Control:max-age=31536000,public
1.3 Conditional GET Requests

1.3.1 Last-Modified / If-Modified-Since

On the client's first request, the server returns:

Last-Modified: Tue, 12 Jan 2016 09:31:27 GMT

On a subsequent request, the client can add this header:

If-Modified-Since: Tue, 12 Jan 2016 09:31:27 GMT

If the resource has not been modified in the meantime, the server returns 304, telling the client to reuse its local cache.

1.3.2 ETag

An ETag is a digest of the resource file, which can be used to tell whether the file has changed. On the client's first request for a resource, the server returns:

ETag: "5694c7ef-24dc"

On later requests the client can add this field:

If-None-Match: "5694c7ef-24dc"

If the file is unchanged, the server returns 304 and the client can reuse its local cache.

1.4 no-cache/no-store

Do not use the cache (no-cache forces revalidation with the server; no-store forbids storing the response at all)

1.5 only-if-cached

Use only the cache, failing rather than going to the network
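
To make these directives concrete, here is a hedged sketch of driving them from OkHttp (context stands for an Android Context, and the directory name, sizes, and URL are illustrative; Cache, CacheControl.FORCE_CACHE/FORCE_NETWORK, and Request.Builder#cacheControl are the real OkHttp 3.x API):

// Enable the response cache (location and size are illustrative).
Cache cache = new Cache(new File(context.getCacheDir(), "http_cache"), 10L * 1024 * 1024);
OkHttpClient client = new OkHttpClient.Builder()
    .cache(cache)
    .build();

// only-if-cached: answer from the cache or fail with 504; never touch the network.
Request fromCacheOnly = new Request.Builder()
    .url("https://example.com/resource")  // illustrative URL
    .cacheControl(CacheControl.FORCE_CACHE)
    .build();

// The no-cache flavor: skip the cache entirely and go to the network.
Request fromNetworkOnly = new Request.Builder()
    .url("https://example.com/resource")
    .cacheControl(CacheControl.FORCE_NETWORK)
    .build();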

2. Cache Source Code Analysis

All of OkHttp's cache work happens in CacheInterceptor. The cache subsystem has a few key classes:

· Cache: the cache manager; internally it holds a DiskLruCache that writes the cache to the file system:

 * <h3>Cache Optimization</h3>
 *
 * <p>To measure cache effectiveness, this class tracks three statistics:
 * <ul>
 *     <li><strong>{@linkplain #requestCount() Request Count:}</strong> the number of HTTP
 *         requests issued since this cache was created.
 *     <li><strong>{@linkplain #networkCount() Network Count:}</strong> the number of those
 *         requests that required network use.
 *     <li><strong>{@linkplain #hitCount() Hit Count:}</strong> the number of those requests
 *         whose responses were served by the cache.
 * </ul>
 *
 * Sometimes a request will result in a conditional cache hit. If the cache contains a stale copy of
 * the response, the client will issue a conditional {@code GET}. The server will then send either
 * the updated response if it has changed, or a short 'not modified' response if the client's copy
 * is still valid. Such responses increment both the network count and hit count.
 *
 * <p>The best way to improve the cache hit rate is by configuring the web server to return
 * cacheable responses. Although this client honors all <a
 * href="http://tools.ietf.org/html/rfc7234">HTTP/1.1 (RFC 7234)</a> cache headers, it doesn't cache
 * partial responses.

Cache tracks three statistics, requestCount, networkCount, and hitCount, to measure and improve cache effectiveness (a usage sketch follows the list below)

· CacheStrategy: the caching strategy. It holds a request and a response, describing whether the response should come from the network, from the cache, or from both

[CacheStrategy.java]
/**
* Given a request and cached response, this figures out whether to use the network, the cache, or
* both.
*
* <p>Selecting a cache strategy may add conditions to the request (like the "If-Modified-Since"
* header for conditional GETs) or warnings to the cached response (if the cached data is
* potentially stale).
 */
public final class CacheStrategy {
 /** The request to send on the network, or null if this call doesn't use the network. */
 public final Request networkRequest;
 
 /** The cached response to return or validate; or null if this call doesn't use a cache. */
 public final Response cacheResponse;
 ......
}

· CacheStrategy$Factory: the strategy factory, which produces the appropriate caching strategy for an actual request
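
As an aside, the three statistics mentioned above are exposed directly on okhttp3.Cache. A minimal, hedged sketch, assuming a client configured with a cache as in the section 1.5 example:

Cache cache = client.cache();  // the Cache configured on the client; may be null
if (cache != null) {
  System.out.println("requests: " + cache.requestCount());
  System.out.println("network:  " + cache.networkCount());
  System.out.println("hits:     " + cache.hitCount());
}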

Since the actual cache work all happens in CacheInterceptor, let's look at the source of its core intercept method:

[CacheInterceptor.java]
  @Override public Response intercept(Chain chain) throws IOException {
    // first, try to fetch a cached response
    Response cacheCandidate = cache != null
        ? cache.get(chain.request())
        : null;
 
    long now = System.currentTimeMillis();
 
      // compute the cache strategy
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    Request networkRequest = strategy.networkRequest;
    Response cacheResponse = strategy.cacheResponse;
 
    // if a cache is present, update its statistics (e.g., hit rate)
    if (cache != null) {
      cache.trackResponse(strategy);
    }
 
    // if the cached candidate is not applicable, close it
    if (cacheCandidate != null && cacheResponse == null) {
      closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }
 
    // if the network is forbidden and there is no suitable cache, fail with 504
    if (networkRequest == null && cacheResponse == null) {
      return new Response.Builder()
          .request(chain.request())
          .protocol(Protocol.HTTP_1_1)
          .code(504)
          .message("Unsatisfiable Request (only-if-cached)")
          .body(Util.EMPTY_RESPONSE)
          .sentRequestAtMillis(-1L)
          .receivedResponseAtMillis(System.currentTimeMillis())
          .build();
    }
 
    // if there is a cache and the network is not needed, return the cached result directly
    if (networkRequest == null) {
      return cacheResponse.newBuilder()
          .cacheResponse(stripBody(cacheResponse))
          .build();
    }
 
    // try to fetch the response over the network
    Response networkResponse = null;
    try {
      networkResponse = chain.proceed(networkRequest);
    } finally {
      // If we're crashing on I/O or otherwise, don't leak the cache body.
      if (networkResponse == null && cacheCandidate != null) {
        closeQuietly(cacheCandidate.body());
      }
    }
 
    // if we have a cached response and also made a request, this is a conditional GET
    if (cacheResponse != null) {
      // if the server returned NOT_MODIFIED, the cache is valid: merge the cached and network responses
      if (networkResponse.code() == HTTP_NOT_MODIFIED) {
        Response response = cacheResponse.newBuilder()
            .headers(combine(cacheResponse.headers(), networkResponse.headers()))
            .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
            .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();
        networkResponse.body().close();
 
        // Update the cache after combining headers but before stripping the
        // Content-Encoding header (as performed by initContentStream()).
        cache.trackConditionalCacheHit();
        cache.update(cacheResponse, response);
        return response;
      } else { // the resource has changed, so close the old cached body
        closeQuietly(cacheResponse.body());
      }
    }
 
    Response response = networkResponse.newBuilder()
        .cacheResponse(stripBody(cacheResponse))
        .networkResponse(stripBody(networkResponse))
        .build();
 
    if (cache != null) {
      if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
        // write the network response into the cache
        CacheRequest cacheRequest = cache.put(response);
        return cacheWritingResponse(cacheRequest, response);
      }
 
      if (HttpMethod.invalidatesCache(networkRequest.method())) {
        try {
          cache.remove(networkRequest);
        } catch (IOException ignored) {
          // The cache cannot be written.
        }
      }
    }
 
    return response;
  }

The core logic is annotated directly in the code above. As you can see, almost every action is driven by the CacheStrategy, so next let's see how a strategy is produced. The logic lives in CacheStrategy$Factory.get():

[CacheStrategy$Factory]
 
    /**
     * Returns a strategy to satisfy {@code request} using the a cached response {@code response}.
     */
    public CacheStrategy get() {
      CacheStrategy candidate = getCandidate();
 
      if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
        // We're forbidden from using the network and the cache is insufficient.
        return new CacheStrategy(null, null);
      }
 
      return candidate;
    }
 
    /** Returns a strategy to use assuming the request can use the network. */
    private CacheStrategy getCandidate() {
      // no local cache: make a network request
      if (cacheResponse == null) {
        return new CacheStrategy(request, null);
      }
 
      // HTTPS request but the cached response has no TLS handshake: make a network request
      if (request.isHttps() && cacheResponse.handshake() == null) {
        return new CacheStrategy(request, null);
      }
 
      // If this response shouldn't have been stored, it should never be used
      // as a response source. This check should be redundant as long as the
      // persistence store is well-behaved and the rules are constant.
      if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
      }
        
 
      // the request says no-cache, or already carries conditional headers: make a network request
      CacheControl requestCaching = request.cacheControl();
      if (requestCaching.noCache() || hasConditions(request)) {
        return new CacheStrategy(request, null);
      }
 
      // ageMillis: the age of the cached response
      long ageMillis = cacheResponseAge();
      // freshMillis: the freshness lifetime of the cached response
      long freshMillis = computeFreshnessLifetime();
 
      if (requestCaching.maxAgeSeconds() != -1) {
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
      }
 
      long minFreshMillis = 0;
      if (requestCaching.minFreshSeconds() != -1) {
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
      }
 
      long maxStaleMillis = 0;
      CacheControl responseCaching = cacheResponse.cacheControl();
      if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
      }
 
      // if age + min-fresh >= max-age but age + min-fresh < max-age + max-stale, the cache has
      // expired but may still be served, with a 110 warning code added to the headers
      if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
          builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
          builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        return new CacheStrategy(null, builder.build());
      }
 
      // otherwise issue a conditional GET
      String conditionName;
      String conditionValue;
      if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
      } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
      } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
      } else {
        return new CacheStrategy(request, null); // No condition! Make a regular request.
      }
 
      Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
      Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
 
      Request conditionalRequest = request.newBuilder()
          .headers(conditionalRequestHeaders.build())
          .build();
      return new CacheStrategy(conditionalRequest, cacheResponse);
    }

The core logic is in the getCandidate function, which is essentially an implementation of the HTTP caching rules described in section 1; the key steps are annotated in the code above.

3. DiskLruCache

Internally, Cache uses DiskLruCache to create, read, and clean up the cache at the file system level. The main structure of DiskLruCache:

public final class DiskLruCache implements Closeable, Flushable {
  
  final FileSystem fileSystem;
  final File directory;
  private final File journalFile;
  private final File journalFileTmp;
  private final File journalFileBackup;
  private final int appVersion;
  private long maxSize;
  final int valueCount;
  private long size = 0;
  BufferedSink journalWriter;
  final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<>(0, 0.75f, true);
 
  // Must be read and written when synchronized on 'this'.
  boolean initialized;
  boolean closed;
  boolean mostRecentTrimFailed;
  boolean mostRecentRebuildFailed;
 
  /**
   * To differentiate between old and current snapshots, each entry is given a sequence number each
   * time an edit is committed. A snapshot is stale if its sequence number is not equal to its
   * entry's sequence number.
   */
  private long nextSequenceNumber = 0;
 
  /** Used to run 'cleanupRunnable' for journal rebuilds. */
  private final Executor executor;
  private final Runnable cleanupRunnable = new Runnable() {
    public void run() {
        ......
    }
  };
  ...
  }
3.1 journalFile

DiskLruCache's internal journal file: every cache read and write appends a journal record, and DiskLruCache analyzes the journal to reconstruct and validate the cache. The journal format:

      libcore.io.DiskLruCache
      1
      100
      2
 
      CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
      DIRTY 335c4c6028171cfddfbaae1a9c313c52
      CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
      REMOVE 335c4c6028171cfddfbaae1a9c313c52
      DIRTY 1ab96a171faeeee38496d8b330771a7a
      CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
      READ 335c4c6028171cfddfbaae1a9c313c52
      READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
     
     The first 5 lines are fixed: the constant libcore.io.DiskLruCache; the DiskLruCache version; the application version; valueCount (explained below); and a blank line.
     
     Each subsequent line records one state change of a cache entry, in the format [state (DIRTY, CLEAN, READ, REMOVE), key, optional state-specific values]:
     - DIRTY: an entry is being created or updated. Every successful DIRTY record should be matched by a CLEAN or REMOVE; a DIRTY without its expected CLEAN/REMOVE means the operation failed and the entry must be dropped from lruEntries
     - CLEAN: the entry was written successfully and can be read normally. A CLEAN line also records the length of each of its values
     - READ: a cache read
     - REMOVE: a cache removal
     

The journal is used in four main situations:

· When DiskLruCache initializes, it replays the journal to build the cache container lruEntries, filtering out entries whose operations did not complete; see DiskLruCache.readJournalLine and DiskLruCache.processJournal

· After initialization, the journal is rebuilt and compacted so it does not grow without bound; see DiskLruCache.rebuildJournal

· Every cache operation is appended to the journal for use by the next initialization

· When redundant records accumulate, the cleanupRunnable task rebuilds the journal

3.2 DiskLruCache.Entry

Each DiskLruCache.Entry corresponds to one cache record:

  private final class Entry {
    final String key;
 
    /** Lengths of this entry's files. */
    final long[] lengths;
    final File[] cleanFiles;
    final File[] dirtyFiles;
 
    /** True if this entry has ever been published. */
    boolean readable;
 
    /** The ongoing edit or null if this entry is not being edited. */
    Editor currentEditor;
 
    /** The sequence number of the most recently committed edit to this entry. */
    long sequenceNumber;
 
    Entry(String key) {
      this.key = key;
 
      lengths = new long[valueCount];
      cleanFiles = new File[valueCount];
      dirtyFiles = new File[valueCount];
 
      // The names are repetitive so re-use the same builder to avoid allocations.
      StringBuilder fileBuilder = new StringBuilder(key).append('.');
      int truncateTo = fileBuilder.length();
      for (int i = 0; i < valueCount; i++) {
        fileBuilder.append(i);
        cleanFiles[i] = new File(directory, fileBuilder.toString());
        fileBuilder.append(".tmp");
        dirtyFiles[i] = new File(directory, fileBuilder.toString());
        fileBuilder.setLength(truncateTo);
      }
    }
    ...
     
        /**
     * Returns a snapshot of this entry. This opens all streams eagerly to guarantee that we see a
     * single published snapshot. If we opened streams lazily then the streams could come from
     * different edits.
     */
    Snapshot snapshot() {
      if (!Thread.holdsLock(DiskLruCache.this)) throw new AssertionError();
 
      Source[] sources = new Source[valueCount];
      long[] lengths = this.lengths.clone(); // Defensive copy since these can be zeroed out.
      try {
        for (int i = 0; i < valueCount; i++) {
          sources[i] = fileSystem.source(cleanFiles[i]);
        }
        return new Snapshot(key, sequenceNumber, sources, lengths);
      } catch (FileNotFoundException e) {
        // A file must have been deleted manually!
        for (int i = 0; i < valueCount; i++) {
          if (sources[i] != null) {
            Util.closeQuietly(sources[i]);
          } else {
            break;
          }
        }
        // Since the entry is no longer valid, remove it so the metadata is accurate (i.e. the cache
        // size.)
        try {
          removeEntry(this);
        } catch (IOException ignored) {
        }
        return null;
      }
    }
  }

An Entry consists mainly of the following parts:

· key: each cache entry is identified by a key, here the MD5 hex string of the request URL

· cleanFiles/dirtyFiles: each Entry maps to several files, the count given by DiskLruCache.valueCount. In OkHttp valueCount is 2, so each entry has 2 cleanFiles and 2 dirtyFiles: the first file of each pair holds the entry's metadata (URL, creation time, SSL handshake, etc.), the second holds the actual cached content. cleanFiles hold the stable, readable state of the cache; dirtyFiles hold the state being created or updated

· currentEditor: the entry's editor, through which all operations on the entry go; the editor synchronizes access internally

3.3 cleanupRunnable

The cleanup task, used to trim the cache and rebuild/compact the journal:

  private final Runnable cleanupRunnable = new Runnable() {
    public void run() {
      synchronized (DiskLruCache.this) {
        if (!initialized | closed) {
          return; // Nothing to do
        }
 
        try {
          trimToSize();
        } catch (IOException ignored) {
          mostRecentTrimFailed = true;
        }
 
        try {
          if (journalRebuildRequired()) {
            rebuildJournal();
            redundantOpCount = 0;
          }
        } catch (IOException e) {
          mostRecentRebuildFailed = true;
          journalWriter = Okio.buffer(Okio.blackhole());
        }
      }
    }
  };

Its trigger condition is in the journalRebuildRequired() method:

  /**
   * We only rebuild the journal when it will halve the size of the journal and eliminate at least
   * 2000 ops.
   */
  boolean journalRebuildRequired() {
    final int redundantOpCompactThreshold = 2000;
    return redundantOpCount >= redundantOpCompactThreshold
        && redundantOpCount >= lruEntries.size();
  }

The journal is rebuilt only when the redundant records number at least 2000 and at least match the number of live entries, i.e., when a rebuild would roughly halve the journal.

3.4 Snapshot

A snapshot records the content of a particular cache entry at a particular moment. Every request to DiskLruCache returns a snapshot of the target entry; the logic is in DiskLruCache.get:

[DiskLruCache.java]
  /**
   * Returns a snapshot of the entry named {@code key}, or null if it doesn't exist is not currently
   * readable. If a value is returned, it is moved to the head of the LRU queue.
   */
  public synchronized Snapshot get(String key) throws IOException {
    initialize();
 
    checkNotClosed();
    validateKey(key);
    Entry entry = lruEntries.get(key);
    if (entry == null || !entry.readable) return null;
 
    Snapshot snapshot = entry.snapshot();
    if (snapshot == null) return null;
 
    redundantOpCount++;
    // record the read in the journal
    journalWriter.writeUtf8(READ).writeByte(' ').writeUtf8(key).writeByte('\n');
    if (journalRebuildRequired()) {
      executor.execute(cleanupRunnable);
    }
 
    return snapshot;
  }
3.5 lruEntries

The container that manages the cache entries; its data structure is a LinkedHashMap. The LRU eviction behavior falls out of LinkedHashMap's own access-ordering (a small illustration follows).
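
A tiny illustration of that mechanism, independent of OkHttp: constructing the map with accessOrder = true (exactly as in the field declaration above) makes iteration order follow recency of access, so the eldest entry is always the LRU eviction candidate:

LinkedHashMap<String, String> lru = new LinkedHashMap<>(0, 0.75f, true);
lru.put("a", "1");
lru.put("b", "2");
lru.put("c", "3");
lru.get("a");                      // touching "a" moves it to the most-recent end
System.out.println(lru.keySet());  // prints [b, c, a]; "b" is now the eviction candidate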

3.6 FileSystem

An Okio-based wrapper around File that simplifies I/O operations.

3.7 DiskLruCache.edit

DiskLruCache can be seen as the file-system-level implementation of Cache, so their basic operations map one-to-one:

· Cache.get() —> DiskLruCache.get()

· Cache.put() —> DiskLruCache.edit() // cache insertion

· Cache.remove() —> DiskLruCache.remove()

· Cache.update() —> DiskLruCache.edit() // cache update

The get operation was covered in 3.4, and remove is fairly simple; put and update are broadly similar, so for space reasons only Cache.put is walked through here. The rest is easy to read from the code:

[okhttp3.Cache.java]
  CacheRequest put(Response response) {
    String requestMethod = response.request().method();
 
    if (HttpMethod.invalidatesCache(response.request().method())) {
      try {
        remove(response.request());
      } catch (IOException ignored) {
        // The cache cannot be written.
      }
      return null;
    }
    if (!requestMethod.equals("GET")) {
      // Don't cache non-GET responses. We're technically allowed to cache
      // HEAD requests and some POST requests, but the complexity of doing
      // so is high and the benefit is low.
      return null;
    }
 
    if (HttpHeaders.hasVaryAll(response)) {
      return null;
    }
 
    Entry entry = new Entry(response);
    DiskLruCache.Editor editor = null;
    try {
      editor = cache.edit(key(response.request().url()));
      if (editor == null) {
        return null;
      }
      entry.writeTo(editor);
      return new CacheRequestImpl(editor);
    } catch (IOException e) {
      abortQuietly(editor);
      return null;
    }
  }

The core is editor = cache.edit(key(response.request().url())); the corresponding code is in DiskLruCache.edit:

[okhttp3.internal.cache.DiskLruCache.java]
  synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException {
    initialize();
 
    checkNotClosed();
    validateKey(key);
    Entry entry = lruEntries.get(key);
    if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null
        || entry.sequenceNumber != expectedSequenceNumber)) {
      return null; // Snapshot is stale.
    }
    if (entry != null && entry.currentEditor != null) {
      return null; // another caller is already editing this cache entry
    }
    if (mostRecentTrimFailed || mostRecentRebuildFailed) {
      // The OS has become our enemy! If the trim job failed, it means we are storing more data than
      // requested by the user. Do not allow edits so we do not go over that limit any further. If
      // the journal rebuild failed, the journal writer will not be active, meaning we will not be
      // able to record the edit, causing file leaks. In both cases, we want to retry the clean up
      // so we can get out of this state!
      executor.execute(cleanupRunnable);
      return null;
    }
 
    // append a DIRTY record to the journal
    journalWriter.writeUtf8(DIRTY).writeByte(' ').writeUtf8(key).writeByte('\n');
    journalWriter.flush();
 
    if (hasJournalErrors) {
      return null; // Don't edit; the journal can't be written.
    }
 
    if (entry == null) {
      entry = new Entry(key);
      lruEntries.put(key, entry);
    }
    Editor editor = new Editor(entry);
    entry.currentEditor = editor;
    return editor;
  }

edit returns the editor for the corresponding cache entry. Next, the entry.writeTo(editor) call in Cache.put():

[okhttp3.Cache$Entry]
    public void writeTo(DiskLruCache.Editor editor) throws IOException {
      BufferedSink sink = Okio.buffer(editor.newSink(ENTRY_METADATA));
 
      sink.writeUtf8(url)
          .writeByte('\n');
      sink.writeUtf8(requestMethod)
          .writeByte('\n');
      sink.writeDecimalLong(varyHeaders.size())
          .writeByte('\n');
      for (int i = 0, size = varyHeaders.size(); i < size; i++) {
        sink.writeUtf8(varyHeaders.name(i))
            .writeUtf8(": ")
            .writeUtf8(varyHeaders.value(i))
            .writeByte('\n');
      }
 
      sink.writeUtf8(new StatusLine(protocol, code, message).toString())
          .writeByte('\n');
      sink.writeDecimalLong(responseHeaders.size() + 2)
          .writeByte('\n');
      for (int i = 0, size = responseHeaders.size(); i < size; i++) {
        sink.writeUtf8(responseHeaders.name(i))
            .writeUtf8(": ")
            .writeUtf8(responseHeaders.value(i))
            .writeByte('\n');
      }
      sink.writeUtf8(SENT_MILLIS)
          .writeUtf8(": ")
          .writeDecimalLong(sentRequestMillis)
          .writeByte('\n');
      sink.writeUtf8(RECEIVED_MILLIS)
          .writeUtf8(": ")
          .writeDecimalLong(receivedResponseMillis)
          .writeByte('\n');
 
      if (isHttps()) {
        sink.writeByte('\n');
        sink.writeUtf8(handshake.cipherSuite().javaName())
            .writeByte('\n');
        writeCertList(sink, handshake.peerCertificates());
        writeCertList(sink, handshake.localCertificates());
        // The handshake’s TLS version is null on HttpsURLConnection and on older cached responses.
        if (handshake.tlsVersion() != null) {
          sink.writeUtf8(handshake.tlsVersion().javaName())
              .writeByte('\n');
        }
      }
      sink.close();
    }

Its main job is to write the request's metadata into the entry's dirty file at index ENTRY_METADATA (0).

Finally, the return new CacheRequestImpl(editor); at the end of Cache.put():

[okhttp3.Cache$CacheRequestImpl]
private final class CacheRequestImpl implements CacheRequest {
    private final DiskLruCache.Editor editor;
    private Sink cacheOut;
    private Sink body;
    boolean done;
 
    public CacheRequestImpl(final DiskLruCache.Editor editor) {
      this.editor = editor;
      this.cacheOut = editor.newSink(ENTRY_BODY);
      this.body = new ForwardingSink(cacheOut) {
        @Override public void close() throws IOException {
          synchronized (Cache.this) {
            if (done) {
              return;
            }
            done = true;
            writeSuccessCount++;
          }
          super.close();
          editor.commit();
        }
      };
    }
 
    @Override public void abort() {
      synchronized (Cache.this) {
        if (done) {
          return;
        }
        done = true;
        writeAbortCount++;
      }
      Util.closeQuietly(cacheOut);
      try {
        editor.abort();
      } catch (IOException ignored) {
      }
    }
 
    @Override public Sink body() {
      return body;
    }
  }

The close and abort methods call editor.commit and editor.abort to update the journal; editor.commit also promotes the dirty file to a clean file, which becomes the stable, usable cache copy. The logic is in DiskLruCache.completeEdit:

[okhttp3.internal.cache.DiskLruCache.completeEdit]
  synchronized void completeEdit(Editor editor, boolean success) throws IOException {
    Entry entry = editor.entry;
    if (entry.currentEditor != editor) {
      throw new IllegalStateException();
    }
 
    // If this edit is creating the entry for the first time, every index must have a value.
    if (success && !entry.readable) {
      for (int i = 0; i < valueCount; i++) {
        if (!editor.written[i]) {
          editor.abort();
          throw new IllegalStateException("Newly created entry didn't create value for index " + i);
        }
        if (!fileSystem.exists(entry.dirtyFiles[i])) {
          editor.abort();
          return;
        }
      }
    }
 
    for (int i = 0; i < valueCount; i++) {
      File dirty = entry.dirtyFiles[i];
      if (success) {
        if (fileSystem.exists(dirty)) {
          File clean = entry.cleanFiles[i];
          fileSystem.rename(dirty, clean); // promote the dirty file to a clean file
          long oldLength = entry.lengths[i];
          long newLength = fileSystem.size(clean);
          entry.lengths[i] = newLength;
          size = size - oldLength + newLength;
        }
      } else {
        fileSystem.delete(dirty); // the edit failed: delete the dirty file
      }
    }
 
    redundantOpCount++;
    entry.currentEditor = null;
    // update the journal
    if (entry.readable | success) {
      entry.readable = true;
      journalWriter.writeUtf8(CLEAN).writeByte(' ');
      journalWriter.writeUtf8(entry.key);
      entry.writeLengths(journalWriter);
      journalWriter.writeByte('\n');
      if (success) {
        entry.sequenceNumber = nextSequenceNumber++;
      }
    } else {
      lruEntries.remove(entry.key);
      journalWriter.writeUtf8(REMOVE).writeByte(' ');
      journalWriter.writeUtf8(entry.key);
      journalWriter.writeByte('\n');
    }
    journalWriter.flush();
 
    if (size > maxSize || journalRebuildRequired()) {
      executor.execute(cleanupRunnable);
    }
  }

CacheRequestImpl implements the CacheRequest interface and is exposed to outside classes (mainly CacheInterceptor), which use it to write or update cached data.

3.8 Summary

To sum up, DiskLruCache has the following characteristics:

· LRU eviction via a LinkedHashMap

· A local journal of cache operations guarantees the cache's atomicity and availability, and the journal is periodically compacted so it doesn't bloat

· Each cache entry has two state copies: DIRTY and CLEAN. CLEAN is the stable, readable state, and all externally visible snapshots are CLEAN; DIRTY is the in-progress state for creation or update. Because creation and update touch only the DIRTY copy, reads and writes are separated

· Each cache entry has four files: two states (DIRTY, CLEAN) times two files per state, one for metadata and one for content

5 - OkHttp 3.7 Source Code Analysis (Part 5): Connection Pool

Overview: Next up is OkHttp's connection pool management, which is also a core part of OkHttp. By maintaining a connection pool, OkHttp reuses existing connections as much as possible, reducing the cost of establishing new network connections and thereby improving request efficiency.

The OkHttp 3.7 source code analysis series:

· OkHttp source analysis: overall architecture

· OkHttp source analysis: interceptors

· OkHttp source analysis: task queue

· OkHttp source analysis: cache strategy

· OkHttp source analysis: connection multiplexing


1. Background

1.1 The keep-alive mechanism

In HTTP/1.0, a request proceeds as follows:

[Figure: HTTP/1.0 opens and closes a TCP connection for every request]

The upside of this approach is simplicity, and requests do not interfere with each other. But it is nearly unusable for complex request patterns. For example, when a browser loads an HTML page, the page may reference dozens of resources, and typically most of them come from the same site. Under HTTP/1.0 this means establishing dozens of TCP connections, one per resource. Establishing a TCP connection takes a 3-way handshake, and releasing one takes a 2- or 4-segment close sequence; repeatedly creating and releasing connections severely hurts network efficiency and adds system overhead.

To solve this problem effectively, HTTP/1.1 introduced the Keep-Alive mechanism: when an HTTP exchange finishes transferring data, the TCP connection is not released immediately. If a new HTTP request arrives whose Host matches the previous request's, it can directly reuse the still-open TCP connection, saving the cost of tearing down and re-establishing TCP and reducing network latency:

[Figure: with HTTP/1.1 keep-alive, multiple requests share one TCP connection]

Modern browsers typically keep 6 to 8 keep-alive socket connections open at a time, holding each for a while before closing it. On the server side, software usually decides whether to close connections proactively based on load (file-descriptor limits, socket memory, timeouts, stack memory, thread counts, and so on).
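For reference, a minimal sketch of the headers involved. In HTTP/1.1 persistent connections are the default, so Connection: keep-alive is often implicit; the Keep-Alive response header with timeout/max parameters is a server-specific convention (Apache, for example), not something OkHttp relies on:

GET /index.html HTTP/1.1
Host: example.com
Connection: keep-alive
 
HTTP/1.1 200 OK
Content-Length: 1024
Connection: keep-alive
Keep-Alive: timeout=5, max=100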

1.2 HTTP/2

In HTTP/1.x, a client that wants several requests in flight at once must open several TCP connections, which increases network overhead. HTTP/1.x also does not compress request and response headers, producing unnecessary traffic, and it has no notion of resource priority, which leaves the underlying TCP connections underutilized. These are exactly the problems HTTP/2 set out to solve. In short, HTTP/2 addresses the following:

· Header compression: HTTP/2 compresses request and response headers with the HPACK format, cutting unnecessary traffic

· Request and response multiplexing: by introducing a new binary framing layer, HTTP/2 achieves full request and response multiplexing; client and server can break HTTP messages into independent frames, interleave them, and reassemble them at the other end

· Stream prioritization: once HTTP messages are split into many independent frames, the order in which client and server interleave and transmit frames from multiple streams becomes a key performance factor. To support this, the HTTP/2 standard lets every stream carry a weight and a dependency

· Flow control: HTTP/2 provides a set of simple building blocks that let client and server implement their own stream-level and connection-level flow control

All of HTTP/2's performance gains hinge on the new binary framing layer, which defines how HTTP messages are encapsulated and transferred between client and server:

[Figure: the HTTP/2 binary framing layer sits between the HTTP API and TCP]

HTTP/2 also introduces three new concepts:

· Stream: a logical, bidirectional byte stream over the TCP connection that corresponds to one request and its response. Each request the client issues opens a new stream, and all subsequent data for that request and its response flows over it

· Message: the sequence of frames that make up one request or response

· Frame: the smallest unit of data in HTTP/2 (a parsing sketch follows below)

How these concepts relate:

· All communication happens over a single TCP connection, which can carry any number of bidirectional streams

· Each stream has a unique identifier and optional priority information, and carries bidirectional messages

· Each message is one logical HTTP message (a request or a response) consisting of one or more frames

· The frame is the smallest unit of communication; it carries a specific kind of data, such as HTTP headers or a message payload. Frames from different streams can be interleaved and then reassembled using the stream identifier in each frame header

· Because every HTTP message is split into independent, interleavable frames, multiple requests and responses are effectively transmitted in parallel, much like time slicing in a multi-process environment

[Figure: frames from several streams interleaved on one HTTP/2 connection]
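To ground the frame concept, here is a minimal sketch (not OkHttp's Http2Reader) that parses the fixed 9-octet frame header defined in RFC 7540 §4.1: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
 
final class Http2FrameHeaderSketch {
  /** Reads and prints one HTTP/2 frame header from {@code in}. */
  static void readFrameHeader(InputStream in) throws IOException {
    DataInputStream data = new DataInputStream(in);
    int length = (data.readUnsignedByte() << 16)
        | (data.readUnsignedByte() << 8)
        | data.readUnsignedByte();              // 24-bit payload length
    int type = data.readUnsignedByte();         // e.g. 0x0 = DATA, 0x1 = HEADERS
    int flags = data.readUnsignedByte();        // e.g. END_STREAM, END_HEADERS
    int streamId = data.readInt() & 0x7FFFFFFF; // clear the reserved high bit
    System.out.printf("frame: length=%d type=%d flags=%d stream=%d%n",
        length, type, flags, streamId);
  }
}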

2. Using and analyzing the connection pool

Whether for HTTP/1.1's keep-alive mechanism or HTTP/2's multiplexing, the implementation needs a connection pool to maintain the underlying connections. Let's look at the connection pool implementation in OkHttp.

OkHttp manages its pool through ConnectionPool. First, its main members:

public final class ConnectionPool {
  private static final Executor executor = new ThreadPoolExecutor(0 /* corePoolSize */,
      Integer.MAX_VALUE /* maximumPoolSize */, 60L /* keepAliveTime */, TimeUnit.SECONDS,
      new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp ConnectionPool", true));
 
  /** The maximum number of idle connections for each address. */
  private final int maxIdleConnections;
  private final long keepAliveDurationNs;
  private final Runnable cleanupRunnable = new Runnable() {
    @Override public void run() {
        ......
    }
  };
 
  private final Deque<RealConnection> connections = new ArrayDeque<>();
  final RouteDatabase routeDatabase = new RouteDatabase();
  boolean cleanupRunning;
  ......
    
    /**
     * Returns a reusable connection that satisfies the request, or null if none exists.
     */
  RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    ......
  }
  
  /*
   * Removes a duplicate connection; mainly for multiplexing, where one address needs only one connection.
   */
  Socket deduplicate(Address address, StreamAllocation streamAllocation) {
    ......
    }
  
  /*
   * Adds a connection to the pool.
   */
  void put(RealConnection connection) {
      ......
  }
  
  /*
   * Wakes the cleanup thread when a connection becomes idle.
   */
  boolean connectionBecameIdle(RealConnection connection) {
      ......
  }
  
  /**
   * Scans the pool and evicts expired idle connections.
   */
  long cleanup(long now) {
    ......
  }
  
  /*
   * Prunes leaked allocations and marks stale connections.
   */
  private int pruneAndGetAllocationCount(RealConnection connection, long now) {
    ......
  }
}

Related concepts (a configuration sketch follows the list):

· Call: the wrapper around one HTTP request

· Connection/RealConnection: the wrapper around a physical connection; it keeps a reference count through its List<Reference<StreamAllocation>> of weak references

· StreamAllocation: introduced to manage the streams on a connection, while the connection in turn tracks its streams through a list of StreamAllocation references, decoupling connections from streams. For more on StreamAllocation see: okhttp源码学习笔记(二)-- 连接与连接管理

· connections: the Deque (double-ended queue) that serves as the container for pooled connections

· routeDatabase: a blacklist of Routes that failed to connect; whenever a connection attempt fails, the failed route is added to it
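Before diving into the implementation, a minimal usage sketch: the pool can be configured through OkHttpClient.Builder, although the defaults (5 idle connections, 5-minute keep-alive, matching the fields above) suit most applications:

import java.util.concurrent.TimeUnit;
 
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;
 
final class PoolConfigSketch {
  static OkHttpClient newClient() {
    // The values shown are OkHttp's defaults, spelled out for illustration.
    return new OkHttpClient.Builder()
        .connectionPool(new ConnectionPool(5 /* maxIdleConnections */,
            5 /* keepAliveDuration */, TimeUnit.MINUTES))
        .build();
  }
}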

2.1 Instantiation

First, how ConnectionPool is instantiated. Each OkHttpClient holds exactly one ConnectionPool, created during the OkHttpClient's own construction. Notably, ConnectionPool's methods are not exposed to callers directly; they are surfaced through OkHttpClient's Internal singleton:

public class OkHttpClient implements Cloneable, Call.Factory, WebSocket.Factory {
    static {
    Internal.instance = new Internal() {
      @Override public void addLenient(Headers.Builder builder, String line) {
        builder.addLenient(line);
      }
 
      @Override public void addLenient(Headers.Builder builder, String name, String value) {
        builder.addLenient(name, value);
      }
 
      @Override public void setCache(Builder builder, InternalCache internalCache) {
        builder.setInternalCache(internalCache);
      }
 
      @Override public boolean connectionBecameIdle(
          ConnectionPool pool, RealConnection connection) {
        return pool.connectionBecameIdle(connection);
      }
 
      @Override public RealConnection get(ConnectionPool pool, Address address,
          StreamAllocation streamAllocation, Route route) {
        return pool.get(address, streamAllocation, route);
      }
 
      @Override public boolean equalsNonHost(Address a, Address b) {
        return a.equalsNonHost(b);
      }
 
      @Override public Socket deduplicate(
          ConnectionPool pool, Address address, StreamAllocation streamAllocation) {
        return pool.deduplicate(address, streamAllocation);
      }
 
      @Override public void put(ConnectionPool pool, RealConnection connection) {
        pool.put(connection);
      }
 
      @Override public RouteDatabase routeDatabase(ConnectionPool connectionPool) {
        return connectionPool.routeDatabase;
      }
 
      @Override public int code(Response.Builder responseBuilder) {
        return responseBuilder.code;
      }
 
      @Override
      public void apply(ConnectionSpec tlsConfiguration, SSLSocket sslSocket, boolean isFallback) {
        tlsConfiguration.apply(sslSocket, isFallback);
      }
 
      @Override public HttpUrl getHttpUrlChecked(String url)
          throws MalformedURLException, UnknownHostException {
        return HttpUrl.getChecked(url);
      }
 
      @Override public StreamAllocation streamAllocation(Call call) {
        return ((RealCall) call).streamAllocation();
      }
 
      @Override public Call newWebSocketCall(OkHttpClient client, Request originalRequest) {
        return new RealCall(client, originalRequest, true);
      }
    };
     ......
}

The reason for this design is given in the Javadoc:

Escalate internal APIs in {@code okhttp3} so they can be used from OkHttp's implementation
packages. The only implementation of this interface is in {@link OkHttpClient}.

The only implementation of Internal lives in OkHttpClient; through it, OkHttpClient exposes these otherwise package-private APIs to OkHttp's internal implementation packages.
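As an aside, the pattern itself can be sketched in a few lines. The names below (InternalBridge, PublicBuilder) are invented for illustration; OkHttp's real version spans the okhttp3 and okhttp3.internal packages, which a single-file sketch cannot show:

// The bridge class: internal code calls InternalBridge.instance.code(builder)
// instead of touching the package-private field directly.
abstract class InternalBridge {
  static InternalBridge instance; // installed by PublicBuilder below
 
  abstract int code(PublicBuilder builder);
}
 
class PublicBuilder {
  int code; // package-private: invisible outside this package
 
  static {
    // Only code in this package can read the field, so PublicBuilder itself
    // installs the bridge, granting controlled access to everyone else.
    InternalBridge.instance = new InternalBridge() {
      @Override int code(PublicBuilder builder) {
        return builder.code;
      }
    };
  }
}

As in OkHttp, the bridge only becomes available once the public class's static initializer has run.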

2.2 Maintaining the connection pool

ConnectionPool keeps all current connections in a double-ended queue (deque). The main operations involved are as follows (see the observation sketch after this list):

· put: add a new connection

· get: fetch a connection from the pool

· evictAll: close all connections

· connectionBecameIdle: kick off the cleanup thread once a connection becomes idle

· deduplicate: remove duplicate multiplexed connections
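As promised, a rough way to observe the pool from the outside (a sketch assuming OkHttp 3.x and real network access; the URL is illustrative). ConnectionPool publicly exposes connectionCount() and idleConnectionCount():

import java.io.IOException;
 
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
 
final class PoolObservationSketch {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient();
    Request request = new Request.Builder().url("https://example.com/").build();
 
    Response first = client.newCall(request).execute();
    first.close(); // finish the body so the connection goes back to the pool as idle
 
    Response second = client.newCall(request).execute();
    second.close();
 
    // With keep-alive, both calls typically ran over the same connection:
    System.out.println("total=" + client.connectionPool().connectionCount()
        + " idle=" + client.connectionPool().idleConnectionCount());
  }
}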

2.2.1 StreamAllocation.findConnection

get is the most important method in ConnectionPool. StreamAllocation calls it inside findConnection to obtain a suitable connection for its stream; if none is available, a new connection is created. First, the logic of findConnection:

private RealConnection findConnection(int connectTimeout, int readTimeout, int writeTimeout,
                                        boolean connectionRetryEnabled) throws IOException {
    Route selectedRoute;
    synchronized (connectionPool) {
      if (released) throw new IllegalStateException("released");
      if (codec != null) throw new IllegalStateException("codec != null");
      if (canceled) throw new IOException("Canceled");
 
      // A StreamAllocation models the data flow of one Call, and a Call may issue several requests (redirects, authentication, ...), so for such follow-up requests the already-allocated connection is preferred
      RealConnection allocatedConnection = this.connection;
      if (allocatedConnection != null && !allocatedConnection.noNewStreams) {
        return allocatedConnection;
      }
 
      // Try to find a reusable connection in the pool
      Internal.instance.get(connectionPool, address, this, null);
      if (connection != null) {
        return connection;
      }
 
      selectedRoute = route;
    }
 
    // Resolve a route; a route is essentially a combination of proxy, IP address, and related parameters
    if (selectedRoute == null) {
      selectedRoute = routeSelector.next();
    }
 
    RealConnection result;
    synchronized (connectionPool) {
      if (canceled) throw new IOException("Canceled");
 
      // With a route resolved, try the pool once more; this mainly serves HTTP/2 domain de-sharding (connection coalescing)
      Internal.instance.get(connectionPool, address, this, selectedRoute);
      if (connection != null) return connection;
 
      // Create a new connection
      route = selectedRoute;
      refusedStreamCount = 0;
      result = new RealConnection(connectionPool, selectedRoute);
      // Bump the connection's stream allocation count so cleanup can track it
      acquire(result);
    }
 
    // Do TCP + TLS handshakes. This is a blocking operation.
    result.connect(connectTimeout, readTimeout, writeTimeout, connectionRetryEnabled);
    routeDatabase().connected(result.route());
 
    Socket socket = null;
    synchronized (connectionPool) {
      // Put the newly created connection into the pool
      Internal.instance.put(connectionPool, result);
 
      // If several multiplexed connections to the same address exist, close the extras and keep only one
      if (result.isMultiplexed()) {
        socket = Internal.instance.deduplicate(connectionPool, address, this);
        result = connection;
      }
    }
    closeQuietly(socket);
 
    return result;
  }

Its main logic breaks down into the following steps:

· Check whether the current StreamAllocation already holds a previously allocated connection; if so, use it directly

· Search the pool for a reusable connection; if one is found, return it

· Resolve a route, then search the pool once more with the route in hand; if a connection can now be reused, return it directly

· Otherwise create a new connection, bump its StreamAllocation count, and put it into the pool

· Check whether the pool now holds duplicate multiplexed connections to the same address, and if so deduplicate them

2.2.2 ConnectionPool.get

Next, the source of get:

[ConnectionPool.java]
  RealConnection get(Address address, StreamAllocation streamAllocation, Route route) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, route)) {
        streamAllocation.acquire(connection);
        return connection;
      }
    }
    return null;
  }

Its logic is simple: traverse the pool, and if a connection qualifies, bump its allocation count and return it. The key logic is in RealConnection.isEligible:

[RealConnection.java]
  /**
   * Returns true if this connection can carry a stream allocation to {@code address}. If non-null
   * {@code route} is the resolved route for a connection.
   */
  public boolean isEligible(Address address, Route route) {
    // If this connection is not accepting new streams, we're done.
    if (allocations.size() >= allocationLimit || noNewStreams) return false;
 
    // If the non-host fields of the address don't overlap, we're done.
    if (!Internal.instance.equalsNonHost(this.route.address(), address)) return false;
 
    // If the host exactly matches, we're done: this connection can carry the address.
    if (address.url().host().equals(this.route().address().url().host())) {
      return true; // This connection is a perfect match.
    }
 
    // At this point we don't have a hostname match. But we still be able to carry the request if
    // our connection coalescing requirements are met. See also:
    // https://hpbn.co/optimizing-application-delivery/#eliminate-domain-sharding
    // https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/
 
    // 1. This connection must be HTTP/2.
    if (http2Connection == null) return false;
 
    // 2. The routes must share an IP address. This requires us to have a DNS address for both
    // hosts, which only happens after route planning. We can't coalesce connections that use a
    // proxy, since proxies don't tell us the origin server's IP address.
    if (route == null) return false;
    if (route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (this.route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (!this.route.socketAddress().equals(route.socketAddress())) return false;
 
    // 3. This connection's server certificate's must cover the new host.
    if (route.address().hostnameVerifier() != OkHostnameVerifier.INSTANCE) return false;
    if (!supportsUrl(address.url())) return false;
 
    // 4. Certificate pinning must match the host.
    try {
      address.certificatePinner().check(address.url().host(), handshake().peerCertificates());
    } catch (SSLPeerUnverifiedException e) {
      return false;
    }
 
    return true; // The caller's address can be carried by this connection.
  }

· The connection has not reached its allocation limit

· The non-host fields of the address must match exactly

· If the host also matches, the connection is a perfect match and can be reused

· Even when the hosts differ, the connection may still be reused under HTTP/2 connection coalescing (domain de-sharding); for details see: https://hpbn.co/optimizing-application-delivery/#eliminate-domain-sharding

2.2.3 deduplicate

deduplicate targets the HTTP/2 case where several multiplexed connections to the same endpoint were created concurrently. If the current connection is HTTP/2, all requests to that site should share a single TCP connection:

[ConnectionPool.java]
  /**
   * Replaces the connection held by {@code streamAllocation} with a shared connection if possible.
   * This recovers when multiple multiplexed connections are created concurrently.
   */
  Socket deduplicate(Address address, StreamAllocation streamAllocation) {
    assert (Thread.holdsLock(this));
    for (RealConnection connection : connections) {
      if (connection.isEligible(address, null)
          && connection.isMultiplexed()
          && connection != streamAllocation.connection()) {
        return streamAllocation.releaseAndAcquire(connection);
      }
    }
    return null;
  }

put and evictAll are comparatively simple, so their full code is not repeated here; consult the source for details.
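That said, put is worth a quick paraphrase, because it is also where the cleanup task of the next section gets kicked off. The sketch below is from memory of OkHttp 3.x; treat it as approximate and check the source for the exact code:

void put(RealConnection connection) {
  assert (Thread.holdsLock(this)); // callers must already hold the pool lock
  if (!cleanupRunning) {
    // First connection since the pool went quiet: restart the cleanup loop.
    cleanupRunning = true;
    executor.execute(cleanupRunnable);
  }
  connections.add(connection);
}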

2.3 Automatic reclamation

The pool reclaims sockets automatically. The criterion is whether a RealConnection's List<Reference<StreamAllocation>> of weak references has dropped to zero. ConnectionPool runs a dedicated cleanup task, cleanupRunnable, which is triggered at two points:

· when a new connection is put into the pool

· when connectionBecameIdle is invoked

Its code is as follows:

while (true) {
  // run one cleanup pass and return the time until the next one is due
  long waitNanos = cleanup(System.nanoTime());
  if (waitNanos == -1) return;
  if (waitNanos > 0) {
    synchronized (ConnectionPool.this) {
      try {
        // wait within the timeout, releasing the lock and the time slice
        ConnectionPool.this.wait(TimeUnit.NANOSECONDS.toMillis(waitNanos));
      } catch (InterruptedException ignored) {
      }
    }
  }
}

This infinite loop is effectively a blocking cleanup task: it performs one cleanup pass, receives the interval until the next pass is due, and then calls wait(timeout) to release the lock and its time slice. When the wait expires, it cleans again and computes the next interval, and so on.

Next, the cleanup function:

[ConnectionPool.java]
long cleanup(long now) {
  int inUseConnectionCount = 0;
  int idleConnectionCount = 0;
  RealConnection longestIdleConnection = null;
  long longestIdleDurationNs = Long.MIN_VALUE;
 
  // traverse every RealConnection in the Deque, pruning leaked allocations
  synchronized (this) {
    for (RealConnection connection : connections) {
      // count the StreamAllocation references still held by this connection
      if (pruneAndGetAllocationCount(connection, now) > 0) {
        inUseConnectionCount++;
        continue;
      }
 
      idleConnectionCount++;
 
      // selection-style scan: track the connection that has been idle the longest
      long idleDurationNs = now - connection.idleAtNanos;
      if (idleDurationNs > longestIdleDurationNs) {
        longestIdleDurationNs = idleDurationNs;
        longestIdleConnection = connection;
      }
    }
 
    if (longestIdleDurationNs >= this.keepAliveDurationNs
        || idleConnectionCount > this.maxIdleConnections) {
      // if the longest idle time has reached the keep-alive limit (5 minutes by default),
      // or there are more idle connections than maxIdleConnections (5 by default),
      // remove this connection from the Deque
      connections.remove(longestIdleConnection);
    } else if (idleConnectionCount > 0) {
      // otherwise return the time until this connection expires, used as the next cleanup wait;
      // the countdown is based on the idle timestamp recorded when the connection became idle
      return keepAliveDurationNs - longestIdleDurationNs;
    } else if (inUseConnectionCount > 0) {
      // all connections are active; clean again after a full keep-alive period (5 minutes)
      return keepAliveDurationNs;
    } else {
      // no connections at all; exit the loop
      cleanupRunning = false;
      return -1;
    }
  }
 
  // close the evicted connection and return 0, i.e. rescan immediately
  closeQuietly(longestIdleConnection.socket());
  return 0;
}

Its basic logic (a worked example follows the list):

· Traverse every connection in the pool, pruning leaked StreamAllocation references

· If the longest-idle connection has been idle for at least the keep-alive duration (5 minutes by default), or there are more idle connections than maxIdleConnections (5 by default), remove that connection from the Deque and close it, then return 0, so the loop executes wait(0) and rescans immediately (note that the condition in the code is an OR, not an AND)

· Otherwise, if there is at least one idle connection, return the remaining time until the longest-idle connection expires, to be used as the next wait

· If all connections are in use, return the default keep-alive duration, i.e. clean again in 5 minutes

· If the pool is empty, clear cleanupRunning and return -1 to exit the loop

pruneAndGetAllocationCount marks and finds inactive connections:

[ConnectionPool.java]
// A form of reference counting: once every reference has been cleared, the connection can be reclaimed immediately
private int pruneAndGetAllocationCount(RealConnection connection, long now) {
  // the list of weak references to StreamAllocation
  List<Reference<StreamAllocation>> references = connection.allocations;
  // walk the weak-reference list
  for (int i = 0; i < references.size(); ) {
    Reference<StreamAllocation> reference = references.get(i);
    // referent still alive: the allocation is in use, keep it and continue
    // the referent is cleared by `release` (see `connectionBecameIdle` above)
    if (reference.get() != null) {
      // plain reference counting
      i++;
      continue;
    }
 
    // otherwise drop the cleared reference
    references.remove(i);
    connection.noNewStreams = true;
 
    // if no streams remain, backdate idleAtNanos by a full keep-alive period so the connection is evicted on the next pass
    if (references.isEmpty()) {
      connection.idleAtNanos = now - keepAliveDurationNs;
      return 0;
    }
  }
 
  return references.size();
}


OkHttp's connection pool, then, manages connections with reference counting plus a mark-and-sweep style cleanup, so that unused connections are reclaimed while a set of healthy keep-alive connections is retained. This is the key to the pool's efficiency.
