OkHttp 4 Source Code Analysis
HuiMeng, 2022-03-29
HTTP/1.0, HTTP/1.1 and HTTP/2.0 feature comparison - SegmentFault
Java thread pool analysis - Gityuan's blog
https://cloud.tencent.com/developer/article/1634914
https://www.cnblogs.com/chenqf/p/6386163.html
I. Introduction to OkHttp
OkHttp is an open-source networking library contributed by Square and is currently the most widely used network framework on Android. Since Android 4.4, the underlying implementation of HttpURLConnection has been based on OkHttp.
It supports HTTP/2, allowing all requests to the same host to share a single socket;
For non-HTTP/2 connections, a connection pool reduces request latency;
It requests GZip-compressed data by default;
Response caching avoids repeating identical network requests.
II. Usage
Synchronous request example:
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
var okHttpClient = OkHttpClient.Builder().build() // build an OkHttpClient
var request = Request.Builder().url("https://www.baidu.com") // build a Request
.cacheControl(CacheControl.FORCE_CACHE)
.build()
var call = okHttpClient.newCall(request) // create a Call
val result = call.execute() // execute the request synchronously
println(result.isSuccessful)
result.close()
}
}
Asynchronous request example:
String url = "http://wwww.baidu.com";
OkHttpClient okHttpClient = new OkHttpClient();
final Request request = new Request.Builder()
.url(url)
.get() // GET is the default method, so this line is optional
.build();
Call call = okHttpClient.newCall(request);
call.enqueue(new Callback() {
@Override
public void onFailure(Call call, IOException e) {
Log.d(TAG, "onFailure: ");
}
@Override
public void onResponse(Call call, Response response) throws IOException {
Log.d(TAG, "onResponse: " + response.body().string());
}
});
III. Call Flow
At minimum, an OkHttp request only touches OkHttpClient, Request, Call and Response, but the framework does a large amount of work internally.
Most of that logic is concentrated in the interceptors, but before a request reaches the interceptors it has to be scheduled by the dispatcher.
- Dispatcher: maintains the request queues and the thread pool internally and schedules requests;
- Interceptors: carry out the whole request process.
IV. The Dispatcher
1. Dispatcher: asynchronous request workflow
newCall:
newCall() is simple: it just creates and returns a RealCall object; Call itself is an interface.
override fun newCall(request: Request): Call = RealCall(this, request, forWebSocket = false)
The Call interface:
interface Call : Cloneable {
/** Returns the original request that initiated this call. */
fun request(): Request
@Throws(IOException::class)
fun execute(): Response
fun enqueue(responseCallback: Callback)
fun cancel()
fun isExecuted(): Boolean
fun isCanceled(): Boolean
fun timeout(): Timeout
public override fun clone(): Call
fun interface Factory {
fun newCall(request: Request): Call
}
}
The request is then issued synchronously or asynchronously via call.execute() / call.enqueue(). The synchronous path is the simpler of the two.
RealCall:
// Synchronous request
override fun execute(): Response {
check(executed.compareAndSet(false, true)) { "Already Executed" }
timeout.enter()
callStart()
try {
client.dispatcher.executed(this)
return getResponseWithInterceptorChain()
} finally {
client.dispatcher.finished(this)
}
}
// Asynchronous request
override fun enqueue(responseCallback: Callback) {
check(executed.compareAndSet(false, true)) { "Already Executed" }
callStart()
client.dispatcher.enqueue(AsyncCall(responseCallback))
}
The code above shows the synchronous and the asynchronous request paths. Note that both of them call check(), which verifies whether this RealCall has already been executed (in other words, already used); if the same RealCall is used a second time, an exception is thrown.
private val executed = AtomicBoolean()
@kotlin.internal.InlineOnly
public inline fun check(value: Boolean, lazyMessage: () -> Any): Unit {
contract {
returns() implies value
}
if (!value) {
val message = lazyMessage()
throw IllegalStateException(message.toString())
}
}
As the code shows, an AtomicBoolean is used to mark whether the call has already been executed.
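As a quick illustration of this guard (a minimal sketch; the URL is arbitrary, and Call.clone() from the interface shown above is the intended way to obtain a fresh, executable copy):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("https://www.baidu.com").build()

    val call = client.newCall(request)
    call.execute().close() // first execution flips the AtomicBoolean to true

    // A second execute() on the same Call fails the check() above and throws
    // IllegalStateException("Already Executed").
    runCatching { call.execute() }.onFailure { println(it.message) }

    // Call.clone() returns a fresh, not-yet-executed copy of the same request.
    call.clone().execute().close()
}
```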
Synchronous requests are straightforward: the request is made immediately on the calling thread, without the more involved machinery of the asynchronous path, so the analysis below focuses on asynchronous requests:
// Asynchronous request
override fun enqueue(responseCallback: Callback) {
check(executed.compareAndSet(false, true)) { "Already Executed" }
callStart() // Note 1: a side path, analysed later (it notifies event listeners)
client.dispatcher.enqueue(AsyncCall(responseCallback)) // main path
}
AsyncCall
Main path: client.dispatcher.enqueue(AsyncCall(responseCallback)), where client is the OkHttpClient and dispatcher is its dispatcher; that is, the dispatcher of the OkHttpClient we created is asked to enqueue an AsyncCall. A quick look at AsyncCall: it is an inner class of RealCall and implements Runnable, so it overrides run(). From that we can already guess that the AsyncCall will be handed to a thread pool and that the time-consuming part of the request happens inside its run() method.
internal inner class AsyncCall(
private val responseCallback: Callback
) : Runnable {
@Volatile var callsPerHost = AtomicInteger(0)
private set
fun reuseCallsPerHostFrom(other: AsyncCall) {
this.callsPerHost = other.callsPerHost
}
val host: String
get() = originalRequest.url.host
val request: Request
get() = originalRequest
val call: RealCall
get() = this@RealCall
/**
* Attempt to enqueue this async call on [executorService]. This will attempt to clean up
* if the executor has been shut down by reporting the call as failed.
*/
fun executeOn(executorService: ExecutorService) {
client.dispatcher.assertThreadDoesntHoldLock()
var success = false
try {
executorService.execute(this)
success = true
} catch (e: RejectedExecutionException) {
val ioException = InterruptedIOException("executor rejected")
ioException.initCause(e)
noMoreExchanges(ioException)
responseCallback.onFailure(this@RealCall, ioException)
} finally {
if (!success) {
client.dispatcher.finished(this) // This call is no longer running!
}
}
}
override fun run() {
threadName("OkHttp ${redactedUrl()}") {
var signalledCallback = false
timeout.enter()
try {
// perform the request
val response = getResponseWithInterceptorChain()
signalledCallback = true
responseCallback.onResponse(this@RealCall, response)
} catch (e: IOException) {
if (signalledCallback) {
// Do not signal the callback twice!
Platform.get().log("Callback failure for ${toLoggableString()}", Platform.INFO, e)
} else {
responseCallback.onFailure(this@RealCall, e)
}
} catch (t: Throwable) {
cancel()
if (!signalledCallback) {
val canceledException = IOException("canceled due to $t")
canceledException.addSuppressed(t)
responseCallback.onFailure(this@RealCall, canceledException)
}
throw t
} finally {
client.dispatcher.finished(this)
}
}
}
}
dispatcher
The Dispatcher holds three queues: asynchronous calls waiting to run, asynchronous calls currently running, and synchronous calls currently running. Leaving synchronous requests aside for now, why do asynchronous requests need both a ready queue and a running queue? When OkHttp receives a large number of requests it cannot start them all at once, because doing so would consume a lot of CPU and memory. OkHttp therefore enforces a limit on the number of concurrent requests and uses the two queues to make requests wait their turn.
Dispatcher.kt
/** Ready async calls in the order they'll be run. */
// asynchronous calls waiting to run
private val readyAsyncCalls = ArrayDeque<AsyncCall>()
// asynchronous calls currently running
/** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
private val runningAsyncCalls = ArrayDeque<AsyncCall>()
// synchronous calls currently running
/** Running synchronous calls. Includes canceled calls that haven't finished yet. */
private val runningSyncCalls = ArrayDeque<RealCall>()
constructor(executorService: ExecutorService) : this() {
this.executorServiceOrNull = executorService
}
dispatcher.enqueue
Dispatcher.kt
internal fun enqueue(call: AsyncCall) {
synchronized(this) {
// First add the AsyncCall to readyAsyncCalls, i.e. the request is first placed in the Dispatcher's ready queue
readyAsyncCalls.add(call)
// Mutate the AsyncCall so that it shares the AtomicInteger of an existing running call to
// the same host.
if (!call.call.forWebSocket) { // a plain HTTP request
val existingCall = findExistingCallWithHost(call.host) // Note 1: look in readyAsyncCalls and runningAsyncCalls for a previously submitted AsyncCall with exactly the same host
if (existingCall != null) call.reuseCallsPerHostFrom(existingCall) // Note 2: make AsyncCalls for the same host share the same AtomicInteger callsPerHost
}
}
promoteAndExecute() // Note 3
}
Note 1, findExistingCallWithHost: search readyAsyncCalls and runningAsyncCalls for a previously submitted AsyncCall whose host is identical to that of the new request; if one is found, that existing call is returned.
private fun findExistingCallWithHost(host: String): AsyncCall? {
for (existingCall in runningAsyncCalls) {
if (existingCall.host == host) return existingCall
}
for (existingCall in readyAsyncCalls) {
if (existingCall.host == host) return existingCall
}
return null
}
fun reuseCallsPerHostFrom(other: AsyncCall) {
this.callsPerHost = other.callsPerHost
}
In the code above, if (existingCall != null) call.reuseCallsPerHostFrom(existingCall) means: if an AsyncCall with the same host is found in readyAsyncCalls or runningAsyncCalls, the new AsyncCall shares the existing call's AtomicInteger callsPerHost. Why this is done is explained below.
Next comes the promoteAndExecute method at Note 3. This method is the key step, because nothing in the flow so far has actually performed a network request.
Dispatcher.kt
private fun promoteAndExecute(): Boolean {
this.assertThreadDoesntHoldLock()
val executableCalls = mutableListOf<AsyncCall>()
val isRunning: Boolean
synchronized(this) {
val i = readyAsyncCalls.iterator()
// iterate over the async calls that are waiting to run
while (i.hasNext()) {
val asyncCall = i.next()
// the number of running async calls must not exceed 64
if (runningAsyncCalls.size >= this.maxRequests) break // Max capacity.
// no more than 5 concurrent calls to the same host
if (asyncCall.callsPerHost.get() >= this.maxRequestsPerHost) continue // Host max capacity.
i.remove() // Note 1: this call is about to make a real network request, so remove it from readyAsyncCalls
asyncCall.callsPerHost.incrementAndGet() // Note 2: increment callsPerHost
executableCalls.add(asyncCall) // Note 3: the set of calls that are about to run
runningAsyncCalls.add(asyncCall) // Note 4: add this AsyncCall to runningAsyncCalls
}
isRunning = runningCallsCount() > 0
}
// Note 5: walk the list of calls to start and call asyncCall.executeOn for each of them
for (i in 0 until executableCalls.size) {
val asyncCall = executableCalls[i]
asyncCall.executeOn(executorService)
}
return isRunning
}
As shown above, the waiting calls in readyAsyncCalls are iterated. If the number of running asynchronous calls has reached 64 (this limit is configurable), the loop breaks, which caps OkHttp's total number of concurrent requests. callsPerHost is then checked: if the number of in-flight requests to this host has reached 5 (also configurable), the loop continues to the next call. This prevents a flood of requests to one host from starving requests to other hosts, and it also keeps the client from opening too many simultaneous requests against a single server and overloading it.
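Both limits are mutable properties on Dispatcher. A small sketch of tuning them (the values are arbitrary examples; maxRequests and maxRequestsPerHost are the real OkHttp 4 Dispatcher properties):

```kotlin
import okhttp3.Dispatcher
import okhttp3.OkHttpClient

fun buildClientWithCustomLimits(): OkHttpClient {
    val dispatcher = Dispatcher().apply {
        maxRequests = 32        // overall cap on concurrently running async calls (default 64)
        maxRequestsPerHost = 8  // per-host cap, enforced via callsPerHost (default 5)
    }
    return OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build()
}
```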
① At Note 1 this AsyncCall is about to make its network request, so it is first removed from the readyAsyncCalls queue; ② at Note 2 its callsPerHost counter is incremented; ③ at Note 3 it is added to a temporary list of calls that are about to run; ④ at Note 4 it is added to runningAsyncCalls.
At Note 5 that list of calls to start is traversed and asyncCall.executeOn(executorService) is invoked for each one, where executorService is a thread pool.
fun executeOn(executorService: ExecutorService) {
client.dispatcher.assertThreadDoesntHoldLock()
var success = false
try {
executorService.execute(this) // Note 1: submit this AsyncCall (a Runnable); the pool will call its run() method
success = true
} catch (e: RejectedExecutionException) { // handle rejection by the executor
val ioException = InterruptedIOException("executor rejected")
ioException.initCause(e)
noMoreExchanges(ioException)
responseCallback.onFailure(this@RealCall, ioException)
} finally {
if (!success) {
client.dispatcher.finished(this) // This call is no longer running!
}
}
}
At Note 1 above, the current AsyncCall (a Runnable) is submitted to the executor, which eventually invokes its run() method. Let's look at run():
AsyncCall
override fun run() {
threadName("OkHttp ${redactedUrl()}") {
var signalledCallback = false
timeout.enter()
try {
// Note 1: the request is actually performed and completed here
val response = getResponseWithInterceptorChain()
signalledCallback = true
responseCallback.onResponse(this@RealCall, response)
} catch (e: IOException) {
if (signalledCallback) {
// Do not signal the callback twice!
Platform.get().log("Callback failure for ${toLoggableString()}", Platform.INFO, e)
} else {
responseCallback.onFailure(this@RealCall, e)
}
} catch (t: Throwable) {
cancel()
if (!signalledCallback) {
val canceledException = IOException("canceled due to $t")
canceledException.addSuppressed(t)
responseCallback.onFailure(this@RealCall, canceledException)
}
throw t
} finally {
// Note 2: the dispatcher's finished() method is called here
client.dispatcher.finished(this)
}
}
}
Here we only follow the dispatcher's part of the flow. At Note 1 the request is performed (the details come later); at Note 2, once the request has completed, client.dispatcher.finished(this) is executed, where dispatcher is the Dispatcher object and this is the AsyncCall:
Dispatcher
internal fun finished(call: AsyncCall) {
call.callsPerHost.decrementAndGet() // decrement this AsyncCall's callsPerHost
finished(runningAsyncCalls, call)
}
private fun <T> finished(calls: Deque<T>, call: T) {
val idleCallback: Runnable?
synchronized(this) {
if (!calls.remove(call)) throw AssertionError("Call wasn't in-flight!")
idleCallback = this.idleCallback
}
val isRunning = promoteAndExecute() // promoteAndExecute is called again here
if (!isRunning && idleCallback != null) {
idleCallback.run()
}
}
The thread pool:
When a task is submitted to a thread pool via execute(Runnable):
If the thread count is below corePoolSize, a new (core) thread is created to handle the task;
If the thread count is at or above corePoolSize, the task is offered to the work queue; if that fails:
with fewer than maximumPoolSize threads, a new thread is created to run the task;
with maximumPoolSize threads already running, the RejectedExecutionHandler rejection policy is applied.
The thread pool inside the dispatcher:
Note that this pool's work queue is a SynchronousQueue, a queue with no capacity: offering a new task to it fails, so as long as the thread count is below maximumPoolSize a new thread is created to run the task immediately (only at maximumPoolSize would the RejectedExecutionHandler kick in). This queue choice maximises OkHttp's request concurrency.
executorServiceOrNull = ThreadPoolExecutor(0, Int.MAX_VALUE, 60, TimeUnit.SECONDS,
SynchronousQueue(), threadFactory("$okHttpName Dispatcher", false))
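These constructor arguments are essentially those of Executors.newCachedThreadPool(): no core threads, an unbounded maximum, a 60-second idle timeout and a capacity-less SynchronousQueue. A small stand-alone sketch (not OkHttp code) of why tasks are never queued:

```kotlin
import java.util.concurrent.SynchronousQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit

fun main() {
    // Same shape as the dispatcher's executor: 0 core threads, unbounded max,
    // 60s keep-alive, and a SynchronousQueue with no capacity.
    val executor = ThreadPoolExecutor(
        0, Int.MAX_VALUE, 60L, TimeUnit.SECONDS, SynchronousQueue()
    )
    // offer() on a SynchronousQueue only succeeds if a thread is already waiting
    // to take, so each submitted task either reuses an idle worker or causes a
    // new thread to be created immediately -- tasks never sit in a queue.
    repeat(3) { i ->
        executor.execute { println("task $i on ${Thread.currentThread().name}") }
    }
    executor.shutdown()
}
```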
Synchronous requests
Now a brief look at synchronous requests:
RealCall:
// Synchronous request
override fun execute(): Response {
check(executed.compareAndSet(false, true)) { "Already Executed" }
timeout.enter()
callStart()
try {
client.dispatcher.executed(this) // Note 1: add this synchronous call to the runningSyncCalls queue
return getResponseWithInterceptorChain() // Note 2: perform the request
} finally {
client.dispatcher.finished(this) // Note 3
}
}
Quite straightforward: the synchronous call is added to the runningSyncCalls queue (the synchronous request queue), the request is performed, and then client.dispatcher.finished(this) is called.
/** Used by [Call.execute] to signal completion. */
internal fun finished(call: RealCall) {
finished(runningSyncCalls, call) // remove this synchronous call from the runningSyncCalls queue
}
The code above removes the synchronous call from the runningSyncCalls queue. Note that when a synchronous request finishes it also goes through the two-argument finished() overload, so the promoteAndExecute() method we saw for asynchronous requests is invoked as well.
private fun <T> finished(calls: Deque<T>, call: T) {
val idleCallback: Runnable?
synchronized(this) {
if (!calls.remove(call)) throw AssertionError("Call wasn't in-flight!")
idleCallback = this.idleCallback
}
val isRunning = promoteAndExecute()
if (!isRunning && idleCallback != null) {
idleCallback.run()
}
}
Comparing the asynchronous and synchronous paths, we can see that the actual request is performed by getResponseWithInterceptorChain() in both cases: once dispatching is done, getResponseWithInterceptorChain() carries out the real request, and inside it lives the most important part of OkHttp: the interceptors.
V. Interceptors
Interceptors process the request and the response during OkHttp's request flow. Below is the official sample code:
class LoggingInterceptor implements Interceptor {
@Override public Response intercept(Interceptor.Chain chain) throws IOException {
Request request = chain.request();
long t1 = System.nanoTime();
logger.info(String.format("Sending request %s on %s%n%s",
request.url(), chain.connection(), request.headers()));
Response response = chain.proceed(request);
long t2 = System.nanoTime();
logger.info(String.format("Received response for %s in %.1fms%n%s",
response.request().url(), (t2 - t1) / 1e6d, response.headers()));
return response;
}
}
OkHttp's interceptors form a chain of responsibility. Each interceptor in the chain processes the request as it sees fit, hands it to the next interceptor via chain.proceed(request) in the code above, waits for that next interceptor to return a response, post-processes the result, and in the end a response is returned to the original caller.
A brief description of what the five built-in interceptors do:
RetryAndFollowUpInterceptor: retries on errors and follows redirects
BridgeInterceptor: the bridge between the application layer and the network layer; in the code it mainly adds request headers to the request and strips response headers from the response
CacheInterceptor: handles request and response caching
ConnectInterceptor: establishes the connection to the server
CallServerInterceptor: the last interceptor in the chain; it sends the final request and hands the resulting response back to the interceptors before it
Besides these five built-in interceptors, we can also add our own:
internal fun getResponseWithInterceptorChain(): Response {
// Build a full stack of interceptors.
val interceptors = mutableListOf<Interceptor>()
interceptors += client.interceptors // Note 1: user-defined application interceptors
interceptors += RetryAndFollowUpInterceptor(client)
interceptors += BridgeInterceptor(client.cookieJar)
interceptors += CacheInterceptor(client.cache)
interceptors += ConnectInterceptor
if (!forWebSocket) {
interceptors += client.networkInterceptors // Note 2: user-defined network interceptors
}
interceptors += CallServerInterceptor(forWebSocket)
From the code above we can see that, besides OkHttp's five built-in interceptors, custom interceptors can be added at the two points shown: interceptors added with addInterceptor() go in first (Note 1), while interceptors added with addNetworkInterceptor() go in later (Note 2). In other words, addInterceptor() interceptors sit at the very front of the chain, whereas addNetworkInterceptor() interceptors sit after ConnectInterceptor and before CallServerInterceptor. Note that networkInterceptors are only applied to plain HTTP calls, not to WebSockets.
Here is an example of adding our own interceptors:
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
var okHttpClient = OkHttpClient.Builder()
.addInterceptor(object : Interceptor { // add an ordinary (application) interceptor
override fun intercept(chain: Interceptor.Chain): Response {
TODO("Not yet implemented")
}
}).addNetworkInterceptor(object : Interceptor { // add a custom network interceptor
override fun intercept(chain: Interceptor.Chain): Response {
TODO("Not yet implemented")
}
})
.build()
var request = Request.Builder().url("https://www.baidu.com")
.cacheControl(CacheControl.FORCE_CACHE)
.build()
var call = okHttpClient.newCall(request)
val result = call.execute()
println(result.isSuccessful)
result.close()
}
}
1. How the interceptor chain executes
A chain of responsibility, as its name suggests, links several nodes together; each node is an object, and each object directly or indirectly references the next one, and the chain stops at whichever node actually handles the event passed down from the head.
An event may be handled by any of several objects, but which object handles it is decided dynamically at run time: when the handler is not known in advance, the request is submitted to a group of objects and the handlers are selected dynamically.
The chain-of-responsibility design pattern
It is a behavioural object pattern that builds a chain of receiver objects for a request and lets each one filter and handle its own concern.
The handlers on the chain process the request; the client only needs to send the request into the chain and does not care how it is processed or how it is passed along, so the pattern decouples the sender of a request from its handlers.
2. Imitating OkHttp's chain of responsibility
Below is a small imitation of OkHttp's design:
interface Interceptor {
fun intercept(chain: Chain): String
}
import java.util.List;
public class Chain {
private List<Interceptor> interceptors; // the interceptors we added to the chain
private int index; // index of the interceptor to take from the list next
public String request;
public Chain(List<Interceptor> interceptors, int index, String request) {
this.interceptors = interceptors;
this.index = index;
this.request = request;
}
public Chain(List<Interceptor> interceptors, int index) {
this.interceptors = interceptors;
this.index = index;
}
public String processd(String request) { // receive the request
if (index >= interceptors.size()) {
throw new AssertionError();
}
Chain chain = new Chain(interceptors, index + 1, request); // build a new Chain; note index + 1, pointing at the next interceptor
Interceptor interceptor = interceptors.get(index); // take the interceptor at the current position
return interceptor.intercept(chain); // run this interceptor; its result is handed back to the interceptor one level up
}
}
Five interceptor classes follow; each implements the Interceptor interface.
/**
 * The first to receive the request,
 * the last to receive the response.
 */
class RetryAndFollowUpInterceptor : Interceptor {
override fun intercept(chain: Chain): String {
// do our own work before calling the next interceptor
println("RetryAndFollowUpInterceptor: start")
// run the next interceptor
var result = chain.processd(chain.request + " ==> through RetryAndFollowUpInterceptor")
// add our own touch after the result comes back
println("RetryAndFollowUpInterceptor: end")
return "$result ==> through RetryAndFollowUpInterceptor"
}
}
class BridgeInterceptor : Interceptor {
override fun intercept(chain: Chain): String {
println("BridgeInterceptor: start")
var result = chain.processd(chain.request + " ==> through BridgeInterceptor")
println("BridgeInterceptor: end")
return "$result ==> through BridgeInterceptor"
}
}
class CacheInterceptor : Interceptor {
override fun intercept(chain: Chain): String {
println("CacheInterceptor: start")
var result = chain.processd(chain.request + " ==> through CacheInterceptor")
println("CacheInterceptor: end")
return "$result ==> through CacheInterceptor"
}
}
class ConnectInterceptor : Interceptor {
override fun intercept(chain: Chain): String {
println("ConnectInterceptor: start")
var result = chain.processd(chain.request + " ==> through ConnectInterceptor")
println("ConnectInterceptor: end")
return "$result ==> through ConnectInterceptor"
}
}
class CallServerInterceptor : Interceptor {
override fun intercept(chain: Chain): String {
println("CallServerInterceptor: start")
println("=== sending the request ===")
println("CallServerInterceptor: end")
return chain.request + " ==> through CallServerInterceptor\nHTTP response ==> through CallServerInterceptor"
}
}
The main function:
fun main(args: Array<String>) {
var interceptors = ArrayList<Interceptor>()
interceptors.add(RetryAndFollowUpInterceptor())
interceptors.add(BridgeInterceptor())
interceptors.add(CacheInterceptor())
interceptors.add(ConnectInterceptor())
interceptors.add(CallServerInterceptor())
// the chain object
var chain = Chain(interceptors, 0)
println(chain.processd("HTTP request"))
}
From this example we can see how, in our imitation, the request travels from the earlier interceptors down through the deeper ones; once the deepest interceptor has handled it, the result travels back up through the shallower interceptors layer by layer. The real OkHttp chain works the same way: the request is passed down from interceptor to interceptor, and once the deepest interceptor has produced a response, that response propagates back up through the chain.
3. Overview of the five OkHttp interceptors
1. Retry interceptor: before handing the request on (to the next interceptor) it checks whether the user has cancelled the call; after getting the result back it decides from the response code whether a redirect is needed, and if so it runs the whole interceptor chain again.
2. Bridge interceptor: before handing on, it adds the headers required by the HTTP protocol (e.g. Host) plus some default behaviour (e.g. GZIP compression); after getting the result, it invokes the cookie-saving hook and decompresses GZIP data.
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
var okHttpClient = OkHttpClient.Builder()
.cookieJar(object : CookieJar { // install a CookieJar
override fun saveFromResponse(url: HttpUrl, cookies: List<Cookie>) {
TODO("Not yet implemented")
// called when a response is received; based on the HttpUrl and List<Cookie> you decide whether to persist the cookies for this url
}
override fun loadForRequest(url: HttpUrl): List<Cookie> {
TODO("Not yet implemented")
// called when a request is sent; based on the url you decide which previously saved cookies to attach to this request
}
})
.build()
var request = Request.Builder().url("https://www.baidu.com")
.cacheControl(CacheControl.FORCE_CACHE)
.build()
var call = okHttpClient.newCall(request)
val result = call.execute()
println(result.isSuccessful)
result.close()
}
}
3. Cache interceptor: as the name suggests, before handing on it reads the cache and decides whether to use it; after getting the result it decides whether to cache it.
4. Connect interceptor: before handing on, it finds or creates a connection and obtains the corresponding socket streams; it does no extra work on the way back.
5. Call-server interceptor: performs the actual communication with the server, writing the request data and parsing the response.
Note the difference in effect between a custom application interceptor and a custom network interceptor; the sketch below makes the difference visible.
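A hedged sketch of that difference (the URL and log prefixes are arbitrary): an interceptor added with addInterceptor() runs exactly once per call and always sees the original request, while one added with addNetworkInterceptor() runs after ConnectInterceptor, so it fires once per network round trip (including redirects) and is skipped entirely when the cache interceptor answers without going to the network:

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

fun main() {
    val client = OkHttpClient.Builder()
        .addInterceptor { chain ->            // application interceptor: once per call
            println("app     -> ${chain.request().url}")
            chain.proceed(chain.request())
        }
        .addNetworkInterceptor { chain ->     // network interceptor: once per network round trip
            println("network -> ${chain.request().url}")
            chain.proceed(chain.request())
        }
        .build()

    // If this URL redirects, "network" is logged once per hop while "app" is logged only once.
    client.newCall(Request.Builder().url("http://www.baidu.com").build())
        .execute().use { println("final code: ${it.code}") }
}
```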
4. Code analysis of the five interceptors
(1) Retry and redirect interceptor
The first interceptor, RetryAndFollowUpInterceptor, does two things: retrying and following redirects.
In OkHttp the number of retries is not limited, while follow-ups (redirects) are capped at 20.
class RetryAndFollowUpInterceptor(private val client: OkHttpClient) : Interceptor {
@Throws(IOException::class)
override fun intercept(chain: Interceptor.Chain): Response {
val realChain = chain as RealInterceptorChain
var request = chain.request
val call = realChain.call
var followUpCount = 0
var priorResponse: Response? = null
var newExchangeFinder = true
var recoveredFailures = listOf<IOException>()
while (true) {
//ExchangeFinder: 获取连接 (ConnectInterceptor中使用)
call.enterNetworkInterceptorExchange(request, newExchangeFinder)
var response: Response
var closeActiveExchange = true
try { // 判断用户是否取消了这次请求
if (call.isCanceled()) {
throw IOException("Canceled")
}
try {
response = realChain.proceed(request) // 将请求交给责任链中的下一个拦截器
newExchangeFinder = true
} catch (e: RouteException) { // 路线异常
// The attempt to connect via a route failed. The request will not have been sent.
//检查是否需要重试
if (!recover(e.lastConnectException, call, request, requestSendStarted = false)) {
throw e.firstConnectException.withSuppressed(recoveredFailures)
} else {
recoveredFailures += e.firstConnectException
}
newExchangeFinder = false
continue
} catch (e: IOException) { // IO异常
// An attempt to communicate with a server failed. The request may have been sent.
// HTTP2才会有ConnectionShutdownException 代表连接中断
//如果是因为IO异常,那么requestSendStarted=true (若是HTTP2的连接中断异常仍然为false)
if (!recover(e, call, request, requestSendStarted = e !is ConnectionShutdownException)) { // Note 1: the retry decision is made here
throw e.withSuppressed(recoveredFailures)
} else {
recoveredFailures += e
}
newExchangeFinder = false
continue
}
// Attach the prior response if it exists. Such responses never have a body.
// priorResponse:上一次请求的响应
if (priorResponse != null) {
response = response.newBuilder()
.priorResponse(priorResponse.newBuilder()
.body(null)
.build())
.build()
}
val exchange = call.interceptorScopedExchange
val followUp = followUpRequest(response, exchange) // Note 2: redirect (follow-up) handling happens here
if (followUp == null) {
if (exchange != null && exchange.isDuplex) {
call.timeoutEarlyExit()
}
closeActiveExchange = false
return response
}
val followUpBody = followUp.body
if (followUpBody != null && followUpBody.isOneShot()) {
closeActiveExchange = false
return response
}
response.body?.closeQuietly()
if (++followUpCount > MAX_FOLLOW_UPS) {
throw ProtocolException("Too many follow-up requests: $followUpCount")
}
request = followUp
priorResponse = response
} finally {
call.exitNetworkInterceptorExchange(closeActiveExchange)
}
}
}
/**
* Report and attempt to recover from a failure to communicate with a server. Returns true if
* `e` is recoverable, or false if the failure is permanent. Requests with a body can only
* be recovered if the body is buffered or if the failure occurred before the request has been
* sent.
*/
private fun recover(
e: IOException,
call: RealCall,
userRequest: Request,
requestSendStarted: Boolean
): Boolean {
// The application layer has forbidden retries.
//okhttpclient配置不重试 在构建OKHttp的时候可以设置这个值,默认为ture 允许重试
if (!client.retryOnConnectionFailure) return false
// We can't send the request body again.
// 不重试:
// 1、如果是IO异常(非http2中断异常)表示请求可能发出
// 2、如果请求体只能被使用一次(默认为false)
if (requestSendStarted && requestIsOneShot(e, userRequest)) return false
// This exception is fatal.
// 异常不重试:协议异常、IO中断异常(除Socket读写超时之外),ssl认证异常
if (!isRecoverable(e, requestSendStarted)) return false
// No more routes to attempt.
//是否有更多的路线
if (!call.retryAfterFailure()) return false
// For failure recovery, use the same route selector with a new connection.
return true
}
private fun requestIsOneShot(e: IOException, userRequest: Request): Boolean {
val requestBody = userRequest.body
// 第一个条件默认为false
// 第二个条件,比如上传文件,但是本地上传的文件不存在
return (requestBody != null && requestBody.isOneShot()) ||
e is FileNotFoundException
}
private fun isRecoverable(e: IOException, requestSendStarted: Boolean): Boolean {
// If there was a protocol problem, don't recover.
// 协议异常 不重试
if (e is ProtocolException) {
return false
}
// If there was an interruption don't recover, but if there was a timeout connecting to a route
// we should try the next route (if there is one).
// 如果发生中断 不重试,但如果连接到路由时超时可以重试
if (e is InterruptedIOException) {
return e is SocketTimeoutException && !requestSendStarted
}
// Look for known client-side or negotiation errors that are unlikely to be fixed by trying
// again with a different route.
// 证书有问题 不重试
if (e is SSLHandshakeException) {
// If the problem was a CertificateException from the X509TrustManager,
// do not retry.
if (e.cause is CertificateException) {
return false
}
}
// 证书验证失败
if (e is SSLPeerUnverifiedException) {
// e.g. a certificate pinning error.
return false
}
// An example of one we might want to retry with a different route is a problem connecting to a
// proxy and would manifest as a standard IOException. Unless it is one we know we should not
// retry, we return true and try a new route.
return true
}
Notice that the retry-and-redirect interceptor does relatively little to the request; most of its work is on the response.
Retry:
The retry happens at Note 1 in the code above.
A retry is only attempted when the conditions checked in recover() hold: retryOnConnectionFailure is enabled, the request body can safely be sent again, the exception is recoverable (not a protocol error, not an interruption other than a connect timeout, not a certificate problem), and there are still routes left to try.
Redirect:
In the code above, at Note 2, val followUp = followUpRequest(response, exchange) performs the redirect handling. Extracting that section:
val followUp = followUpRequest(response, exchange) // decide whether a follow-up (redirect) is needed; if so, a new request is returned
if (followUp == null) { // followUp is null, so no redirect is needed: return this response as-is
if (exchange != null && exchange.isDuplex) {
call.timeoutEarlyExit()
}
closeActiveExchange = false
return response
}
val followUpBody = followUp.body
if (followUpBody != null && followUpBody.isOneShot()) {
closeActiveExchange = false
return response
}
response.body?.closeQuietly()
if (++followUpCount > MAX_FOLLOW_UPS) {
throw ProtocolException("Too many follow-up requests: $followUpCount") // 如果重定向的次数大于20次,则抛出协议异常 不进行重试了
}
// this block sits inside while (true): when a follow-up is needed, the loop starts over and the new request goes through the interceptor chain again
Below is the followUpRequest code (Java code from an earlier OkHttp version); a quick read is enough.
private Request followUpRequest(Response userResponse) throws IOException {
if (userResponse == null) throw new IllegalStateException();
Connection connection = streamAllocation.connection();
Route route = connection != null
? connection.route()
: null;
int responseCode = userResponse.code();
final String method = userResponse.request().method();
switch (responseCode) {
// 407 客户端使用了HTTP代理服务器,在请求头中添加 “Proxy-Authorization”,给代理服务器授权
case HTTP_PROXY_AUTH:
Proxy selectedProxy = route != null
? route.proxy()
: client.proxy();
if (selectedProxy.type() != Proxy.Type.HTTP) {
throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
}
return client.proxyAuthenticator().authenticate(route, userResponse);
// 401 需要身份验证 有些服务器接口需要验证使用者身份 在请求头中添加 “Authorization”
case HTTP_UNAUTHORIZED:
return client.authenticator().authenticate(route, userResponse);
// 308 永久重定向
// 307 临时重定向
case HTTP_PERM_REDIRECT:
case HTTP_TEMP_REDIRECT:
// 如果请求方式不是GET或者HEAD,框架不会自动重定向请求
if (!method.equals("GET") && !method.equals("HEAD")) {
return null;
}
// 300 301 302 303
case HTTP_MULT_CHOICE:
case HTTP_MOVED_PERM:
case HTTP_MOVED_TEMP:
case HTTP_SEE_OTHER:
// 如果用户不允许重定向,那就返回null
if (!client.followRedirects()) return null;
// 从响应头取出location
String location = userResponse.header("Location");
if (location == null) return null;
// 根据location 配置新的请求 url
HttpUrl url = userResponse.request().url().resolve(location);
// 如果为null,说明协议有问题,取不出来HttpUrl,那就返回null,不进行重定向
if (url == null) return null;
// 如果重定向在http到https之间切换,需要检查用户是不是允许(默认允许)
boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
if (!sameScheme && !client.followSslRedirects()) return null;
Request.Builder requestBuilder = userResponse.request().newBuilder();
/**
* 重定向请求中 只要不是 PROPFIND 请求,无论是POST还是其他的方法都要改为GET请求方式,
* 即只有 PROPFIND 请求才能有请求体
*/
//请求不是get与head
if (HttpMethod.permitsRequestBody(method)) {
final boolean maintainBody = HttpMethod.redirectsWithBody(method);
// 除了 PROPFIND 请求之外都改成GET请求
if (HttpMethod.redirectsToGet(method)) {
requestBuilder.method("GET", null);
} else {
RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
requestBuilder.method(method, requestBody);
}
// 不是 PROPFIND 的请求,把请求头中关于请求体的数据删掉
if (!maintainBody) {
requestBuilder.removeHeader("Transfer-Encoding");
requestBuilder.removeHeader("Content-Length");
requestBuilder.removeHeader("Content-Type");
}
}
// 在跨主机重定向时,删除身份验证请求头
if (!sameConnection(userResponse, url)) {
requestBuilder.removeHeader("Authorization");
}
return requestBuilder.url(url).build();
// 408 客户端请求超时
case HTTP_CLIENT_TIMEOUT:
// 408 算是连接失败了,所以判断用户是不是允许重试
if (!client.retryOnConnectionFailure()) {
return null;
}
// UnrepeatableRequestBody实际并没发现有其他地方用到
if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
return null;
}
// 如果是本身这次的响应就是重新请求的产物同时上一次之所以重请求还是因为408,那我们这次不再重请求了
if (userResponse.priorResponse() != null
&& userResponse.priorResponse().code() == HTTP_CLIENT_TIMEOUT) {
return null;
}
// 如果服务器告诉我们了 Retry-After 多久后重试,那框架不管了。
if (retryAfter(userResponse, 0) > 0) {
return null;
}
return userResponse.request();
// 503 服务不可用 和408差不多,但是只在服务器告诉你 Retry-After:0(意思就是立即重试) 才重请求
case HTTP_UNAVAILABLE:
if (userResponse.priorResponse() != null
&& userResponse.priorResponse().code() == HTTP_UNAVAILABLE) {
return null;
}
if (retryAfter(userResponse, Integer.MAX_VALUE) == 0) {
return userResponse.request();
}
return null;
default:
return null;
}
Response codes that can produce a follow-up request: 300, 301, 302, 303, 307 and 308 (redirects), 401 and 407 (authentication challenges), and 408 and 503 (retry the same request under certain conditions).
This interceptor is the first one in the whole chain, which means it is the first to touch the Request and the last to receive the Response; its main job is deciding whether a retry or a follow-up is needed.
A retry presupposes a RouteException or an IOException: whenever one of these is thrown further down the chain, recover() decides whether the connection attempt should be retried.
Redirect handling comes after the retry decision: if the retry conditions are not met, followUpRequest() is called to inspect the Response code (of course, if the request failed outright there is no Response and the exception is thrown). At most 20 follow-ups are performed.
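Both behaviours can be turned off when the client is built. A minimal sketch using the standard OkHttpClient.Builder options (nothing else here comes from the source above):

```kotlin
import okhttp3.OkHttpClient

fun buildStrictClient(): OkHttpClient =
    OkHttpClient.Builder()
        .retryOnConnectionFailure(false) // recover() then returns false immediately: no silent retries
        .followRedirects(false)          // followUpRequest() returns null for 3xx responses
        .followSslRedirects(false)       // also refuse redirects that switch between http and https
        .build()
```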
(2) Bridge interceptor
BridgeInterceptor is the bridge between the application and the server: every request we send passes through it before going out. It fills in things such as the content length, encoding, gzip compression and cookies, and after the response is received it saves cookies and so on. It is a relatively simple interceptor; in one sentence, it completes the request and post-processes the response.
/**
* Bridges from application code to network code. First it builds a network request from a user
* request. Then it proceeds to call the network. Finally it builds a user response from the network
* response.
*/
class BridgeInterceptor(private val cookieJar: CookieJar) : Interceptor {
@Throws(IOException::class)
override fun intercept(chain: Interceptor.Chain): Response {
val userRequest = chain.request()
val requestBuilder = userRequest.newBuilder()
val body = userRequest.body
// 处理封装"Content-Type", "Content-Length","Transfer-Encoding","Host","Connection",
// "Accept-Encoding","Cookie","User-Agent"等请求头
if (body != null) {
val contentType = body.contentType()
if (contentType != null) {
requestBuilder.header("Content-Type", contentType.toString())
}
val contentLength = body.contentLength()
if (contentLength != -1L) {
requestBuilder.header("Content-Length", contentLength.toString())
requestBuilder.removeHeader("Transfer-Encoding")
} else {
requestBuilder.header("Transfer-Encoding", "chunked")
requestBuilder.removeHeader("Content-Length")
}
}
if (userRequest.header("Host") == null) {
requestBuilder.header("Host", userRequest.url.toHostHeader())
}
if (userRequest.header("Connection") == null) {
requestBuilder.header("Connection", "Keep-Alive")
}
// 在服务器支持gzip压缩的前提下,客户端不设置Accept-Encoding=gzip的话,
// okhttp会自动帮我们开启gzip和解压数据,如果客户端自己开启了gzip,就需要自己解压服务器返回的数据了。
// If we add an "Accept-Encoding: gzip" header field we're responsible for also decompressing
// the transfer stream.
var transparentGzip = false
if (userRequest.header("Accept-Encoding") == null && userRequest.header("Range") == null) {
transparentGzip = true
requestBuilder.header("Accept-Encoding", "gzip")
}
// 从cookieJar中获取cookie,添加到header
val cookies = cookieJar.loadForRequest(userRequest.url)
if (cookies.isNotEmpty()) {
requestBuilder.header("Cookie", cookieHeader(cookies))
}
if (userRequest.header("User-Agent") == null) {
requestBuilder.header("User-Agent", userAgent)
}
// 把处理好的新请求往下传递,执行后续的拦截器的逻辑
val networkResponse = chain.proceed(requestBuilder.build())
//从networkResponse中获取 header "Set-Cookie" 存入cookieJar
cookieJar.receiveHeaders(userRequest.url, networkResponse.headers)
// 获取返回体的Builder
val responseBuilder = networkResponse.newBuilder()
.request(userRequest)
// 处理返回的Response的"Content-Encoding"、"Content-Length"、"Content-Type"等返回头
// 如果我们没有手动添加"Accept-Encoding: gzip",这里会创建 能自动解压的responseBody--GzipSource
if (transparentGzip &&
"gzip".equals(networkResponse.header("Content-Encoding"), ignoreCase = true) &&
networkResponse.promisesBody()) {
val responseBody = networkResponse.body
if (responseBody != null) {
val gzipSource = GzipSource(responseBody.source())
val strippedHeaders = networkResponse.headers.newBuilder()
.removeAll("Content-Encoding")
.removeAll("Content-Length")
.build()
responseBuilder.headers(strippedHeaders)
val contentType = networkResponse.header("Content-Type")
responseBuilder.body(RealResponseBody(contentType, -1L, gzipSource.buffer()))
}
}
//然后新构建的response返回出去
return responseBuilder.build()
}
/** Returns a 'Cookie' HTTP request header with all cookies, like `a=b; c=d`. */
private fun cookieHeader(cookies: List<Cookie>): String = buildString {
cookies.forEachIndexed { index, cookie ->
if (index > 0) append("; ")
append(cookie.name).append('=').append(cookie.value)
}
}
}
To summarise: before chain.proceed(), the request gets the headers "Content-Type", "Content-Length" or "Transfer-Encoding", "Host", "Connection", "Accept-Encoding", "Cookie" and "User-Agent", turning it into a request the network layer can actually execute. Note that there is no cookie handling by default; we have to supply our own cookieJar when building the OkHttpClient.
After chain.proceed(), cookies from the response headers are stored in the cookieJar (if any), and if we did not add an "Accept-Encoding: gzip" header ourselves, a responseBody that decompresses automatically is created via GzipSource, and a new response is built and returned.
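A hedged sketch of the consequence described above: if we set Accept-Encoding ourselves, transparentGzip stays false and we must decompress the body ourselves (the URL is arbitrary; okio's GzipSource is used here, assuming the server actually returns gzip):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request
import okio.GzipSource
import okio.buffer

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder()
        .url("https://www.baidu.com")
        .header("Accept-Encoding", "gzip") // set by us, so BridgeInterceptor will not unzip for us
        .build()

    client.newCall(request).execute().use { response ->
        val body = response.body!!
        val text = if ("gzip".equals(response.header("Content-Encoding"), ignoreCase = true)) {
            // manual decompression, since transparentGzip was false
            GzipSource(body.source()).buffer().readUtf8()
        } else {
            body.string()
        }
        println(text.take(200))
    }
}
```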
(3) Cache interceptor
CacheInterceptor is the third interceptor to run. Before the request is sent it checks whether the cache already holds a usable response; on a hit, no request is made and the cached response is used directly (only GET requests are ever cached).
Before reading the CacheInterceptor source, a short recap of the HTTP caching mechanism:
https://www.cnblogs.com/chenqf/p/6386163.html (a very clear explanation of HTTP caching)
(Figures omitted here: the request flow of the first and the second request under the HTTP caching mechanism.)
Caching rules:
Strong (forced) caching: as long as the cached data has not expired it can be used directly. How does the client know whether it has expired? When there is no cached copy yet, the server returns the data together with the caching rules, carried in the response headers. For strong caching, two header fields describe the expiry rule: Expires and Cache-Control.
Conditional (negotiated) caching: as the name implies, a comparison with the server decides whether the cache may be used.
On the first request the server returns a cache validator together with the data, and the client stores both in its cache.
On a later request the client sends the stored validator back to the server; if the server decides the resource is unchanged it answers with 304, telling the client the comparison succeeded and the cached data may be used.
· On a strong-cache hit, the client does not contact the server at all. Strong caching is controlled by the Expires (resource expiry time) or Cache-Control response headers, which state how long the resource may be cached.
· If the strong cache misses, the request is sent to the server, which uses Last-Modified/If-Modified-Since or ETag/If-None-Match to decide whether the conditional cache hits. On a hit the status code is 304 and the client loads the resource from its cache; otherwise a full response is returned. (Conditional caching is also called comparison caching.)
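Before looking at the strategy, note that CacheInterceptor only has something to work with if a Cache is configured on the client. A hedged sketch (cache directory, size and URLs are arbitrary examples):

```kotlin
import okhttp3.Cache
import okhttp3.CacheControl
import okhttp3.OkHttpClient
import okhttp3.Request
import java.io.File
import java.util.concurrent.TimeUnit

fun main() {
    // CacheInterceptor is a no-op unless a Cache is installed on the client.
    val client = OkHttpClient.Builder()
        .cache(Cache(File("okhttp_cache"), 10L * 1024 * 1024)) // directory + 10 MiB, both arbitrary
        .build()

    // First call goes to the network and may be stored (GET only).
    client.newCall(Request.Builder().url("https://www.baidu.com").build())
        .execute().use { println("network: ${it.code}") }

    // FORCE_CACHE means only-if-cached: networkRequest becomes null, so the
    // interceptor answers from cache or returns the synthetic 504 seen in the code below.
    val cachedOnly = Request.Builder()
        .url("https://www.baidu.com")
        .cacheControl(CacheControl.FORCE_CACHE)
        .build()
    client.newCall(cachedOnly).execute().use { println("cache only: ${it.code}") }

    // Alternatively, tolerate a stale cached copy for up to one day.
    val staleOk = CacheControl.Builder().maxStale(1, TimeUnit.DAYS).build()
    client.newCall(Request.Builder().url("https://www.baidu.com").cacheControl(staleOk).build())
        .execute().use { println("max-stale: ${it.code}") }
}
```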
Cache strategy:
The interceptor uses CacheStrategy to decide between using the cache and going to the network. Its two members, networkRequest and cacheResponse, represent "a request that must be sent" and "a cached response that can be used" respectively.
That is: if networkRequest is present, a network request is made; otherwise cacheResponse is used; and if neither exists, the request fails.
CacheInterceptor.kt
override fun intercept(chain: Interceptor.Chain): Response {
val call = chain.call()
val cacheCandidate = cache?.get(chain.request())
val now = System.currentTimeMillis()
// 执行获取缓存策略的逻辑
// 缓存策略决定是否使用缓存:
// strategy.networkRequest为null,不使用网络
// strategy.cacheResponse为null,不使用缓存。
val strategy = CacheStrategy.Factory(now, chain.request(), cacheCandidate).compute()
// 网络请求
val networkRequest = strategy.networkRequest
// 本地的缓存保存的请求
val cacheResponse = strategy.cacheResponse
//根据缓存策略更新统计指标:请求次数、网络请求次数、使用缓存次数
cache?.trackResponse(strategy)
val listener = (call as? RealCall)?.eventListener ?: EventListener.NONE
if (cacheCandidate != null && cacheResponse == null) {
// The cache candidate wasn't applicable. Close it.
cacheCandidate.body?.closeQuietly()
}
// If we're forbidden from using the network and the cache is insufficient, fail.
// networkRequest == null 不能用网络
// 如果不使用网络数据且缓存数据为空,那么返回一个504的Response,并且body为空
// If we're forbidden from using the network and the cache is insufficient, fail.
if (networkRequest == null && cacheResponse == null) {
return Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(HTTP_GATEWAY_TIMEOUT)
.message("Unsatisfiable Request (only-if-cached)")
.body(EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build().also {
listener.satisfactionFailure(call, it)
}
}
// If we don't need the network, we're done.
// 如果不需要使用网络数据,那么就直接返回缓存的数据
// If we don't need the network, we're done.
if (networkRequest == null) {
return cacheResponse!!.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build().also {
listener.cacheHit(call, it)
}
}
if (cacheResponse != null) {
listener.cacheConditionalHit(call, cacheResponse)
} else if (cache != null) {
listener.cacheMiss(call)
}
/*
* 到这里,networkRequest != null (cacheResponse可能null,可能!null)
* 没有命中强缓存的情况下,进行网络请求,获取response
* 先判断是否是协商缓存(304)命中,命中则更新缓存返回response
* 未命中使用网络请求的response返回并添加缓存
*/
var networkResponse: Response? = null
try {
networkResponse = chain.proceed(networkRequest)
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
cacheCandidate.body?.closeQuietly()
}
}
// If we have a cache response too, then we're doing a conditional get.
if (cacheResponse != null) {
// 如果缓存数据不为空并且code为304,表示数据没有变化,继续使用缓存数据;
if (networkResponse?.code == HTTP_NOT_MODIFIED) {
val response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers, networkResponse.headers))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis)
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis)
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build()
networkResponse.body!!.close()
// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache!!.trackConditionalCacheHit()
// 更新缓存数据
cache.update(cacheResponse, response)
return response.also {
listener.cacheHit(call, it)
}
} else {
//如果是非304,说明服务端资源有更新,就关闭缓存body
cacheResponse.body?.closeQuietly()
}
}
// 协商缓存也未命中,获取网络返回的response
val response = networkResponse!!.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build()
if (cache != null) {
//网络响应可缓存(请求和响应的 头 Cache-Control都不是'no-store')
if (response.promisesBody() && CacheStrategy.isCacheable(response, networkRequest)) {
// Offer this request to the cache.
// 将网络数据保存到缓存中
// InternalCache接口,实现在Cache类中
// Offer this request to the cache.
val cacheRequest = cache.put(response)
return cacheWritingResponse(cacheRequest, response).also {
if (cacheResponse != null) {
// This will log a conditional cache miss only.
listener.cacheMiss(call)
}
}
}
//OkHttp默认只会对get请求进行缓存
//不是get请求就移除缓存
if (HttpMethod.invalidatesCache(networkRequest.method)) {
try {
cache.remove(networkRequest)
} catch (_: IOException) {
// The cache cannot be written.
}
}
}
return response
}
To summarise the overall steps:
1. Look up the cached response for this request.
2. Build a CacheStrategy; while being built it decides whether the cache can be used. CacheStrategy has two members, networkRequest and cacheResponse, whose combinations are:
networkRequest == null and cacheResponse == null: the network is forbidden (only-if-cached) and nothing usable is cached, so a synthetic 504 is returned;
networkRequest == null and cacheResponse != null: the cached response is used directly;
networkRequest != null and cacheResponse == null: a normal network request is made;
networkRequest != null and cacheResponse != null: a conditional request is made; on 304 the cache is refreshed and used, otherwise the network response is used.
3. Hand the request to the next interceptor in the chain.
4. Afterwards, a 304 means the cached response is used; otherwise the network response is used and cached (only responses to GET requests are cached).
The cache interceptor's job sounds simple, but the actual implementation has a lot to handle. Both "can the cache be used" and "must the server be asked" are decided through CacheStrategy.
Overall idea: the CacheStrategy decides whether and how the cache is used.
The key facts to remember: if strategy.networkRequest is null, the network is not used; if strategy.cacheResponse is null, the cache is not used.
The internals of CacheStrategy are not expanded here; see:
https://blog.csdn.net/qq_22090073/article/details/111942694
https://www.freesion.com/article/62531460366/
(4) Connect interceptor
If the cache cannot satisfy the request, execution reaches this point and a real network connection is made.
We have already looked at RetryAndFollowUpInterceptor, BridgeInterceptor and CacheInterceptor, which pre-process the request before any connection is established. The two remaining interceptors, ConnectInterceptor and CallServerInterceptor, are responsible for establishing the connection and for writing the request / reading the response respectively.
/**
* Opens a connection to the target server and proceeds to the next interceptor. The network might
* be used for the returned response, or to validate a cached response with a conditional GET.
*/
object ConnectInterceptor : Interceptor {
@Throws(IOException::class)
override fun intercept(chain: Interceptor.Chain): Response {
val realChain = chain as RealInterceptorChain
// obtain the connection; Exchange: the data-exchange object (it wraps the connection)
val exchange = realChain.call.initExchange(chain) // Note 1
val connectedChain = realChain.copy(exchange = exchange)
return connectedChain.proceed(realChain.request)
}
}
Exchange is an important class: **its main job is the actual I/O, writing the request and reading the response**. The whole series of connection-related operations is encapsulated in the Exchange object: when a request is sent, a connection has to be established, and once it is established, streams are needed to read and write data. Exchange coordinates the request, the connection and the data streams; it finds a connection for the request and then obtains the streams used to carry out the network communication.
The key call is val exchange = realChain.call.initExchange(chain).
The exchangeFinder!!.find() used inside it looks up or establishes a usable connection to the target host and returns an ExchangeCodec, which holds the input/output streams and encapsulates the encoding and decoding of HTTP messages; with it, HTTP communication with the host can be carried out directly.
val result = Exchange(this, eventListener, exchangeFinder, codec)
Here an Exchange object is created, wrapping the codec obtained just before.
RealCall:
/** Finds a new or pooled connection to carry a forthcoming request and response. */
internal fun initExchange(chain: RealInterceptorChain): Exchange {
synchronized(this) {
check(expectMoreExchanges) { "released" }
check(!responseBodyOpen)
check(!requestBodyOpen)
}
val exchangeFinder = this.exchangeFinder!!
// ExchangeCodec: the encoder/decoder; find(): look up a RealConnection
val codec = exchangeFinder.find(client, chain)
// Exchange: the data exchanger; it wraps the ExchangeCodec (and through it the RealConnection)
val result = Exchange(this, eventListener, exchangeFinder, codec)
this.interceptorScopedExchange = result
this.exchange = result
synchronized(this) {
this.requestBodyOpen = true
this.responseBodyOpen = true
}
if (canceled) throw IOException("Canceled")
return result // return the Exchange object
}
Let's keep following exchangeFinder!!.find().
As mentioned above, this method looks up or establishes a usable connection to the target host and returns an ExchangeCodec that wraps the input/output streams and the encoding/decoding of HTTP messages; with it, HTTP communication with the host can be carried out directly. The HTTP/1.x and HTTP/2 implementations differ, and you can see the two concrete codecs being created, Http2ExchangeCodec and Http1ExchangeCodec:
fun find(
client: OkHttpClient,
chain: RealInterceptorChain
): ExchangeCodec {
try {
// establish the connection
val resultConnection = findHealthyConnection( // findHealthyConnection does the lookup and acquisition of the connection for us
connectTimeout = chain.connectTimeoutMillis,
readTimeout = chain.readTimeoutMillis,
writeTimeout = chain.writeTimeoutMillis,
pingIntervalMillis = client.pingIntervalMillis,
connectionRetryEnabled = client.retryOnConnectionFailure,
doExtensiveHealthChecks = chain.request.method != "GET"
)
// create the codec
return resultConnection.newCodec(client, chain)
} catch (e: RouteException) {
trackFailure(e.lastConnectException)
throw e
} catch (e: IOException) {
trackFailure(e)
throw RouteException(e)
}
}
@Throws(SocketException::class)
internal fun newCodec(client: OkHttpClient, chain: RealInterceptorChain): ExchangeCodec {
val socket = this.socket!!
val source = this.source!!
val sink = this.sink!!
val http2Connection = this.http2Connection
return if (http2Connection != null) {
Http2ExchangeCodec(client, this, chain, http2Connection) // http2 对应的返回
} else {
socket.soTimeout = chain.readTimeoutMillis()
source.timeout().timeout(chain.readTimeoutMillis.toLong(), MILLISECONDS)
sink.timeout().timeout(chain.writeTimeoutMillis.toLong(), MILLISECONDS)
Http1ExchangeCodec(client, this, source, sink) // http1 对应的返回
}
}
Next, how resultConnection is created: val resultConnection = findHealthyConnection(···):
/**
* Finds a connection and returns it if it is healthy. If it is unhealthy the process is repeated
* until a healthy connection is found.
*/
@Throws(IOException::class)
private fun findHealthyConnection(
connectTimeout: Int,
readTimeout: Int,
writeTimeout: Int,
pingIntervalMillis: Int,
connectionRetryEnabled: Boolean,
doExtensiveHealthChecks: Boolean
): RealConnection {
while (true) {
val candidate = findConnection( // look for a connection
connectTimeout = connectTimeout,
readTimeout = readTimeout,
writeTimeout = writeTimeout,
pingIntervalMillis = pingIntervalMillis,
connectionRetryEnabled = connectionRetryEnabled
)
// Confirm that the connection is good.
if (candidate.isHealthy(doExtensiveHealthChecks)) { // if the connection is healthy, return it directly
return candidate
}
// If it isn't, take it out of the pool.
candidate.noNewExchanges() // the connection is unhealthy, so mark it and drop it from the pool
// Make sure we have some routes left to try. One example where we may exhaust all the routes
// would happen if we made a new connection and it immediately is detected as unhealthy.
if (nextRouteToTry != null) continue
val routesLeft = routeSelection?.hasNext() ?: true
if (routesLeft) continue
val routesSelectionLeft = routeSelector?.hasNext() ?: true
if (routesSelectionLeft) continue
throw IOException("exhausted all routes")
}
}
In findHealthyConnection above there is a while loop around a call to findConnection. As the names suggest, findHealthyConnection wants a healthy connection while findConnection merely finds one, and that is roughly what they do: findConnection returns a connection, and findHealthyConnection checks whether it is healthy. If it is, it is returned; if not, it is removed from the pool and the loop continues with the next candidate.
Next, the connection-finding function findConnection(…):
/**
* Returns a connection to host a new stream. This prefers the existing connection if it exists,
* then the pool, finally building a new connection.
*
* This checks for cancellation before each blocking operation.
*/
@Throws(IOException::class)
private fun findConnection(
connectTimeout: Int,
readTimeout: Int,
writeTimeout: Int,
pingIntervalMillis: Int,
connectionRetryEnabled: Boolean
): RealConnection {
if (call.isCanceled()) throw IOException("Canceled") // 请求取消了 就不去找了
// Attempt to reuse the connection from the call.
val callConnection = call.connection // This may be mutated by releaseConnectionNoEvents()!
if (callConnection != null) {
var toClose: Socket? = null
synchronized(callConnection) {
if (callConnection.noNewExchanges || !sameHostAndPort(callConnection.route().address.url)) {
toClose = call.releaseConnectionNoEvents()
}
}
// If the call's connection wasn't released, reuse it. We don't call connectionAcquired() here
// because we already acquired it.
if (call.connection != null) {
check(toClose == null)
return callConnection
}
// The call's connection was released.
toClose?.closeQuietly()
eventListener.connectionReleased(call, callConnection)
}
// We need a new connection. Give it fresh stats.
refusedStreamCount = 0
connectionShutdownCount = 0
otherFailureCount = 0
// on the first request call.connection == null
// Attempt to get a connection from the pool.
// look in the connection pool for an already-established connection to this server
if (connectionPool.callAcquirePooledConnection(address, call, null, false)) {
val result = call.connection!!
eventListener.connectionAcquired(call, result)
return result // a matching connection was found in the pool, return it directly
}
// no suitable connection found in the pool
// Nothing in the pool. Figure out what route we'll try next.
val routes: List<Route>?
val route: Route
if (nextRouteToTry != null) {
// Use a route from a preceding coalesced connection.
routes = null
route = nextRouteToTry!!
nextRouteToTry = null
} else if (routeSelection != null && routeSelection!!.hasNext()) {
// Use a route from an existing route selection.
routes = null
route = routeSelection!!.next()
} else {
// Compute a new route selection. This is a blocking operation!
var localRouteSelector = routeSelector
if (localRouteSelector == null) {
localRouteSelector = RouteSelector(address, call.client.routeDatabase, call, eventListener)
this.routeSelector = localRouteSelector
}
val localRouteSelection = localRouteSelector.next()
routeSelection = localRouteSelection
routes = localRouteSelection.routes
if (call.isCanceled()) throw IOException("Canceled")
// Now that we have a set of IP addresses, make another attempt at getting a connection from
// the pool. We have a better chance of matching thanks to connection coalescing.
if (connectionPool.callAcquirePooledConnection(address, call, routes, false)) {
val result = call.connection!!
eventListener.connectionAcquired(call, result)
return result
}
route = localRouteSelection.next()
}
// no suitable connection in the pool, so create a new one
// Connect. Tell the call about the connecting call so async cancels work.
val newConnection = RealConnection(connectionPool, route)
call.connectionToCancel = newConnection
try {
newConnection.connect( // establish the connection
connectTimeout,
readTimeout,
writeTimeout,
pingIntervalMillis,
connectionRetryEnabled,
call,
eventListener
)
} finally {
call.connectionToCancel = null
}
call.client.routeDatabase.connected(newConnection.route())
// If we raced another call connecting to this host, coalesce the connections. This makes for 3
// different lookups in the connection pool!
if (connectionPool.callAcquirePooledConnection(address, call, routes, true)) {
val result = call.connection!!
nextRouteToTry = route
newConnection.socket().closeQuietly()
eventListener.connectionAcquired(call, result)
return result
}
synchronized(newConnection) {
connectionPool.put(newConnection)
call.acquireConnectionNoEvents(newConnection)
}
eventListener.connectionAcquired(call, newConnection)
return newConnection
}
In findConnection(), the pool is first searched for an existing connection to the target server (the pool itself is analysed later). If one is found it is returned; if not, a new connection is created. The key step is establishing that connection: newConnection.connect():
fun connect(
connectTimeout: Int,
readTimeout: Int,
writeTimeout: Int,
pingIntervalMillis: Int,
connectionRetryEnabled: Boolean,
call: Call,
eventListener: EventListener
) {
check(protocol == null) { "already connected" }
var routeException: RouteException? = null
val connectionSpecs = route.address.connectionSpecs
val connectionSpecSelector = ConnectionSpecSelector(connectionSpecs)
if (route.address.sslSocketFactory == null) {
if (ConnectionSpec.CLEARTEXT !in connectionSpecs) {
throw RouteException(UnknownServiceException(
"CLEARTEXT communication not enabled for client"))
}
val host = route.address.url.host
if (!Platform.get().isCleartextTrafficPermitted(host)) {
throw RouteException(UnknownServiceException(
"CLEARTEXT communication to $host not permitted by network security policy"))
}
} else {
if (Protocol.H2_PRIOR_KNOWLEDGE in route.address.protocols) {
throw RouteException(UnknownServiceException(
"H2_PRIOR_KNOWLEDGE cannot be used with HTTPS"))
}
}
while (true) {
try {
if (route.requiresTunnel()) {
// 1. connectSocket
// 2. send a CONNECT request to establish a tunnel through the proxy
connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener)
if (rawSocket == null) {
// We were unable to connect the tunnel but properly closed down our resources.
break
}
} else {
connectSocket(connectTimeout, readTimeout, call, eventListener)
}
establishProtocol(connectionSpecSelector, pingIntervalMillis, call, eventListener)
eventListener.connectEnd(call, route.socketAddress, route.proxy, protocol)
break
} catch (e: IOException) {
socket?.closeQuietly()
rawSocket?.closeQuietly()
socket = null
rawSocket = null
source = null
sink = null
handshake = null
protocol = null
http2Connection = null
allocationLimit = 1
eventListener.connectFailed(call, route.socketAddress, route.proxy, null, e)
if (routeException == null) {
routeException = RouteException(e)
} else {
routeException.addConnectException(e)
}
if (!connectionRetryEnabled || !connectionSpecSelector.connectionFailed(e)) {
throw routeException
}
}
}
if (route.requiresTunnel() && rawSocket == null) {
throw RouteException(ProtocolException(
"Too many tunnel connections attempted: $MAX_TUNNEL_ATTEMPTS"))
}
idleAtNs = System.nanoTime()
}
There is some additional proxy-related logic that is not analysed here.
To summarise the connection flow: first try to reuse the call's existing connection, then look for a matching connection in the pool; if nothing fits, pick a route, create a new RealConnection, establish it via connect() (raw socket, optional proxy tunnel, then the protocol/TLS handshake), and finally put the new connection into the pool.
The connection pool:
As seen in the analysis above, when a connection is needed the pool is searched first, and a new connection is created only if the pool has nothing suitable. Now let's focus on the pool itself.
Creating a brand-new HTTP connection for every request wastes time and resources: opening a TCP connection costs a 3-way handshake and closing it costs 2 to 4 segments. HTTP/1.0's Keep-Alive keeps a connection open so that, within a time window, requests to the same server can reuse it. OkHttp uses a connection pool that keeps up to 5 idle connections alive for 5 minutes, saving resources and shortening response times.
The pool has the following characteristics:
- It manages the socket connections and hands out an existing link when a new request arrives
- By default it keeps up to 5 idle keep-alive links, each kept alive for 5 minutes after its last transfer
- A cleanup task automatically closes the sockets of links that have been idle for more than 5 minutes
When using OkHttp we can configure our own connection pool:
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
var okHttpClient = OkHttpClient.Builder()
.connectionPool(ConnectionPool()) // configure our own connection pool
.build()
var request = Request.Builder().url("https://www.baidu.com")
.cacheControl(CacheControl.FORCE_CACHE)
.build()
var call = okHttpClient.newCall(request)
val result = call.execute()
println(result.isSuccessful)
result.close()
}
}
Look at the ConnectionPool code below, which carries our configuration.
/**
 * @constructor Create a new connection pool with tuning parameters appropriate for a single-user
* application. The tuning parameters in this pool are subject to change in future OkHttp releases.
* Currently this pool holds up to 5 idle connections which will be evicted after 5 minutes of
* inactivity.
*/
class ConnectionPool internal constructor(
internal val delegate: RealConnectionPool
) {
constructor(
maxIdleConnections: Int,
keepAliveDuration: Long,
timeUnit: TimeUnit
) : this(RealConnectionPool(
taskRunner = TaskRunner.INSTANCE,
maxIdleConnections = maxIdleConnections,
keepAliveDuration = keepAliveDuration,
timeUnit = timeUnit
))
constructor() : this(5, 5, TimeUnit.MINUTES)
/** Returns the number of idle connections in the pool. */
fun idleConnectionCount(): Int = delegate.idleConnectionCount()
/** Returns total number of connections in the pool. */
fun connectionCount(): Int = delegate.connectionCount()
/** Close and remove all idle connections in the pool. */
fun evictAll() {
delegate.evictAll()
}
}
The ConnectionPool constructor has three parameters, whose meanings and defaults are:
maxIdleConnections: the maximum number of idle connections; as shown above, the default is 5
keepAliveDuration: the maximum time an idle connection may stay alive; once exceeded, the connection is closed; the default is 5 minutes
timeUnit: the time unit, minutes by default
1. A connection that has been idle for more than 5 minutes is cleaned out of the pool.
2. When the number of idle connections exceeds 5, the longest-idle ones are evicted one by one until no more than 5 idle connections remain (an LRU idea). A configuration sketch follows.
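A minimal sketch of overriding these defaults through the public constructor (the numbers are arbitrary examples, not recommendations):

```kotlin
import okhttp3.ConnectionPool
import okhttp3.OkHttpClient
import java.util.concurrent.TimeUnit

// Keep up to 10 idle connections alive for 10 minutes instead of the 5/5 defaults.
val pooledClient: OkHttpClient = OkHttpClient.Builder()
    .connectionPool(ConnectionPool(10, 10, TimeUnit.MINUTES))
    .build()
```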
The ConnectionPool above merely passes our configuration to OkHttp; the real pool implementation lives in RealConnectionPool.
// the pool in which connection objects are reused
class RealConnectionPool(
taskRunner: TaskRunner,
/** The maximum number of idle connections for each address. */
private val maxIdleConnections: Int,
keepAliveDuration: Long,
timeUnit: TimeUnit
) {
private val keepAliveDurationNs: Long = timeUnit.toNanos(keepAliveDuration)
private val cleanupQueue: TaskQueue = taskRunner.newQueue()
private val cleanupTask = object : Task("$okHttpName ConnectionPool") {
override fun runOnce() = cleanup(System.nanoTime())
}
/**
* Holding the lock of the connection being added or removed when mutating this, and check its
* [RealConnection.noNewExchanges] property. This defends against races where a connection is
* simultaneously adopted and removed.
*/
// all of the pool's connections are stored here
private val connections = ConcurrentLinkedQueue<RealConnection>()
init {
// Put a floor on the keep alive duration, otherwise cleanup will spin loop.
require(keepAliveDuration > 0L) { "keepAliveDuration <= 0: $keepAliveDuration" }
}
fun idleConnectionCount(): Int {
return connections.count {
synchronized(it) { it.calls.isEmpty() }
}
}
fun connectionCount(): Int {
return connections.size
}
/**
* Attempts to acquire a recycled connection to [address] for [call]. Returns true if a connection
* was acquired.
*
* If [routes] is non-null these are the resolved routes (ie. IP addresses) for the connection.
* This is used to coalesce related domains to the same HTTP/2 connection, such as `square.com`
* and `square.ca`.
*/
fun callAcquirePooledConnection(
address: Address,
call: RealCall,
routes: List<Route>?,
requireMultiplexed: Boolean
): Boolean {
for (connection in connections) {
synchronized(connection) {
if (requireMultiplexed && !connection.isMultiplexed) return@synchronized
if (!connection.isEligible(address, routes)) return@synchronized
call.acquireConnectionNoEvents(connection)
return true
}
}
return false
}
// Note 1: put a connection into the pool
fun put(connection: RealConnection) {
connection.assertThreadHoldsLock()
// add it to the pool
connections.add(connection)
// schedule the recurring task that periodically cleans up the pool
cleanupQueue.schedule(cleanupTask)
}
/**
* Notify this pool that [connection] has become idle. Returns true if the connection has been
* removed from the pool and should be closed.
*/
fun connectionBecameIdle(connection: RealConnection): Boolean {
connection.assertThreadHoldsLock()
return if (connection.noNewExchanges || maxIdleConnections == 0) {
connection.noNewExchanges = true
connections.remove(connection)
if (connections.isEmpty()) cleanupQueue.cancelAll()
true
} else {
cleanupQueue.schedule(cleanupTask)
false
}
}
fun evictAll() {
val i = connections.iterator()
while (i.hasNext()) {
val connection = i.next()
val socketToClose = synchronized(connection) {
if (connection.calls.isEmpty()) {
i.remove()
connection.noNewExchanges = true
return@synchronized connection.socket()
} else {
return@synchronized null
}
}
socketToClose?.closeQuietly()
}
if (connections.isEmpty()) cleanupQueue.cancelAll()
}
/**
* Performs maintenance on this pool, evicting the connection that has been idle the longest if
* either it has exceeded the keep alive limit or the idle connections limit.
*
* Returns the duration in nanoseconds to sleep until the next scheduled call to this method.
* Returns -1 if no further cleanups are required.
*/
fun cleanup(now: Long): Long {
var inUseConnectionCount = 0
var idleConnectionCount = 0
var longestIdleConnection: RealConnection? = null
var longestIdleDurationNs = Long.MIN_VALUE
// Find either a connection to evict, or the time that the next eviction is due.
for (connection in connections) {
synchronized(connection) {
// If the connection is in use, keep searching.
// 记录正在使用与已经闲置的连接数
if (pruneAndGetAllocationCount(connection, now) > 0) {
inUseConnectionCount++
} else {
idleConnectionCount++
// 记录最长闲置时间的连接longestIdleConnection
// If the connection is ready to be evicted, we're done.
val idleDurationNs = now - connection.idleAtNs
if (idleDurationNs > longestIdleDurationNs) {
longestIdleDurationNs = idleDurationNs
longestIdleConnection = connection
} else {
Unit
}
}
}
}
when {
//最长闲置时间的连接超过了允许闲置时间 或者 闲置数量超过允许数量,清理此连接
longestIdleDurationNs >= this.keepAliveDurationNs
|| idleConnectionCount > this.maxIdleConnections -> {
// We've chosen a connection to evict. Confirm it's still okay to be evict, then close it.
val connection = longestIdleConnection!!
synchronized(connection) {
if (connection.calls.isNotEmpty()) return 0L // No longer idle.
if (connection.idleAtNs + longestIdleDurationNs != now) return 0L // No longer oldest.
connection.noNewExchanges = true
connections.remove(longestIdleConnection)
}
connection.socket().closeQuietly()
if (connections.isEmpty()) cleanupQueue.cancelAll()
// Clean up again immediately.
return 0L
}
//存在闲置连接,下次执行清理任务在 允许闲置时间-已经闲置时候后
idleConnectionCount > 0 -> {
// A connection will be ready to evict soon.
return keepAliveDurationNs - longestIdleDurationNs
}
//存在使用中的连接,下次清理在 允许闲置时间后
inUseConnectionCount > 0 -> {
// All connections are in use. It'll be at least the keep alive duration 'til we run
// again.
return keepAliveDurationNs
}
else -> {
// No connections, idle or in use.
return -1
}
}
}
}
In the code above, RealConnectionPool keeps all of its connections in a dedicated collection, the connections queue. At comment 1, when put() adds a connection to the pool, the connection is first placed into that collection and a cleanup task is then scheduled on cleanupQueue; this task runs periodically and prunes the pool by calling the cleanup() function (a sketch of this self-rescheduling loop appears after the cleanup() walkthrough below). cleanup():
RealConnectionPool
/**
* Performs maintenance on this pool, evicting the connection that has been idle the longest if
* either it has exceeded the keep alive limit or the idle connections limit.
*
* Returns the duration in nanoseconds to sleep until the next scheduled call to this method.
* Returns -1 if no further cleanups are required.
*/
fun cleanup(now: Long): Long {
var inUseConnectionCount = 0
var idleConnectionCount = 0
var longestIdleConnection: RealConnection? = null
var longestIdleDurationNs = Long.MIN_VALUE
// Find either a connection to evict, or the time that the next eviction is due.
for (connection in connections) {
synchronized(connection) {
// If the connection is in use, keep searching.
// 记录正在使用与已经闲置的连接数
if (pruneAndGetAllocationCount(connection, now) > 0) {
inUseConnectionCount++
} else {
idleConnectionCount++
// 记录最长闲置时间的连接longestIdleConnection 计算并记录闲置时间最长的那个连接
// If the connection is ready to be evicted, we're done.
val idleDurationNs = now - connection.idleAtNs
if (idleDurationNs > longestIdleDurationNs) {
longestIdleDurationNs = idleDurationNs
longestIdleConnection = connection
} else {
Unit
}
}
}
}
when {
//最长闲置时间的连接超过了允许闲置时间 或者 闲置数量超过允许数量,清理此连接
longestIdleDurationNs >= this.keepAliveDurationNs
|| idleConnectionCount > this.maxIdleConnections -> {
// We've chosen a connection to evict. Confirm it's still okay to be evict, then close it.
val connection = longestIdleConnection!!
synchronized(connection) {
if (connection.calls.isNotEmpty()) return 0L // No longer idle.
if (connection.idleAtNs + longestIdleDurationNs != now) return 0L // No longer oldest.
connection.noNewExchanges = true
connections.remove(longestIdleConnection)
}
connection.socket().closeQuietly()
if (connections.isEmpty()) cleanupQueue.cancelAll()
// Clean up again immediately.
return 0L
}
//存在闲置连接,下次执行清理任务在 允许闲置时间-已经闲置时候后
idleConnectionCount > 0 -> {
// A connection will be ready to evict soon.
return keepAliveDurationNs - longestIdleDurationNs
}
//存在使用中的连接,下次清理在 允许闲置时间后
inUseConnectionCount > 0 -> {
// All connections are in use. It'll be at least the keep alive duration 'til we run
// again.
return keepAliveDurationNs
}
else -> {
// No connections, idle or in use.
return -1
}
}
}
In the code above, cleanup() first walks every connection in the pool, counting the connections that are in use and the connections that are idle; for each idle connection it computes how long it has been idle and remembers the one that has been idle the longest (that is the connection that will be evicted first). If that longest-idle connection has exceeded the allowed keep-alive time, or the number of idle connections exceeds the allowed maximum, the connection is removed from the pool, its socket is closed, and 0 is returned so that cleanup runs again immediately. If there are idle connections but none has expired yet, cleanup() returns the remaining time until the longest-idle connection will expire (allowed keep-alive time minus its current idle time), and the next cleanup is scheduled after that interval; for example, with OkHttp's default 5-minute keep-alive, a connection that has been idle for 3 minutes makes cleanup() return the nanosecond equivalent of 2 minutes. If the pool holds only in-use connections, the next cleanup is scheduled a full keep-alive period later; and if the pool is empty, -1 is returned and the cleanup task stops. A sketch of how this return value drives the rescheduling of the cleanup task follows.
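To make the put()/cleanup() interaction concrete, the following is a minimal, self-contained sketch of the scheduling loop described above. It is not OkHttp's actual TaskQueue/Task implementation: the names SimpleConnectionPool, FakeConnection and scheduleCleanup are made up for illustration, and a plain daemon thread stands in for OkHttp's task runner. What it shows is only the contract between the two methods: the task re-runs cleanup() after whatever delay cleanup() returns, cleans again immediately when it returns 0, and stops when it returns -1.
import java.util.concurrent.ConcurrentLinkedQueue
import kotlin.concurrent.thread

// Hypothetical stand-in for RealConnection: only the fields this sketch's cleanup() cares about.
class FakeConnection(var idleAtNs: Long, var inUse: Boolean = false)

class SimpleConnectionPool(
    private val maxIdleConnections: Int = 5,
    private val keepAliveDurationNs: Long = 5L * 60 * 1_000_000_000L // 5 minutes, like OkHttp's default
) {
    private val connections = ConcurrentLinkedQueue<FakeConnection>()
    @Volatile private var cleanupRunning = false

    // Mirrors RealConnectionPool.put(): store the connection, then make sure a cleanup loop is running.
    fun put(connection: FakeConnection) {
        connections.add(connection)
        scheduleCleanup()
    }

    private fun scheduleCleanup() {
        if (cleanupRunning) return
        cleanupRunning = true
        thread(isDaemon = true) {
            while (true) {
                val waitNs = cleanup(System.nanoTime())
                when {
                    waitNs == -1L -> { cleanupRunning = false; return@thread } // nothing left to clean, stop
                    waitNs > 0L -> Thread.sleep(waitNs / 1_000_000L, (waitNs % 1_000_000L).toInt())
                    // waitNs == 0L: a connection was just evicted, loop and clean again immediately
                }
            }
        }
    }

    // Same decision structure as RealConnectionPool.cleanup(), heavily simplified.
    fun cleanup(now: Long): Long {
        var inUseCount = 0
        var idleCount = 0
        var longestIdle: FakeConnection? = null
        var longestIdleNs = Long.MIN_VALUE
        for (c in connections) {
            if (c.inUse) { inUseCount++; continue }
            idleCount++
            val idleNs = now - c.idleAtNs
            if (idleNs > longestIdleNs) { longestIdleNs = idleNs; longestIdle = c }
        }
        return when {
            longestIdleNs >= keepAliveDurationNs || idleCount > maxIdleConnections -> {
                connections.remove(longestIdle) // evict the longest-idle connection, then clean again at once
                0L
            }
            idleCount > 0 -> keepAliveDurationNs - longestIdleNs // wake up when the oldest idle connection expires
            inUseCount > 0 -> keepAliveDurationNs                // everything is busy, check again after one keep-alive
            else -> -1L                                          // pool is empty, stop the loop
        }
    }
}
In the real implementation the cleanup task's runOnce() essentially returns cleanup(System.nanoTime()) back to OkHttp's task runner, which uses that value as the delay before the next run.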
The above covered how put() adds a connection to the pool; next is how a connection is taken out of the pool.
A connection is obtained from the pool through the callAcquirePooledConnection function. Note that its return type is Boolean: when it returns true, a reusable connection was found and has been attached to the RealCall passed in as a parameter; when it returns false, the pool had no eligible connection. A sketch of how a caller uses this Boolean result follows the code below.
RealConnectionPool
/**
* Attempts to acquire a recycled connection to [address] for [call]. Returns true if a connection
* was acquired.
*
* If [routes] is non-null these are the resolved routes (ie. IP addresses) for the connection.
* This is used to coalesce related domains to the same HTTP/2 connection, such as `square.com`
* and `square.ca`.
*/
fun callAcquirePooledConnection(
address: Address,
call: RealCall,
routes: List<Route>?,
requireMultiplexed: Boolean
): Boolean {
for (connection in connections) {
synchronized(connection) {
if (requireMultiplexed && !connection.isMultiplexed) return@synchronized
if (!connection.isEligible(address, routes)) return@synchronized
call.acquireConnectionNoEvents(connection)
return true
}
}
return false
}
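For context, the following sketch shows how a caller consumes that Boolean result; it roughly mirrors the lookup-or-connect pattern used by ExchangeFinder.findConnection(), but all of the types here (Address, Call, Connection, Pool) are simplified stand-ins invented for the example, not OkHttp's real internal classes.
import java.util.concurrent.ConcurrentLinkedQueue

// Simplified stand-ins for OkHttp's internal Address / RealCall / RealConnection.
class Address(val host: String)
class Connection(val host: String, val calls: MutableList<Call> = mutableListOf())
class Call(val address: Address) { var connection: Connection? = null }

class Pool {
    private val connections = ConcurrentLinkedQueue<Connection>()

    // Mirrors callAcquirePooledConnection(): on success it attaches the connection to the call and returns true.
    fun callAcquirePooledConnection(address: Address, call: Call): Boolean {
        for (connection in connections) {
            synchronized(connection) {
                if (connection.host != address.host) return@synchronized // not eligible, keep looking
                connection.calls.add(call)
                call.connection = connection
                return true
            }
        }
        return false
    }

    fun put(connection: Connection) { connections.add(connection) }
}

// Roughly the lookup-or-connect pattern: try the pool first, only dial a new connection when nothing is reusable.
fun findConnection(pool: Pool, call: Call): Connection {
    if (pool.callAcquirePooledConnection(call.address, call)) {
        return call.connection!! // reuse: the pool already attached an existing connection to this call
    }
    val fresh = Connection(call.address.host) // otherwise create (and, in OkHttp, actually connect) a new one
    pool.put(fresh)
    call.connection = fresh
    return fresh
}

fun main() {
    val pool = Pool()
    val address = Address("square.com")
    val first = findConnection(pool, Call(address))  // pool is empty, so a new connection is created
    val second = findConnection(pool, Call(address)) // this time the pooled connection is reused
    println(first === second) // prints: true
}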
(5). CallServerInterceptor (the request-server interceptor)
The ConnectInterceptor is responsible for establishing the Socket connection to the server and for creating the object that holds the input and output streams to the server (called HttpStream in older OkHttp versions; in the code below it is the Exchange obtained from realChain.exchange()). The CallServerInterceptor that follows then uses this object to exchange data with the server, which is where the real network I/O happens: it writes the HTTP request headers and body, and reads the response headers and body. A small client-side example related to the "Expect: 100-continue" branch appears after the code below.
public final class CallServerInterceptor implements Interceptor {
private final boolean forWebSocket;
public CallServerInterceptor(boolean forWebSocket) {
this.forWebSocket = forWebSocket;
}
@Override public Response intercept(Chain chain) throws IOException {
RealInterceptorChain realChain = (RealInterceptorChain) chain;
//ConnectInterceptor拦截器传入的exchange
Exchange exchange = realChain.exchange();
Request request = realChain.request();
long sentRequestMillis = System.currentTimeMillis();
// 将请求头写入到socket中,底层通过ExchangeCodec协议类
// (对应Http1ExchangeCodec和Http2ExchangeCodec)
// 最终是通过Okio来实现的,具体实现在RealBufferedSink这个类里面
exchange.writeRequestHeaders(request);
// 如果有body的话,通过Okio将body写入到socket中,用于发送给服务器
boolean responseHeadersStarted = false;
Response.Builder responseBuilder = null;
//含body的请求
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
// If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
// Continue" response before transmitting the request body. If we don't get that, return
// what we did get (such as a 4xx response) without ever transmitting the request body.
// 若请求头包含 "Expect: 100-continue" ,
// 就会等服务端返回含有 "HTTP/1.1 100 Continue"的响应,然后再发送请求body.
// 如果没有收到这个响应(例如收到的响应是4xx),那就不发送body了。
if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
exchange.flushRequest();
responseHeadersStarted = true;
exchange.responseHeadersStart();
//读取响应头
responseBuilder = exchange.readResponseHeaders(true);
}
// responseBuilder为null说明服务端返回了100,也就是可以继续发送body了
// 底层通过ExchangeCodec协议类(对应Http1ExchangeCodec和Http2ExchangeCodec)来读取返回头header的数据
if (responseBuilder == null) {
if (request.body().isDuplex()) {//默认是false不会进入
// Prepare a duplex body so that the application can send a request body later.
exchange.flushRequest();
BufferedSink bufferedRequestBody = Okio.buffer(
exchange.createRequestBody(request, true));
request.body().writeTo(bufferedRequestBody);
} else {
// Write the request body if the "Expect: 100-continue" expectation was met.
// 满足了 "Expect: 100-continue" ,写请求body
BufferedSink bufferedRequestBody = Okio.buffer(
exchange.createRequestBody(request, false));
request.body().writeTo(bufferedRequestBody);
bufferedRequestBody.close();
}
} else {
//没有满足 "Expect: 100-continue" ,请求发送结束
exchange.noRequestBody();
if (!exchange.connection().isMultiplexed()) {
// If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
// from being reused. Otherwise we're still obligated to transmit the request body to
// leave the connection in a consistent state.
exchange.noNewExchangesOnConnection();
}
}
} else {
//没有body,请求发送结束
exchange.noRequestBody();
}
//请求发送结束
if (request.body() == null || !request.body().isDuplex()) {
//真正将写到socket输出流的http请求数据发送。
exchange.finishRequest();
}
//回调 读响应头开始事件(如果上面没有)
if (!responseHeadersStarted) {
exchange.responseHeadersStart();
}
//读响应头(如果上面没有)
if (responseBuilder == null) {
responseBuilder = exchange.readResponseHeaders(false);
}
// 创建返回体Response
Response response = responseBuilder
.request(request)
.handshake(exchange.connection().handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
int code = response.code();
if (code == 100) {
// server sent a 100-continue even though we did not request one.
// try again to read the actual response
// 服务端又返回了个100,就再尝试获取真正的响应
response = exchange.readResponseHeaders(false)
.request(request)
.handshake(exchange.connection().handshake())
.sentRequestAtMillis(sentRequestMillis)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
code = response.code();
}
//回调读响应头结束
exchange.responseHeadersEnd(response);
//这里就是获取响应body了
if (forWebSocket && code == 101) {
// Connection is upgrading, but we need to ensure interceptors see a non-null response body.
response = response.newBuilder()
.body(Util.EMPTY_RESPONSE)
.build();
} else {
// 读取返回体body的数据
// 底层通过ExchangeCodec协议类(对应Http1ExchangeCodec和Http2ExchangeCodec)
response = response.newBuilder()
.body(exchange.openResponseBody(response))
.build();
}
//请求头中Connection是close,表示请求完成后要关闭连接
if ("close".equalsIgnoreCase(response.request().header("Connection"))
|| "close".equalsIgnoreCase(response.header("Connection"))) {
exchange.noNewExchangesOnConnection();
}
//204(无内容)、205(重置内容),body应该是空
if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
throw new ProtocolException(
"HTTP " + code + " had non-zero Content-Length: " + response.body().contentLength());
}
return response;
}
}
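To connect the 100-continue branch above back to application code, here is a small, hedged example of a request that carries the "Expect: 100-continue" header. The URL is a placeholder and whether an interim 100 response is actually sent depends on the server, but with this header set, CallServerInterceptor flushes the headers first, reads the early response, and only transmits the body when responseBuilder comes back null (i.e. the server agreed to continue).
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

fun main() {
    val client = OkHttpClient()

    // Placeholder endpoint; any server that accepts a POST body will do.
    val body = """{"hello":"world"}""".toRequestBody("application/json".toMediaType())

    val request = Request.Builder()
        .url("https://example.com/upload")
        .header("Expect", "100-continue") // opts in to the 100-continue branch of CallServerInterceptor
        .post(body)
        .build()

    client.newCall(request).execute().use { response ->
        // If the server rejected the request up front (e.g. a 4xx), the body was never transmitted.
        println("code=${response.code}, protocol=${response.protocol}")
    }
}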