Volley Source Code Analysis

When it comes to networking on Android, most of us have used Volley. Volley is Google's networking framework; internally it is simply a further wrapper around HttpUrlConnection and HttpClient. It makes network requests easy and is especially well suited to frequent requests with small payloads. Usage is trivial: three statements are enough to issue a request. But knowing how to use it is not enough, so let's walk through how Volley is implemented. First, the basic usage:

// Create the request queue
RequestQueue queue = Volley.newRequestQueue(getApplication());
// Create the request
StringRequest request = new StringRequest(path, new Response.Listener<String>() {
    @Override
    public void onResponse(String response) {
        // Request succeeded
    }
}, new Response.ErrorListener() {
    @Override
    public void onErrorResponse(VolleyError error) {
        // Request failed
    }
});
// Add the request to the queue
queue.add(request);
That is all it takes to make a network request with Volley. Let's start with the first statement: Volley.newRequestQueue() returns a RequestQueue object. Following the code:

public static RequestQueue newRequestQueue(Context context) {
        return newRequestQueue(context, null);
    }
public static RequestQueue newRequestQueue(Context context, HttpStack stack)
    {
    	return newRequestQueue(context, stack, -1);
    }
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);
        
        RequestQueue queue;
        if (maxDiskCacheBytes <= -1)
        {
        	// No maximum size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
        	// Disk cache size specified
        	queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start();

        return queue;
    }
Notice that there are also a second parameter, HttpStack, and a third, an int. HttpStack is a further abstraction over the underlying HTTP API; to plug OkHttp into Volley you only need to wrap it in an HttpStack (for HTTPS support, see: http://blog.csdn.net/cj_286/article/details/55195272). maxDiskCacheBytes is the disk cache size in bytes; if not specified, it defaults to 5 * 1024 * 1024. When the SDK version is 9 or higher, the HttpStack wraps HttpUrlConnection (HurlStack); below 9 it wraps HttpClient (HttpClientStack). Network network = new BasicNetwork(stack); then wraps the HttpStack into a Network object.

public interface Network {
    /**
     * Performs the specified request.
     * @param request Request to process
     * @return A {@link NetworkResponse} with data and caching metadata; will never be null
     * @throws VolleyError on errors
     */
    public NetworkResponse performRequest(Request<?> request) throws VolleyError;
}
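As a quick usage sketch of the three-argument overload above, an explicit HttpStack and a custom disk cache size can be passed in directly (HurlStack is just an example; a custom stack wrapping OkHttp would be supplied the same way):

RequestQueue queue = Volley.newRequestQueue(
        getApplication(),
        new HurlStack(),      // or a custom HttpStack, e.g. one wrapping OkHttp
        10 * 1024 * 1024);    // 10 MB disk cache instead of the 5 MB default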
With the networking layer in place, the next step is creating a request queue that will hold the Requests: queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }
public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;       // response cache, here new DiskBasedCache(cacheDir)
        mNetwork = network;   // the network layer that performs the actual HTTP requests
        mDispatchers = new NetworkDispatcher[threadPoolSize]; // network dispatcher threads; the pool defaults to 4
        mDelivery = delivery; // delivers responses, switching from the worker thread back to the main thread
    }
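If the defaults don't fit, a RequestQueue can also be assembled by hand through these constructors; a minimal sketch (the cache directory name "volley" and the pool size 8 are arbitrary choices for illustration):

Cache cache = new DiskBasedCache(new File(context.getCacheDir(), "volley"), 5 * 1024 * 1024);
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(cache, network, 8); // 8 network dispatcher threads instead of 4
queue.start();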
mDelivery = new ExecutorDelivery(new Handler(Looper.getMainLooper())) is responsible for delivering response data, switching from the worker thread back to the UI thread. Under the hood it uses a Handler.

public class ExecutorDelivery implements ResponseDelivery {
    /** Used for posting responses, typically to the main thread. */
    private final Executor mResponsePoster;

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }
......
}
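The same pattern in isolation (plain android.os.Handler/Looper, nothing Volley-specific): a Runnable posted to a Handler built on the main Looper always runs on the UI thread, no matter which thread calls post():

Handler mainHandler = new Handler(Looper.getMainLooper());
mainHandler.post(new Runnable() {
    @Override
    public void run() {
        // This runs on the main/UI thread.
    }
});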
queue.start(); starts the dispatcher threads.

public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }
Before starting the threads, stop() is called first to make sure none are already running.

public void stop() {
        if (mCacheDispatcher != null) {
            mCacheDispatcher.quit();
        }
        for (int i = 0; i < mDispatchers.length; i++) {
            if (mDispatchers[i] != null) {
                mDispatchers[i].quit();
            }
        }
    }
mCacheDispatcher and the mDispatchers are both Threads (CacheDispatcher extends Thread, NetworkDispatcher extends Thread), used respectively for reading the cache and performing network requests. mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery); mCacheDispatcher.start(); starts the cache thread. Let's look at its run() method:

public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        Request<?> request;
        while (true) {
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mCacheQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
            }
        }
    }
mCache.initialize(); initializes the cache, scanning the local cache files and loading their entries into memory.
public synchronized void initialize() {
        if (!mRootDirectory.exists()) {
            if (!mRootDirectory.mkdirs()) {
                VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
            }
            return;
        }

        File[] files = mRootDirectory.listFiles();
        if (files == null) {
            return;
        }
        for (File file : files) {
            BufferedInputStream fis = null;
            try {
                fis = new BufferedInputStream(new FileInputStream(file));
                CacheHeader entry = CacheHeader.readHeader(fis);
                entry.size = file.length();
                putEntry(entry.key, entry);
            } catch (IOException e) {
                if (file != null) {
                   file.delete();
                }
            } finally {
                try {
                    if (fis != null) {
                        fis.close();
                    }
                } catch (IOException ignored) { }
            }
        }
    }
mCacheQueue is a PriorityBlockingQueue<Request<?>>. PriorityBlockingQueue is a blocking queue: its take() method blocks, returning only when the queue contains data and otherwise waiting until an element arrives. Volley exploits exactly this property of Java's BlockingQueue to keep a complex problem simple: the cache thread keeps taking requests from the cache queue, and as soon as a Request is available it checks whether the request has been cancelled; if so, it skips to the next one, as the snippet below shows (a standalone sketch of the blocking take() behaviour follows right after it).

if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
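Here is that standalone sketch (not Volley code): the consumer thread parks inside take() until the producer puts an element into the queue, exactly the way CacheDispatcher and NetworkDispatcher wait on their queues.

import java.util.concurrent.PriorityBlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<String>();

        Thread consumer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // Blocks until an element becomes available.
                    String item = queue.take();
                    System.out.println("took: " + item);
                } catch (InterruptedException e) {
                    // interrupted -- time to quit, just like the dispatchers do
                }
            }
        });
        consumer.start();

        Thread.sleep(1000);         // the consumer is parked inside take() during this second
        queue.put("first request"); // unblocks the consumer
        consumer.join();
    }
}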

Next it checks whether the local cache contains an entry for this request. If not, the Request is put into the network queue (which, like the cache queue, is a PriorityBlockingQueue), and the thread moves on to the next Request.

Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
If there is a cache entry, check whether it has expired; if it has, the Request is put into the network queue as well.

if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
If it has not expired, the cached data is wrapped into a Response object:

Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
The response is then handed to the caller's success listener:

mDelivery.postResponse(request, response);
This calls ExecutorDelivery.postResponse(), which invokes the Request's success listener on the UI thread:

public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }
mResponsePoster.execute() posts the task to the UI thread through the Handler:

mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
The actual callback to the success listener happens in ExecutorDelivery's inner class ResponseDeliveryRunnable:

private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }
    }
It checks once more whether the request has been cancelled; then, if the request succeeded, it invokes the Request's success listener, otherwise the error listener:
if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }
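In StringRequest this boils down to forwarding the parsed result to the success listener passed to its constructor, roughly as follows (paraphrasing the Volley source; deliverError() lives in the base Request class and calls the ErrorListener the same way):

@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response); // the Response.Listener supplied in the constructor
}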
That completes the path for a request that hits the cache. What about when there is no cache? Recall that in CacheDispatcher's run() method, a cache miss or an expired entry puts the Request into mNetworkQueue. Going back to RequestQueue.start(): after starting the cache thread it also starts four network dispatcher threads, whose job is to keep taking Requests from mNetworkQueue and perform the actual network requests.

for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }

Let's analyze NetworkDispatcher's run() method:

public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }
request = mQueue.take(); takes a Request, then checks whether it has been cancelled; if not, the real network request is performed.

// Perform the network request.
NetworkResponse networkResponse = mNetwork.performRequest(request);
Network wraps the networking layer; its performRequest() calls the actual HTTP API and wraps the returned data into a NetworkResponse.
If the server returned 304 (Not Modified) and a response has already been delivered, there is no need to deliver a second, identical response:

if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }
Otherwise, the data returned from the network is wrapped into a Response:

Response<?> response = request.parseNetworkResponse(networkResponse);
If the request should be cached, the entry is written to the cache:

if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry); // getCacheKey() --> mMethod + ":" + mUrl (HTTP method + url)
                    request.addMarker("network-cache-written");
                }
The response is then delivered to the success listener:

// Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
From here on, the steps are the same as for the cached response:

public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }
 mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
That covers both the cache path and the network path, but this is only what Volley does internally after newRequestQueue() is called; at this point mCacheQueue and mNetworkQueue contain no Requests yet, because we have not added any. Next we create our own Request object:

StringRequest request = new StringRequest(path, new Response.Listener<String>() {
    @Override
    public void onResponse(String response) {

    }
}, new Response.ErrorListener() {
    @Override
    public void onErrorResponse(VolleyError error) {

    }
});
StringRequest is provided by Volley; we can also subclass Request to define our own request type.
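
For example, here is a minimal sketch of a hypothetical GsonRequest that parses the response body into an arbitrary type T (it assumes the Gson library is on the classpath); the two overridden methods are exactly the hooks the dispatcher and delivery code above calls into:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.ParseError;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;
import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

public class GsonRequest<T> extends Request<T> {
    private final Gson mGson = new Gson();
    private final Class<T> mClazz;
    private final Response.Listener<T> mListener;

    public GsonRequest(String url, Class<T> clazz, Response.Listener<T> listener,
                       Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mClazz = clazz;
        mListener = listener;
    }

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        // Runs on a NetworkDispatcher thread.
        try {
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(mGson.fromJson(json, mClazz),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        } catch (JsonSyntaxException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(T response) {
        // Runs on the main thread via ExecutorDelivery.
        mListener.onResponse(response);
    }
}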

The next step is adding the Request to the RequestQueue: queue.add(request);

public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this); // attach this RequestQueue to the request
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }
mCurrentRequests is a Set that stores the Requests currently being processed. Why keep them? For easier management: for example, to cancel all pending requests you can iterate this set and cancel each one.
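
A small usage sketch of that management: requests can be tagged, and RequestQueue.cancelAll(tag) walks the in-flight set and cancels every request carrying that tag (the tag value here is arbitrary):

request.setTag("MainActivity");   // any Object can be used as a tag
queue.add(request);

// later, e.g. in onStop():
queue.cancelAll("MainActivity");  // cancels all in-flight requests with this tag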

If a request is marked as non-cacheable, it is added directly to mNetworkQueue for an immediate network request, skipping mCacheQueue and the cache lookup entirely.

if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
mWaitingRequests is a Map<String, Queue<Request<?>>>. Why is it needed? Mainly to avoid executing identical network requests more than once: if the same request is issued three times, there is no point in actually hitting the network three times, since all three would return the same result. Say three identical requests arrive. For the first one, mWaitingRequests.containsKey(cacheKey) is false (nothing has been registered yet), so the else branch runs, registering the cache key in mWaitingRequests and adding the request to mCacheQueue:
mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
When the second request arrives, mWaitingRequests.containsKey(cacheKey) returns true, so the if branch runs and the second request is added to a waiting Queue; the third request ends up in the same waiting queue. So when do the waiting requests get executed? Remember request.finish(String), which is called once a request completes in NetworkDispatcher; it in turn calls mRequestQueue.finish(this). Looking at that method: it takes the requests out of the waiting queue and adds them to the cache queue, because by this point the response for this request has already been cached, so the three identical requests never trigger redundant network calls.

<T> void finish(Request<T> request) {
        // Remove from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request); // remove the request from the set of in-flight requests
        }
        synchronized (mFinishedListeners) {
          for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
          }
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey); // remove this request's entry from the waiting map
                if (waitingRequests != null) {
                    if (VolleyLog.DEBUG) {
                        VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                                waitingRequests.size(), cacheKey);
                    }
                    // Process all queued up requests. They won't be considered as in flight, but
                    // that's not a problem as the cache has been primed by 'request'.
                    mCacheQueue.addAll(waitingRequests); // move the waiting requests to the cache queue
                }
            }
        }
    }

That concludes the walk-through of Volley's request handling. In essence, the cache thread and the network threads keep taking Requests from mCacheQueue and mNetworkQueue. Because both are blocking queues, a take only returns when data is available; otherwise the thread waits until something arrives. Volley cleverly exploits this property of Java's BlockingQueue to implement the adding and taking of Requests elegantly and with very little code. The analysis diagram is attached below:

https://img-my.csdn.net/uploads/201703/13/1489417007_3822.png

