Volley Internals: The Network Request Layer

Preface

Google released Volley back in 2013. As someone who enjoys using this networking framework, it's about time I dug into how it actually works.

Initialization

Everyone knows that Volley is initialized by calling Volley.newRequestQueue(), so let's trace that call through the source code.


    /**
     * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
     * You may set a maximum size of the disk cache in bytes.
     *
     * @param context A {@link Context} to use for creating the cache dir.
     * @param stack An {@link HttpStack} to use for the network, or null for default.
     * @param maxDiskCacheBytes the maximum size of the disk cache, in bytes. Use -1 for default size.
     * @return A started {@link RequestQueue} instance.
     */
    public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                // HurlStack is essentially a wrapper around HttpURLConnection
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                // HttpClientStack wraps Apache HttpClient
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue;
        if (maxDiskCacheBytes <= -1)
        {
            // No maximum size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        }
        else
        {
            // Disk cache size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }
        // Note: this starts the queue and its dispatcher threads
        queue.start();

        return queue;
    }

The key points in the code above are:
1. A BasicNetwork is created. The HTTP stack it wraps is chosen by SDK version; this is what the framework actually uses to talk to the network, and underneath it is still either HttpURLConnection (API 9 and above) or Apache HttpClient (older versions).
2. A RequestQueue is created. This is the queue that dispatches requests; its constructor sets the number of network dispatcher threads to 4 by default and also creates an ExecutorDelivery, the interface responsible for handing responses back to the main thread (see the condensed constructor chain below).
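
For reference, the constructor chain looks roughly like this (condensed from the Volley source of that era; the names match the library, but details may vary between versions):

    // Condensed from RequestQueue's constructors; field names match the Volley source.
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        // The ExecutorDelivery wraps a Handler bound to the main looper,
        // so parsed responses are always delivered on the UI thread.
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

Recall from newRequestQueue() above that queue.start() is called once the queue is built; here is what start() does: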

    /**
     * Starts the dispatchers in this queue.
     */
    public void start() {
        // stop() quits any previously created cache and network dispatcher threads
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        // Four dispatcher threads by default; note that no thread pool is used here
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }
  1. CacheDispatcher and NetworkDispatcher both extend Thread, so a total of five threads are started here: one cache thread and four network worker threads.
  2. Both are handed the mNetworkQueue parameter, whose concrete type is PriorityBlockingQueue. The BlockingQueue Javadoc notes:

    {@code BlockingQueue} implementations are thread-safe. All
    queuing methods achieve their effects atomically using internal
    locks or other forms of concurrency control.

As that excerpt shows, BlockingQueue is thread-safe, which is why a plain Queue is not used directly. RequestQueue also keeps mCurrentRequests, a plain HashSet that records every request currently in the queue so that cancelAll() can find them; since HashSet is not a thread-safe collection, every access to it is wrapped in a synchronized (mCurrentRequests) block instead.
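
For example, cancelAll() simply locks the set while it iterates (lightly trimmed from the Volley source):

    /** Cancels all requests in this queue for which the given filter applies. */
    public void cancelAll(RequestFilter filter) {
        synchronized (mCurrentRequests) {
            for (Request<?> request : mCurrentRequests) {
                if (filter.apply(request)) {
                    request.cancel();
                }
            }
        }
    }
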
With that, Volley is fully set up and ready to take requests.

Making a request

We hand our request over to Volley through RequestQueue.add():

    /**
     * Adds a Request to the dispatch queue.
     * @param request The request to service
     * @return The passed-in request
     */
    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        // The sequence number comes from an AtomicInteger increment
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            // Once added here, the request will be picked up by a network dispatcher thread.
            // By default shouldCache() returns true, so the cache queue is normally not skipped.
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            // A request whose cache key is not yet in flight goes into the cache queue;
            // duplicates of an in-flight request are parked in the waiting map.
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

Note that a request is not dropped straight into mNetworkQueue for the worker threads to execute; it goes into mCacheQueue first, and only when the cache lookup misses is it forwarded to mNetworkQueue for an actual network request.
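
The hand-off happens in CacheDispatcher.run(); the loop below is a trimmed sketch of it (the expired-entry and "refresh needed" branches are simplified, so treat it as illustrative rather than a verbatim copy):

    while (true) {
        try {
            // Blocks until a request arrives on the cache queue.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from the cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null || entry.isExpired()) {
                // Cache miss (or stale entry): forward to the network dispatchers.
                request.addMarker(entry == null ? "cache-miss" : "cache-hit-expired");
                mNetworkQueue.put(request);
                continue;
            }

            // Cache hit: parse the cached data and deliver it without touching the network.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            mDelivery.postResponse(request, response);
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
        }
    }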

Executing the request

Now let's look at the worker thread's polling loop, NetworkDispatcher.run():

        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                // The core network call; mNetwork is the BasicNetwork created during initialization
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                // Parse the response here, on the worker thread, via the parseNetworkResponse()
                // override of the concrete Request subclass
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    // Note: the response is written to the cache here
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                // Hand the response off to the delivery, which posts it to the main thread's Handler
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }

Once the network call returns a response, mDelivery.postResponse() hands it back to the Request and eventually invokes the concrete request class's deliverResponse() method, which in turn calls the listener's onResponse() callback. The elegant part is that ExecutorDelivery performs the cross-thread hand-off, switching from the worker thread to the UI thread.
The delivery mechanism is simply an Executor that wraps a Handler and posts each delivery Runnable onto it:

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }
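
Volley's own StringRequest is the simplest concrete example of this parse-then-deliver contract (abridged here):

    public class StringRequest extends Request<String> {
        private final Listener<String> mListener;

        public StringRequest(int method, String url, Listener<String> listener,
                ErrorListener errorListener) {
            super(method, url, errorListener);
            mListener = listener;
        }

        @Override
        protected Response<String> parseNetworkResponse(NetworkResponse response) {
            // Runs on the worker thread (NetworkDispatcher).
            String parsed;
            try {
                parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
            } catch (UnsupportedEncodingException e) {
                parsed = new String(response.data);
            }
            return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
        }

        @Override
        protected void deliverResponse(String response) {
            // Runs on the main thread, posted there by ExecutorDelivery.
            mListener.onResponse(response);
        }
    }

A caller only needs to build one of these with a listener and hand it to RequestQueue.add(); everything from the cache lookup to the main-thread callback is handled by the machinery described above.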

Summary

The overall picture of Volley's networking layer should now be clear: a handful of dispatcher threads polling blocking queues gives the effect of parallel execution, thread safety and inter-thread communication are handled exactly where they are needed, and putting the cache queue in front of the network queue makes requests more efficient.
One small question remains: would using a thread pool here give better concurrency? I'll have to analyze that properly. I also plan to keep digging into NetworkImageView, and I'd welcome any discussion.
