Volley Source Code Analysis

Request<T>: the base class for network requests

/**
 * Base class for network requests: every concrete Request must extend it
 * and implement the two methods below.
 */
public abstract class Request<T> implements Comparable<Request<T>> {
    /**
     * Convert the raw data returned from the network into the data you expect,
     * wrapped in a Response<T>.
     */
    abstract protected Response<T> parseNetworkResponse(NetworkResponse response);

    /**
     * Handle the parsed data (for example, deliver it to a listener).
     */
    abstract protected void deliverResponse(T response);

    // ...
}

Network: the networking component

/**
 * Networking component: executes a Request and returns the raw NetworkResponse.
 */
public interface Network {

    public NetworkResponse performRequest(Request<?> request) throws VolleyError;
}

Cache: the caching component, declaring abstract read/write methods

public interface Cache {

    public Entry get(String key);
    public void put(String key, Entry entry);

    // ...
}
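
To make the contract concrete, here is a minimal sketch of a custom in-memory Cache. It is only an illustration, not part of Volley: the default implementation is DiskBasedCache, the class name SimpleMemoryCache is made up, and the extra methods stubbed out at the bottom (initialize(), invalidate(), remove(), clear()) belong to the full Cache interface but were omitted from the excerpt above.

import java.util.HashMap;
import java.util.Map;

import com.android.volley.Cache;

public class SimpleMemoryCache implements Cache {

    private final Map<String, Entry> mEntries = new HashMap<String, Entry>();

    @Override
    public synchronized Entry get(String key) {
        return mEntries.get(key);
    }

    @Override
    public synchronized void put(String key, Entry entry) {
        mEntries.put(key, entry);
    }

    // The remaining methods of the full interface, stubbed out for brevity.
    @Override
    public void initialize() { }

    @Override
    public synchronized void invalidate(String key, boolean fullExpire) {
        // A real implementation would adjust the entry's expiry fields here.
    }

    @Override
    public synchronized void remove(String key) {
        mEntries.remove(key);
    }

    @Override
    public synchronized void clear() {
        mEntries.clear();
    }
}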

RequestQueue: where every Request is queued and dispatched.

public class RequestQueue {

    /** The cache triage queue: requests waiting to be checked against the cache. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    /** Requests confirmed to have no usable cache entry, waiting to be fetched over the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

    /** Default number of network dispatcher threads. */
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    /** Cache implementation; the default is DiskBasedCache. */
    private final Cache mCache;

    /** Network implementation; the default is BasicNetwork. */
    private final Network mNetwork;

    /** Delivers response data or errors for Requests; the default is ExecutorDelivery. */
    private final ResponseDelivery mDelivery;

    /** The network dispatcher threads; the array is sized to the thread pool size. */
    private NetworkDispatcher[] mDispatchers;

    /** The cache dispatcher thread. */
    private CacheDispatcher mCacheDispatcher;

    /**
     * Constructor: note that the concrete Cache and Network implementations,
     * the thread pool size and the ResponseDelivery are all passed in from outside.
     */
    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

As we all know, using Volley takes three steps (a minimal usage sketch follows the list):

1. Create a RequestQueue.

2. Create the concrete Request object you need.

3. Call RequestQueue.add(Request request) to put it on the request queue.
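
A minimal sketch of those three steps, just to anchor the analysis (the URL is made up and context is assumed to be an Activity or Application Context):

// Step 1: create the queue (this uses the default Cache/Network described below).
RequestQueue queue = Volley.newRequestQueue(context);

// Step 2: build a concrete Request; StringRequest takes a success listener and an error listener.
StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/data",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Runs on the main thread (see the delivery analysis below).
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Handle the failure.
            }
        });

// Step 3: enqueue it; from here on the dispatchers take over.
queue.add(request);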


For step 1, unless you have a special requirement you don't usually new up the RequestQueue yourself; you call Volley.newRequestQueue(Context context) instead. Internally, every newRequestQueue overload ends up in the following method:

public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);
        
        RequestQueue queue;
        if (maxDiskCacheBytes <= -1) {
            // No maximum size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        } else {
            // Disk cache size specified
            queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
        }

        queue.start();

        return queue;
    }
    

As you can see, this method does four things (HttpStack is covered in the addendum at the end):

1. Create the Network implementation, using the default BasicNetwork.

2. Create the Cache implementation, using the default DiskBasedCache.

3. new up the RequestQueue.

4. Call RequestQueue.start() and return the queue.

Next, let's step into RequestQueue.start():

public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }
Clearly, start() does two main things:

1. Instantiate a CacheDispatcher, passing it the cache triage queue mCacheQueue, the network queue mNetworkQueue, the cache mCache and the delivery mDelivery, then call start() on it.

2. Likewise, instantiate each element of the NetworkDispatcher array with the same collaborators and start it.

Remember that CacheDispatcher and NetworkDispatcher are subclasses of Thread, so starting them means the real work is in their run() methods.


Let's look at CacheDispatcher first:

@Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        Request<?> request;
        while (true) {
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a Request from the cache triage queue (blocks until one is available)
                request = mCacheQueue.take();
            } catch (InterruptedException e) {
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("cache-queue-take");

                // If the request has been canceled, finish it and move on
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Look up the cache using this request's cache key
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    // Cache miss: put the request on the network queue so it will be fetched over the network
                    request.addMarker("cache-miss");
                    mNetworkQueue.put(request);
                    continue;
                }

                // Cache hit, but the entry has expired: attach it and put the request on the network queue too
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // Reaching this point means the cached entry is usable
                request.addMarker("cache-hit");
                // Wrap the cached data in a NetworkResponse and let the request parse it into a Response
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit: hand the response to the delivery
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
            }
        }
    }

The key points are annotated in the comments above. In short, run() does the following:

1. Loop forever, taking Requests from the cache queue.

2. For each Request taken, check the various conditions to decide whether its cached entry is usable.

3. If it is usable, build a Response from the cache and hand it to the Request and on to the delivery.

4. If it is not, put the Request on the network queue to be fetched over the network.


At this point you can already guess what NetworkDispatcher does: it runs an endless loop that takes Requests from the network queue and, for each one it gets, fetches the data over the network.

NetworkDispatcher.run:

@Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // Take a Request from the network queue (blocks until one is available)
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request has been canceled, finish it and move on
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the request via the Network implementation and get the raw NetworkResponse
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Let the request parse the raw NetworkResponse into a Response
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    // Importantly, also write the cache entry so the next identical request can be served from cache
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Finally, deliver the response
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

That completes the analysis of step 1: the above is the whole flow of creating a RequestQueue.

For step 2, constructing the Request you need, Volley already ships with a variety of Requests such as StringRequest, ImageRequest and JsonRequest, and each of them customizes its own parseNetworkResponse(). Here is the source:

StringRequest:

@Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

ImageRequest:

 @Override
    protected Response<Bitmap> parseNetworkResponse(NetworkResponse response) {
        // Serialize all decode on a global lock to reduce concurrent heap usage.
        synchronized (sDecodeLock) {
            try {
                return doParse(response);
            } catch (OutOfMemoryError e) {
                VolleyLog.e("Caught OOM for %d byte image, url=%s", response.data.length, getUrl());
                return Response.error(new ParseError(e));
            }
        }
    }

    /**
     * The real guts of parseNetworkResponse. Broken out for readability.
     */
    private Response<Bitmap> doParse(NetworkResponse response) {
        byte[] data = response.data;
        BitmapFactory.Options decodeOptions = new BitmapFactory.Options();
        Bitmap bitmap = null;
        if (mMaxWidth == 0 && mMaxHeight == 0) {
            decodeOptions.inPreferredConfig = mDecodeConfig;
            bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, decodeOptions);
        } else {
            // If we have to resize this image, first get the natural bounds.
            decodeOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeByteArray(data, 0, data.length, decodeOptions);
            int actualWidth = decodeOptions.outWidth;
            int actualHeight = decodeOptions.outHeight;

            // Then compute the dimensions we would ideally like to decode to.
            int desiredWidth = getResizedDimension(mMaxWidth, mMaxHeight,
                    actualWidth, actualHeight, mScaleType);
            int desiredHeight = getResizedDimension(mMaxHeight, mMaxWidth,
                    actualHeight, actualWidth, mScaleType);

            // Decode to the nearest power of two scaling factor.
            decodeOptions.inJustDecodeBounds = false;
            // TODO(ficus): Do we need this or is it okay since API 8 doesn't support it?
            // decodeOptions.inPreferQualityOverSpeed = PREFER_QUALITY_OVER_SPEED;
            decodeOptions.inSampleSize =
                findBestSampleSize(actualWidth, actualHeight, desiredWidth, desiredHeight);
            Bitmap tempBitmap =
                BitmapFactory.decodeByteArray(data, 0, data.length, decodeOptions);

            // If necessary, scale down to the maximal acceptable size.
            if (tempBitmap != null && (tempBitmap.getWidth() > desiredWidth ||
                    tempBitmap.getHeight() > desiredHeight)) {
                bitmap = Bitmap.createScaledBitmap(tempBitmap,
                        desiredWidth, desiredHeight, true);
                tempBitmap.recycle();
            } else {
                bitmap = tempBitmap;
            }
        }

        if (bitmap == null) {
            return Response.error(new ParseError(response));
        } else {
            return Response.success(bitmap, HttpHeaderParser.parseCacheHeaders(response));
        }
    }

Now for step 3: adding the Request to the RequestQueue.

From the analysis above we know that a request is first taken from the cache queue, and only moved to the network queue when there is no usable cache entry, so we can guess that RequestQueue.add() simply puts the Request on the cache queue.

public <T> Request<T> add(Request<T> request) {
        // ... (unrelated bookkeeping omitted)

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }
After stripping out the unrelated code, it is indeed the case. An incoming Request goes through the following:

1. Check whether the Request may be cached; if not, put it straight on the network queue.

2. If it is cacheable, check whether a request with the same cache key is already in flight; if so, stage it in the waiting map to keep waiting, otherwise put it on the cache queue for the cache thread to pick up.


With that, the journey of a Request from creation, to being enqueued, to being taken out and turned into a Response is completely clear.

But two questions remain:

1. How does the resulting Response get delivered back to the calling code?

2. Why do we receive the Response on the main thread; where does the thread switch happen?


The answer lies in the delivery mechanism mentioned several times above but not yet explained.

ResponseDelivery first appears in the RequestQueue constructor, but there it is merely passed in as a parameter and assigned, never instantiated. That is because the constructor shown above is the last one in the chain; calling Volley.newRequestQueue(Context context) first goes through a constructor like this:

 public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }

Now it is clear: the default implementation of ResponseDelivery is ExecutorDelivery, and it is instantiated with a Handler. We can already guess that this Handler is the key to the thread switch.

It is also now clear what delivery object was passed in when the CacheDispatcher and NetworkDispatchers were created; a sketch of wiring these pieces up by hand follows.
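
As a sketch of how these defaults fit together, here is roughly the manual equivalent of what newRequestQueue() sets up, with the delivery spelled out. The cache directory name "volley" and the pool size 4 simply mirror the defaults discussed above; treat this as an illustration rather than a replacement for Volley.newRequestQueue().

Cache cache = new DiskBasedCache(new File(context.getCacheDir(), "volley"));
Network network = new BasicNetwork(new HurlStack());

// The delivery posts results through a main-looper Handler; this is what switches threads.
ResponseDelivery delivery = new ExecutorDelivery(new Handler(Looper.getMainLooper()));

RequestQueue queue = new RequestQueue(cache, network, 4, delivery);
queue.start();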

Looking back at the two run() methods above, both call mDelivery.postResponse(request, response) once they have a Response, so let's see how ExecutorDelivery implements that method.

    @Override
    public void postResponse(Request<?> request, Response<?> response) {
        postResponse(request, response, null);
    }

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

postResponse() has several overloads; the two-argument one is called here, and it in turn calls the three-argument one. There, the request, response and runnable are wrapped in a ResponseDeliveryRunnable and handed to mResponsePoster, an Executor that is set up in ExecutorDelivery's constructor:

public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

As you can see, execute() simply calls handler.post(Runnable r), and this is exactly where the thread switch happens. The Handler passed in when ExecutorDelivery was instantiated is new Handler(Looper.getMainLooper()), a Handler built on the main thread's Looper, so its message queue and message loop live on the main thread. When Looper.loop() takes the posted Runnable out of that queue, its run() executes on the main thread. So let's follow that run() implementation:

 private class ResponseDeliveryRunnable implements Runnable {
        private final Request mRequest;
        private final Response mResponse;
        private final Runnable mRunnable;

        public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
            mRequest = request;
            mResponse = response;
            mRunnable = runnable;
        }

        @SuppressWarnings("unchecked")
        @Override
        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
       }
}
As shown above, it calls request.deliverResponse(result), the abstract method we said at the start every Request subclass must implement. With that, both of the questions above are answered.

Finally, why is it that when you use StringRequest, ImageRequest and friends you never see this method, and instead pass in two listeners? The reason is that StringRequest implements deliverResponse() itself and simply hands the result to the Listener supplied by the caller, a straightforward application of the observer pattern:

@Override
    protected void deliverResponse(String response) {
        if (mListener != null) {
            mListener.onResponse(response);
        }
    }

So if you write your own custom Request, you can take the same approach, or expose some other method for callers to hook into; a sketch follows.
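
For example, here is a minimal sketch of a custom Request that takes the same listener approach. The class name SimpleRequest is made up and the parse logic simply mirrors StringRequest's, so treat it as an illustration of the two abstract methods rather than anything Volley provides:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

public class SimpleRequest extends Request<String> {

    private final Response.Listener<String> mListener;

    public SimpleRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Turn the raw bytes into the type we want and attach the parsed cache headers.
        String parsed;
        try {
            parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
        } catch (UnsupportedEncodingException e) {
            parsed = new String(response.data);
        }
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Called on the main thread by ExecutorDelivery; hand the result to the caller's listener.
        if (mListener != null) {
            mListener.onResponse(response);
        }
    }
}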


Addendum: the role of HttpStack

HttpStack is the type of the object that BasicNetwork actually uses to talk to the network. It is again an interface, defining the method that performs the HTTP request:

public interface HttpStack {
    /**
     * Performs an HTTP request with the given parameters.
     */
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError;

}

In Volley.newRequestQueue() above, if the HttpStack passed in is null, the SDK version decides the implementation: below SDK 9 (Gingerbread) it is HttpClientStack, otherwise HurlStack. The difference between them is as follows (I won't step through their code):

HttpClientStack: uses Apache's HttpClient internally.

HurlStack: uses HttpURLConnection internally.

In BasicNetwork's performRequest(), it is HttpStack.performRequest() that actually hits the network and returns the HttpResponse; a sketch of supplying your own stack follows.
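
If you ever need a different transport, you can supply your own HttpStack when creating the queue. A minimal sketch, assuming the two-argument newRequestQueue(Context, HttpStack) overload that accompanies the three-argument one shown earlier (HurlStack stands in for a custom implementation):

HttpStack stack = new HurlStack();   // or your own HttpStack implementation
RequestQueue queue = Volley.newRequestQueue(context, stack);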

