On Android, if you need to handle many requests with small payloads, the Volley framework is a good choice. Below is an analysis of how the framework works.
Volley has a RequestQueue class that is responsible for adding and dispatching requests. A RequestQueue instance is obtained through the Volley helper class; here is the relevant code from Volley:
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}

/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
This static method returns a RequestQueue object. The HttpStack parameter is what actually sends a Request; if null is passed, a default stack is chosen: HurlStack (based on HttpURLConnection) on API level 9 and above, otherwise HttpClientStack (based on AndroidHttpClient).
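For context, here is a minimal usage sketch of the API described above (the URL and callbacks are placeholders; StringRequest is one of the Request implementations shipped in the Volley toolbox):

RequestQueue queue = Volley.newRequestQueue(context);   // stack == null, so HurlStack is used on API >= 9
// or, supplying the stack explicitly: Volley.newRequestQueue(context, new HurlStack());
StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Delivered on the main thread once the request completes.
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Network error, parse error, etc.
            }
        });
queue.add(request);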
The RequestQueue class holds two PriorityBlockingQueue objects: one is the cache triage queue, holding requests that will first be checked against the cache, and the other holds requests that actually go out to the network. They are its two fields:
/** The cache triage queue. */
private final PriorityBlockingQueue<Request> mCacheQueue =
        new PriorityBlockingQueue<Request>();

/** The queue of requests that are actually going out to the network. */
private final PriorityBlockingQueue<Request> mNetworkQueue =
        new PriorityBlockingQueue<Request>();
Volley starts this queue by calling its start() method, shown below:
/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
This start method actually launches several dispatcher threads: one CacheDispatcher plus several NetworkDispatchers (mDispatchers.length of them, four by default).
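If you build the RequestQueue yourself instead of going through Volley.newRequestQueue, you can choose a different pool size. A sketch using the RequestQueue constructor that takes a thread pool size (the cache directory name here is arbitrary):

File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);  // 2 network threads instead of 4
queue.start();   // starts 1 CacheDispatcher + 2 NetworkDispatchers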
When a new request is added to the request queue, RequestQueue's add() method is called; it places the request onto the appropriate queue.
/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
In essence, this method places the request onto one of the two PriorityBlockingQueue objects, to be consumed by the single CacheDispatcher or one of the NetworkDispatchers.
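As the add() code shows, the routing is controlled by Request.shouldCache(); a short sketch of bypassing the cache queue entirely (queue, listener and errorListener as in the earlier sketch):

StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/nocache",
        listener, errorListener);
request.setShouldCache(false);   // shouldCache() now returns false
queue.add(request);              // add() puts it straight onto mNetworkQueue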
Below is the run() method of the CacheDispatcher thread:
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
The logic of this thread is as follows. It takes a request from the cache queue; when the queue is empty, take() blocks the thread (see the analysis of PriorityBlockingQueue below). In other words, requests are handed off in a classic producer-consumer pattern. Once a request is taken, the dispatcher checks whether it has been canceled and whether a cache entry exists for it. If there is no entry, or the entry has fully expired, the request is put onto the network queue, meaning a real network request is needed. If there is a valid entry, the cached response is parsed and delivered directly; when the entry is only soft-expired (refreshNeeded()), the cached response is delivered as an intermediate result and the request is still forwarded to the network queue for refreshing.
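The expiration checks above come from Cache.Entry; in the Volley source the two checks simply compare the entry's hard and soft TTL against the current time, roughly as follows (a simplified sketch, non-essential fields omitted):

public static class Entry {
    public byte[] data;   // the cached response body
    public long ttl;      // hard expiration time, in millis since the epoch
    public long softTtl;  // soft expiration time, in millis since the epoch

    /** True if this entry is fully expired and must be refetched from the network. */
    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    /** True if this entry may still be served but should also be refreshed. */
    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }
}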
Below is the run() method of the NetworkDispatcher thread:
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}
The logic of this thread is similar. It takes a request from the network queue; when the queue is empty, take() blocks the thread (again, see the PriorityBlockingQueue analysis below), so requests are once more consumed in a producer-consumer pattern. Once a request is taken, the dispatcher checks whether it has been canceled; if not, it calls mNetwork.performRequest(request) to send the request and obtain the response. How the request is actually sent is covered in the Network analysis below.
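One related detail: when performRequest fails with a timeout or an auth error, BasicNetwork (shown further below) consults the request's RetryPolicy through attemptRetryOnException, so retry behaviour can be tuned per request. A sketch using Volley's DefaultRetryPolicy, applied to the request from the earlier sketch:

request.setRetryPolicy(new DefaultRetryPolicy(
        2500,    // initial timeout in milliseconds
        2,       // maximum number of retries
        1.0f));  // backoff multiplier applied to the timeout after each retry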
At this point we know the flow for sending a request: Volley creates a RequestQueue and starts its dispatcher threads; callers add requests, which are placed onto the appropriate queue; the dispatcher threads started by the RequestQueue then take requests from their queues and process them.
The request management inside PriorityBlockingQueue itself is fairly simple, so the code is not pasted here; the logic is as follows. When a request is added, offer() is called, which takes a ReentrantLock to guarantee thread safety. Note that the queue is unbounded: when the backing array is full (i.e. the number of queued requests has reached the current capacity, which starts at initialCapacity), tryGrow() is called to allocate a larger array, and the lock is held again while the old array is copied over. So while the queue is growing, threads that are adding requests are blocked. In my opinion this is one reason Volley is not well suited to scenarios with a very large volume of requests, since many calling threads can end up blocked.
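To make the blocking behaviour concrete, here is a standalone sketch (plain JDK, not Volley code) of the producer-consumer pattern the dispatchers rely on: take() parks the consumer thread until an element is offered.

import java.util.concurrent.PriorityBlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<Integer>();

        Thread consumer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Integer item = queue.take();   // blocks while the queue is empty, like the dispatchers
                    System.out.println("took " + item);
                } catch (InterruptedException e) {
                    // Interrupted: time to quit.
                }
            }
        });
        consumer.start();

        Thread.sleep(1000);   // the consumer is now parked inside take()
        queue.offer(42);      // offer() locks, inserts by priority, and wakes the consumer
        consumer.join();
    }
}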
As for actually sending requests, the BasicNetwork class is used; here is its performRequest method:
@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = new HashMap<String, String>();
        try {
            // Gather headers.
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            httpResponse = mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // Handle cache validation.
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED,
                        request.getCacheEntry().data, responseHeaders, true);
            }

            responseContents = entityToBytes(httpResponse.getEntity());

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            if (statusCode != HttpStatus.SC_OK && statusCode != HttpStatus.SC_NO_CONTENT) {
                throw new IOException();
            }

            return new NetworkResponse(statusCode, responseContents, responseHeaders, false);
        } catch (SocketTimeoutException e) {
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (ConnectTimeoutException e) {
            attemptRetryOnException("connection", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            int statusCode = 0;
            NetworkResponse networkResponse = null;
            if (httpResponse != null) {
                statusCode = httpResponse.getStatusLine().getStatusCode();
            } else {
                throw new NoConnectionError(e);
            }
            VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents,
                        responseHeaders, false);
                if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                        statusCode == HttpStatus.SC_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse));
                } else {
                    // TODO: Only throw ServerError for 5xx status codes.
                    throw new ServerError(networkResponse);
                }
            } else {
                throw new NetworkError(networkResponse);
            }
        }
    }
}
This method is essentially a delegate: it ultimately calls HttpStack.performRequest, which is where the request is really executed. If the application does not supply an HttpStack, the default HurlStack is used (HurlStack implements the HttpStack interface). HurlStack's performRequest method is shown below (abridged):
@Override
public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError {
    // ...
    URL parsedUrl = new URL(url);
    HttpURLConnection connection = openConnection(parsedUrl, request);
    for (String headerName : map.keySet()) {
        connection.addRequestProperty(headerName, map.get(headerName));
    }
    // ...
    StatusLine responseStatus = new BasicStatusLine(protocolVersion,
            connection.getResponseCode(), connection.getResponseMessage());
    BasicHttpResponse response = new BasicHttpResponse(responseStatus);
    // ...
    return response;
}
This method calls openConnection() to open a connection; openConnection() calls URL.openConnection(), which in turn goes through the URL's URLStreamHandler. Once the HttpURLConnection is open, calling getResponseCode() (which internally triggers getInputStream()) is the point at which the thread waits for the server to return data.
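That last point is easy to verify with a plain HttpURLConnection outside Volley; in the standalone sketch below (example.com is a placeholder URL), no bytes go over the wire until getResponseCode() is called, and that call blocks until the server responds.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlConnectionDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();  // no network I/O yet
        connection.setConnectTimeout(2500);
        connection.setReadTimeout(2500);

        int code = connection.getResponseCode();       // the request is sent here; blocks for the response
        System.out.println("status = " + code);

        InputStream in = connection.getInputStream();  // response body
        in.close();
        connection.disconnect();
    }
}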
Once the response is obtained, it is handed to the ResponseDelivery (by default an ExecutorDelivery that posts to a Handler on the main thread), which calls the request's deliverResponse and thereby invokes the Listener that was registered on the request.
With that, Volley's handling of a request is complete.