This section covers performance optimization in general, not just Bitmap-related topics. At the end there are examples of Bitmap down-sampling and an ImageLoader that loads images concurrently using LruCache and DiskLruCache.
For more optimization details see https://www.jianshu.com/p/f7006ab64da7
For LruCache and DiskLruCache see https://blog.csdn.net/ztchun/article/details/69365715
1. Layout Optimization
- The include tag
  - Only attributes that start with android:layout_ are supported (android:id is the one exception). If any android:layout_ attribute is specified on an include tag, then android:layout_width and android:layout_height must both be present.
- The merge tag
  - Removes one level from the view hierarchy. It is usually used together with include: merge serves as the root tag of the reusable layout, which is then pulled in with include.
- ViewStub
  - Loads a layout lazily, on demand, and can only be inflated once, by calling ViewStub.inflate() or setVisibility(View.VISIBLE); see the sketch below.
  - ViewStub does not support the merge tag.
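A minimal sketch of on-demand inflation with ViewStub. The method lives inside an Activity after setContentView(); R.id.stub_import and the layout it points to are made-up names for illustration.

```java
// Assumes the Activity layout contains something like:
// <ViewStub android:id="@+id/stub_import" android:layout="@layout/heavy_panel" ... />
private void showHeavyPanel() {
    ViewStub stub = (ViewStub) findViewById(R.id.stub_import);
    if (stub != null) {
        // inflate() swaps the stub for the real layout; it only works once,
        // so later calls must look up the inflated view by its own id instead.
        View inflated = stub.inflate();
        // Equivalent trigger: stub.setVisibility(View.VISIBLE);
    }
}
```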
2. Drawing Optimization
- Avoid creating new local objects in onDraw: if onDraw is called frequently, it can produce a large number of temporary objects in a very short time (see the sketch below).
- Keep rendering at 60 fps, which leaves at most about 16 ms per frame (1000 / 60).
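A minimal custom-View sketch of the rule above: allocate the Paint once in the constructor instead of inside onDraw. The class and colors are illustrative.

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class CircleView extends View {
    // Allocated once and reused on every frame; creating a Paint inside
    // onDraw() would produce a temporary object per frame.
    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CircleView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mPaint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        float radius = Math.min(getWidth(), getHeight()) / 2f;
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, radius, mPaint);
    }
}
```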
3. Memory Leak Optimization
- Static variables, for example a static field that holds an Activity and therefore can never be released.
- Singletons, for example a listener registered on a singleton that is never unregistered when the Activity is destroyed, so the singleton keeps the Activity reference and it cannot be released.
- Property animators, for example an ObjectAnimator that is started but never cancelled in onDestroy: the animation holds the View, the View holds the Activity, and the Activity can never be released (see the sketch below).
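A minimal sketch of the animator case, assuming an infinitely repeating ObjectAnimator started in onCreate. The layout and view ids are illustrative.

```java
import android.animation.ObjectAnimator;
import android.animation.ValueAnimator;
import android.app.Activity;
import android.os.Bundle;
import android.view.View;

public class AnimDemoActivity extends Activity {
    private ObjectAnimator mAnimator;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_anim_demo);      // illustrative layout
        View target = findViewById(R.id.animated_view);   // illustrative view id
        mAnimator = ObjectAnimator.ofFloat(target, "rotation", 0f, 360f);
        mAnimator.setRepeatCount(ValueAnimator.INFINITE);
        mAnimator.start();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Without cancel() the infinite animator keeps a reference to the View,
        // and the View keeps a reference to this Activity.
        if (mAnimator != null) {
            mAnimator.cancel();
        }
    }
}
```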
4. Response Speed Optimization and ANR
- The core idea is to avoid time-consuming work on the main thread.
- A response that is too slow triggers an ANR: roughly 5 seconds for an unhandled input event in an Activity and 10 seconds for a BroadcastReceiver. The ANR trace is written to data/anr/traces.txt.
- Typical causes (see the sketch below):
  - I/O performed on the main thread
  - Heavy computation on the main thread
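A minimal sketch of moving blocking work off the main thread with an executor and posting the result back. ConfigActivity, readConfigFromDisk() and mTextView are illustrative; any background mechanism (HandlerThread, the thread pool from the ImageLoader below, and so on) works the same way.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import android.app.Activity;
import android.widget.TextView;

public class ConfigActivity extends Activity {
    // A single background worker; prefer pools over raw threads (see Section 5).
    private final ExecutorService mIoExecutor = Executors.newSingleThreadExecutor();
    private TextView mTextView;

    private void loadConfigAsync() {
        mIoExecutor.execute(new Runnable() {
            @Override
            public void run() {
                final String result = readConfigFromDisk();  // blocking I/O off the main thread
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        mTextView.setText(result);           // views touched only on the main thread
                    }
                });
            }
        });
    }

    private String readConfigFromDisk() {
        // Placeholder for real file/database/network work.
        return "config";
    }
}
```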
5. Other Optimizations
- Use thread pools instead of raw threads so that threads are reused as much as possible.
- Avoid creating too many objects.
- Do not overuse enums; an enum takes noticeably more memory than a plain int constant.
- Declare constants as static final.
- Use static nested classes to avoid the potential leaks caused by non-static inner classes holding an implicit reference to the outer class.
- Use soft and weak references where appropriate.
- OOM (OutOfMemoryError)
  - Thrown when the memory already in use plus the allocation being requested exceeds the virtual machine's heap limit.
- Memory churn: a large number of objects is created and dropped within a short time; the short-lived objects are collected almost immediately, which causes frequent GC.
- Memory leak: objects that are no longer needed but are still reachable, directly or indirectly, from GC roots (via other live objects), so the GC cannot reclaim them.
- Bitmap-related practices, for example do not issue load requests while a ListView is scrolling; only load images once scrolling stops.
- Release Bitmap memory promptly: BitmapFactory creates Bitmaps through native methods, so roughly speaking there are two regions involved, the Java heap and the native (C) heap, and recycle() frees the native side.
- Compress images.
- Catch exceptions where appropriate.
- Memory leaks
  - Java memory areas
    - Static storage area
    - Stack
    - Heap
  - Singletons holding a Context: pass getApplicationContext() instead of an Activity context.
  - Anonymous inner classes hold an implicit reference to the Activity and keep it from being released; replace them with static nested classes (plus a WeakReference if the Activity is needed).
  - Handler leaks: use a static Handler class together with a WeakReference to the Activity (see the sketch after this list).
  - Avoid static variables where possible; their lifetime is the same as the app's.
  - Unclosed resources (cursors, streams, and so on).
  - AsyncTask leaks: cancel the task in onDestroy.
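A minimal sketch of the static-Handler pattern mentioned above; the Activity name and message id are illustrative.

```java
import java.lang.ref.WeakReference;

import android.app.Activity;
import android.os.Handler;
import android.os.Message;

public class HandlerDemoActivity extends Activity {
    private static final int MSG_UPDATE = 1;
    private final MyHandler mHandler = new MyHandler(this);

    // Static class: no implicit reference to the outer Activity.
    private static class MyHandler extends Handler {
        private final WeakReference<HandlerDemoActivity> mActivityRef;

        MyHandler(HandlerDemoActivity activity) {
            mActivityRef = new WeakReference<>(activity);
        }

        @Override
        public void handleMessage(Message msg) {
            HandlerDemoActivity activity = mActivityRef.get();
            if (activity == null || activity.isFinishing()) {
                return; // Activity already gone; drop the message.
            }
            if (msg.what == MSG_UPDATE) {
                // ... update the UI through 'activity'
            }
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // Also drop pending messages so their payloads do not delay GC.
        mHandler.removeCallbacksAndMessages(null);
    }
}
```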
- Memory management
  - General mechanism
    - The operating system allocates a reasonable amount of memory to each process so that every process can run normally.
    - When system memory runs low, a reclaim-and-reallocate mechanism kills processes that are holding memory so that new processes can still start.
  - Android memory management
    - Android uses an elastic scheme: an app initially gets only a small heap budget, and the size of that budget depends on the RAM of the device.
    - When memory runs low, Android kills other processes to reclaim enough memory for new ones; reclamation follows process priority and an LRU policy.
    - Process priorities: foreground process (the one the user is interacting with), visible process (not in the foreground but still visible to the user, such as an input method), service process, background process, and empty process (kept only to speed up relaunching; for example, after pressing Back to the home screen an app may be left as an empty process).
    - Android prefers to kill the processes that free the most memory, so that fewer processes need to be killed, which is also better for the user experience.
  - Use less memory
    - Release system resources at the appropriate time.
    - Save and restore important data in the appropriate lifecycle callbacks.
- UI jank
  - Slightly time-consuming work done on the UI thread.
  - Layouts that are too complex to render within 16 ms.
  - Too many animations running at the same time, overloading the CPU or GPU.
  - Overdraw: some pixels are drawn multiple times within the same frame, overloading the CPU or GPU.
  - Views triggering measure/layout too frequently, so the accumulated measure/layout time grows and the whole view tree is re-rendered over and over.
  - Frequent GC, which temporarily blocks rendering.
  - Redundant resources and logic that make loading and execution slow.
- ANR (see Section 4).
- Memory optimization
  - Stop a Service as soon as it has finished its work.
  - Release UI-only resources when the UI is no longer visible.
  - When system memory is tight, release as many non-essential resources as possible; the system signals this through onTrimMemory, so free memory in that callback (see the sketch below).
  - Avoid wasting memory through careless Bitmap usage.
  - Use memory-optimized containers and Android-specific data structures such as SparseArray and Pair.
  - Run zipalign on the APK.
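A minimal sketch of reacting to onTrimMemory in the Application class (API 14+). The LruCache here is illustrative; in this article the real cache lives inside ImageLoader.

```java
import android.app.Application;
import android.content.ComponentCallbacks2;
import android.graphics.Bitmap;
import android.util.LruCache;

public class DemoApplication extends Application {
    private LruCache<String, Bitmap> mMemoryCache;   // illustrative in-memory cache

    @Override
    public void onCreate() {
        super.onCreate();
        int cacheSize = (int) (Runtime.getRuntime().maxMemory() / 1024) / 8;  // in KB
        mMemoryCache = new LruCache<String, Bitmap>(cacheSize) {
            @Override
            protected int sizeOf(String key, Bitmap bitmap) {
                return bitmap.getByteCount() / 1024;  // measure entries in KB
            }
        };
    }

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            // UI no longer visible (or app cached): drop UI-only resources entirely.
            mMemoryCache.evictAll();
        } else if (level >= ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW) {
            // Still in the foreground but memory is getting tight: trim part of the cache.
            mMemoryCache.trimToSize(mMemoryCache.size() / 2);
        }
    }
}
```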
- Cold start
  - Cold start: the system has no process for the app at all when it is launched.
  - Hot start: the user exits the app with the Back key and then immediately launches it again.
  - Difference: on a cold start the system has no process for the app, so it must create one, then create and initialize the Application class, then create and initialize the main Activity (measure, layout, draw) before anything appears on screen. A hot start only needs to create and initialize the main Activity.
  - Cold-start time = from application launch (process creation) until the first frame is drawn (the Activity content is visible to the user).
  - Rough sequence: the Zygote process forks a new process --> the Application is created and initialized: attachBaseContext --> onCreate --> the Activity constructor --> onCreate --> the window background and other theme attributes are applied --> onStart --> onResume --> measure, layout, and draw, then the content is shown on screen.
- Do not use static variables to store data.
- SharedPreferences is not process-safe; it cannot be kept in sync across processes.
- Serialization of in-memory objects (see the Parcelable sketch below):
  - Serializable is reflection based and generates a lot of temporary objects.
  - Parcelable works directly in memory and is more efficient on Android.
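A minimal Parcelable sketch for the point above; the User class and its fields are illustrative.

```java
import android.os.Parcel;
import android.os.Parcelable;

public class User implements Parcelable {
    private final int id;
    private final String name;

    public User(int id, String name) {
        this.id = id;
        this.name = name;
    }

    protected User(Parcel in) {
        id = in.readInt();
        name = in.readString();
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        // Write fields in the same order they are read in the Parcel constructor.
        dest.writeInt(id);
        dest.writeString(name);
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<User> CREATOR = new Creator<User>() {
        @Override
        public User createFromParcel(Parcel in) {
            return new User(in);
        }

        @Override
        public User[] newArray(int size) {
            return new User[size];
        }
    };
}
```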
6. Bitmap Loading and Caching
- Bitmaps are large, and an Android app only gets a limited per-process heap (as little as 16 MB on early devices), so loading Bitmaps carelessly easily ends in OutOfMemoryError. A caching strategy is used to deal with this, typically in three layers:
  - Network: consulted last; it is slow and costs data.
  - Disk cache: consulted second; fast.
  - Memory cache: consulted first; fastest.
- The code below shows Bitmap down-sampling (decoding), the in-memory LruCache, and the disk cache DiskLruCache.
- ImageResizer.java
import java.io.FileDescriptor;

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Log;

public class ImageResizer {
    private static final String TAG = "ImageResizer";

    public ImageResizer() {
    }

    // Decode a down-sampled bitmap from a resource.
    public Bitmap decodeSampledBitmapFromResource(Resources res, int resId,
            int reqWidth, int reqHeight) {
        // First decode with inJustDecodeBounds=true to check dimensions
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, options);

        // Calculate inSampleSize
        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

        // Decode bitmap with inSampleSize set
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeResource(res, resId, options);
    }

    // Decode a down-sampled bitmap from a file descriptor.
    public Bitmap decodeSampledBitmapFromFileDescriptor(FileDescriptor fd,
            int reqWidth, int reqHeight) {
        // First decode with inJustDecodeBounds=true to check dimensions
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFileDescriptor(fd, null, options);

        // Calculate inSampleSize
        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

        // Decode bitmap with inSampleSize set
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeFileDescriptor(fd, null, options);
    }

    // Calculate the sampling rate.
    public int calculateInSampleSize(BitmapFactory.Options options,
            int reqWidth, int reqHeight) {
        if (reqWidth == 0 || reqHeight == 0) {
            return 1;
        }

        // Raw height and width of image
        final int height = options.outHeight;
        final int width = options.outWidth;
        Log.d(TAG, "origin, w= " + width + " h=" + height);
        int inSampleSize = 1;

        if (height > reqHeight || width > reqWidth) {
            final int halfHeight = height / 2;
            final int halfWidth = width / 2;

            // Calculate the largest inSampleSize value that is a power of 2 and
            // keeps both height and width larger than the requested height and width.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }

        Log.d(TAG, "sampleSize:" + inSampleSize);
        return inSampleSize;
    }
}
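A quick usage sketch of ImageResizer inside an Activity. R.drawable.sample, mImageView and the 100x100 px target size are assumptions for illustration.

```java
// Decode the resource down-sampled to roughly the target size instead of
// loading the full-resolution bitmap into memory.
ImageResizer resizer = new ImageResizer();
Bitmap bitmap = resizer.decodeSampledBitmapFromResource(
        getResources(), R.drawable.sample, 100, 100);
mImageView.setImageBitmap(bitmap);
```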
- DiskLruCache.java (unlike LruCache, this class is not part of the Android SDK; it is copied from the AOSP libcore sources, see the link in its header comment)
/* * Copyright (C) 2011 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package libcore.io; import java.io.BufferedInputStream; import java.io.BufferedWriter; import java.io.Closeable; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FileWriter; import java.io.FilterOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.io.OutputStreamWriter; import java.io.Reader; import java.io.StringWriter; import java.io.Writer; import java.lang.reflect.Array; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.Map; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; /** ****************************************************************************** * Taken from the JB source code, can be found in: * libcore/luni/src/main/java/libcore/io/DiskLruCache.java * or direct link: * https://android.googlesource.com/platform/libcore/+/android-4.1.1_r1/luni/src/main/java/libcore/io/DiskLruCache.java ****************************************************************************** * * A cache that uses a bounded amount of space on a filesystem. Each cache * entry has a string key and a fixed number of values. Values are byte * sequences, accessible as streams or files. Each value must be between {@code * 0} and {@code Integer.MAX_VALUE} bytes in length. * * <p>The cache stores its data in a directory on the filesystem. This * directory must be exclusive to the cache; the cache may delete or overwrite * files from its directory. It is an error for multiple processes to use the * same cache directory at the same time. * * <p>This cache limits the number of bytes that it will store on the * filesystem. When the number of stored bytes exceeds the limit, the cache will * remove entries in the background until the limit is satisfied. The limit is * not strict: the cache may temporarily exceed it while waiting for files to be * deleted. The limit does not include filesystem overhead or the cache * journal so space-sensitive applications should set a conservative limit. * * <p>Clients call {@link #edit} to create or update the values of an entry. An * entry may have only one editor at one time; if a value is not available to be * edited then {@link #edit} will return null. * <ul> * <li>When an entry is being <strong>created</strong> it is necessary to * supply a full set of values; the empty value should be used as a * placeholder if necessary. * <li>When an entry is being <strong>edited</strong>, it is not necessary * to supply data for every value; values default to their previous * value. 
* </ul> * Every {@link #edit} call must be matched by a call to {@link Editor#commit} * or {@link Editor#abort}. Committing is atomic: a read observes the full set * of values as they were before or after the commit, but never a mix of values. * * <p>Clients call {@link #get} to read a snapshot of an entry. The read will * observe the value at the time that {@link #get} was called. Updates and * removals after the call do not impact ongoing reads. * * <p>This class is tolerant of some I/O errors. If files are missing from the * filesystem, the corresponding entries will be dropped from the cache. If * an error occurs while writing a cache value, the edit will fail silently. * Callers should handle other problems by catching {@code IOException} and * responding appropriately. */ public final class DiskLruCache implements Closeable { static final String JOURNAL_FILE = "journal"; static final String JOURNAL_FILE_TMP = "journal.tmp"; static final String MAGIC = "libcore.io.DiskLruCache"; static final String VERSION_1 = "1"; static final long ANY_SEQUENCE_NUMBER = -1; private static final String CLEAN = "CLEAN"; private static final String DIRTY = "DIRTY"; private static final String REMOVE = "REMOVE"; private static final String READ = "READ"; private static final Charset UTF_8 = Charset.forName("UTF-8"); private static final int IO_BUFFER_SIZE = 8 * 1024; /* * This cache uses a journal file named "journal". A typical journal file * looks like this: * libcore.io.DiskLruCache * 1 * 100 * 2 * * CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054 * DIRTY 335c4c6028171cfddfbaae1a9c313c52 * CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342 * REMOVE 335c4c6028171cfddfbaae1a9c313c52 * DIRTY 1ab96a171faeeee38496d8b330771a7a * CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234 * READ 335c4c6028171cfddfbaae1a9c313c52 * READ 3400330d1dfc7f3f7f4b8d4d803dfcf6 * * The first five lines of the journal form its header. They are the * constant string "libcore.io.DiskLruCache", the disk cache's version, * the application's version, the value count, and a blank line. * * Each of the subsequent lines in the file is a record of the state of a * cache entry. Each line contains space-separated values: a state, a key, * and optional state-specific values. * o DIRTY lines track that an entry is actively being created or updated. * Every successful DIRTY action should be followed by a CLEAN or REMOVE * action. DIRTY lines without a matching CLEAN or REMOVE indicate that * temporary files may need to be deleted. * o CLEAN lines track a cache entry that has been successfully published * and may be read. A publish line is followed by the lengths of each of * its values. * o READ lines track accesses for LRU. * o REMOVE lines track entries that have been deleted. * * The journal file is appended to as cache operations occur. The journal may * occasionally be compacted by dropping redundant lines. A temporary file named * "journal.tmp" will be used during compaction; that file should be deleted if * it exists when the cache is opened. */ private final File directory; private final File journalFile; private final File journalFileTmp; private final int appVersion; private final long maxSize; private final int valueCount; private long size = 0; private Writer journalWriter; private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0, 0.75f, true); private int redundantOpCount; /** * To differentiate between old and current snapshots, each entry is given * a sequence number each time an edit is committed. 
A snapshot is stale if * its sequence number is not equal to its entry's sequence number. */ private long nextSequenceNumber = 0; /* From java.util.Arrays */ @SuppressWarnings("unchecked") private static <T> T[] copyOfRange(T[] original, int start, int end) { final int originalLength = original.length; // For exception priority compatibility. if (start > end) { throw new IllegalArgumentException(); } if (start < 0 || start > originalLength) { throw new ArrayIndexOutOfBoundsException(); } final int resultLength = end - start; final int copyLength = Math.min(resultLength, originalLength - start); final T[] result = (T[]) Array .newInstance(original.getClass().getComponentType(), resultLength); System.arraycopy(original, start, result, 0, copyLength); return result; } /** * Returns the remainder of 'reader' as a string, closing it when done. */ public static String readFully(Reader reader) throws IOException { try { StringWriter writer = new StringWriter(); char[] buffer = new char[1024]; int count; while ((count = reader.read(buffer)) != -1) { writer.write(buffer, 0, count); } return writer.toString(); } finally { reader.close(); } } /** * Returns the ASCII characters up to but not including the next "\r\n", or * "\n". * * @throws java.io.EOFException if the stream is exhausted before the next newline * character. */ public static String readAsciiLine(InputStream in) throws IOException { // TODO: support UTF-8 here instead StringBuilder result = new StringBuilder(80); while (true) { int c = in.read(); if (c == -1) { throw new EOFException(); } else if (c == '\n') { break; } result.append((char) c); } int length = result.length(); if (length > 0 && result.charAt(length - 1) == '\r') { result.setLength(length - 1); } return result.toString(); } /** * Closes 'closeable', ignoring any checked exceptions. Does nothing if 'closeable' is null. */ public static void closeQuietly(Closeable closeable) { if (closeable != null) { try { closeable.close(); } catch (RuntimeException rethrown) { throw rethrown; } catch (Exception ignored) { } } } /** * Recursively delete everything in {@code dir}. */ // TODO: this should specify paths as Strings rather than as Files public static void deleteContents(File dir) throws IOException { File[] files = dir.listFiles(); if (files == null) { throw new IllegalArgumentException("not a directory: " + dir); } for (File file : files) { if (file.isDirectory()) { deleteContents(file); } if (!file.delete()) { throw new IOException("failed to delete file: " + file); } } } /** This cache uses a single background thread to evict entries. */ private final ExecutorService executorService = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>()); private final Callable<Void> cleanupCallable = new Callable<Void>() { @Override public Void call() throws Exception { synchronized (DiskLruCache.this) { if (journalWriter == null) { return null; // closed } trimToSize(); if (journalRebuildRequired()) { rebuildJournal(); redundantOpCount = 0; } } return null; } }; private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) { this.directory = directory; this.appVersion = appVersion; this.journalFile = new File(directory, JOURNAL_FILE); this.journalFileTmp = new File(directory, JOURNAL_FILE_TMP); this.valueCount = valueCount; this.maxSize = maxSize; } /** * Opens the cache in {@code directory}, creating a cache if none exists * there. 
* * @param directory a writable directory * @param appVersion * @param valueCount the number of values per cache entry. Must be positive. * @param maxSize the maximum number of bytes this cache should use to store * @throws java.io.IOException if reading or writing the cache directory fails */ public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize) throws IOException { if (maxSize <= 0) { throw new IllegalArgumentException("maxSize <= 0"); } if (valueCount <= 0) { throw new IllegalArgumentException("valueCount <= 0"); } // prefer to pick up where we left off DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); if (cache.journalFile.exists()) { try { cache.readJournal(); cache.processJournal(); cache.journalWriter = new BufferedWriter(new FileWriter(cache.journalFile, true), IO_BUFFER_SIZE); return cache; } catch (IOException journalIsCorrupt) { // System.logW("DiskLruCache " + directory + " is corrupt: " // + journalIsCorrupt.getMessage() + ", removing"); cache.delete(); } } // create a new empty cache directory.mkdirs(); cache = new DiskLruCache(directory, appVersion, valueCount, maxSize); cache.rebuildJournal(); return cache; } private void readJournal() throws IOException { InputStream in = new BufferedInputStream(new FileInputStream(journalFile), IO_BUFFER_SIZE); try { String magic = readAsciiLine(in); String version = readAsciiLine(in); String appVersionString = readAsciiLine(in); String valueCountString = readAsciiLine(in); String blank = readAsciiLine(in); if (!MAGIC.equals(magic) || !VERSION_1.equals(version) || !Integer.toString(appVersion).equals(appVersionString) || !Integer.toString(valueCount).equals(valueCountString) || !"".equals(blank)) { throw new IOException("unexpected journal header: [" + magic + ", " + version + ", " + valueCountString + ", " + blank + "]"); } while (true) { try { readJournalLine(readAsciiLine(in)); } catch (EOFException endOfJournal) { break; } } } finally { closeQuietly(in); } } private void readJournalLine(String line) throws IOException { String[] parts = line.split(" "); if (parts.length < 2) { throw new IOException("unexpected journal line: " + line); } String key = parts[1]; if (parts[0].equals(REMOVE) && parts.length == 2) { lruEntries.remove(key); return; } Entry entry = lruEntries.get(key); if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } if (parts[0].equals(CLEAN) && parts.length == 2 + valueCount) { entry.readable = true; entry.currentEditor = null; entry.setLengths(copyOfRange(parts, 2, parts.length)); } else if (parts[0].equals(DIRTY) && parts.length == 2) { entry.currentEditor = new Editor(entry); } else if (parts[0].equals(READ) && parts.length == 2) { // this work was already done by calling lruEntries.get() } else { throw new IOException("unexpected journal line: " + line); } } /** * Computes the initial size and collects garbage as a part of opening the * cache. Dirty entries are assumed to be inconsistent and will be deleted. */ private void processJournal() throws IOException { deleteIfExists(journalFileTmp); for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) { Entry entry = i.next(); if (entry.currentEditor == null) { for (int t = 0; t < valueCount; t++) { size += entry.lengths[t]; } } else { entry.currentEditor = null; for (int t = 0; t < valueCount; t++) { deleteIfExists(entry.getCleanFile(t)); deleteIfExists(entry.getDirtyFile(t)); } i.remove(); } } } /** * Creates a new journal that omits redundant information. 
This replaces the * current journal if it exists. */ private synchronized void rebuildJournal() throws IOException { if (journalWriter != null) { journalWriter.close(); } Writer writer = new BufferedWriter(new FileWriter(journalFileTmp), IO_BUFFER_SIZE); writer.write(MAGIC); writer.write("\n"); writer.write(VERSION_1); writer.write("\n"); writer.write(Integer.toString(appVersion)); writer.write("\n"); writer.write(Integer.toString(valueCount)); writer.write("\n"); writer.write("\n"); for (Entry entry : lruEntries.values()) { if (entry.currentEditor != null) { writer.write(DIRTY + ' ' + entry.key + '\n'); } else { writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); } } writer.close(); journalFileTmp.renameTo(journalFile); journalWriter = new BufferedWriter(new FileWriter(journalFile, true), IO_BUFFER_SIZE); } private static void deleteIfExists(File file) throws IOException { // try { // Libcore.os.remove(file.getPath()); // } catch (ErrnoException errnoException) { // if (errnoException.errno != OsConstants.ENOENT) { // throw errnoException.rethrowAsIOException(); // } // } if (file.exists() && !file.delete()) { throw new IOException(); } } /** * Returns a snapshot of the entry named {@code key}, or null if it doesn't * exist is not currently readable. If a value is returned, it is moved to * the head of the LRU queue. */ public synchronized Snapshot get(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null) { return null; } if (!entry.readable) { return null; } /* * Open all streams eagerly to guarantee that we see a single published * snapshot. If we opened streams lazily then the streams could come * from different edits. */ InputStream[] ins = new InputStream[valueCount]; try { for (int i = 0; i < valueCount; i++) { ins[i] = new FileInputStream(entry.getCleanFile(i)); } } catch (FileNotFoundException e) { // a file must have been deleted manually! return null; } redundantOpCount++; journalWriter.append(READ + ' ' + key + '\n'); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return new Snapshot(key, entry.sequenceNumber, ins); } /** * Returns an editor for the entry named {@code key}, or null if another * edit is in progress. */ public Editor edit(String key) throws IOException { return edit(key, ANY_SEQUENCE_NUMBER); } private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) { return null; // snapshot is stale } if (entry == null) { entry = new Entry(key); lruEntries.put(key, entry); } else if (entry.currentEditor != null) { return null; // another edit is in progress } Editor editor = new Editor(entry); entry.currentEditor = editor; // flush the journal before creating files to prevent file leaks journalWriter.write(DIRTY + ' ' + key + '\n'); journalWriter.flush(); return editor; } /** * Returns the directory where this cache stores its data. */ public File getDirectory() { return directory; } /** * Returns the maximum number of bytes that this cache should use to store * its data. */ public long maxSize() { return maxSize; } /** * Returns the number of bytes currently being used to store the values in * this cache. This may be greater than the max size if a background * deletion is pending. 
*/ public synchronized long size() { return size; } private synchronized void completeEdit(Editor editor, boolean success) throws IOException { Entry entry = editor.entry; if (entry.currentEditor != editor) { throw new IllegalStateException(); } // if this edit is creating the entry for the first time, every index must have a value if (success && !entry.readable) { for (int i = 0; i < valueCount; i++) { if (!entry.getDirtyFile(i).exists()) { editor.abort(); throw new IllegalStateException("edit didn't create file " + i); } } } for (int i = 0; i < valueCount; i++) { File dirty = entry.getDirtyFile(i); if (success) { if (dirty.exists()) { File clean = entry.getCleanFile(i); dirty.renameTo(clean); long oldLength = entry.lengths[i]; long newLength = clean.length(); entry.lengths[i] = newLength; size = size - oldLength + newLength; } } else { deleteIfExists(dirty); } } redundantOpCount++; entry.currentEditor = null; if (entry.readable | success) { entry.readable = true; journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n'); if (success) { entry.sequenceNumber = nextSequenceNumber++; } } else { lruEntries.remove(entry.key); journalWriter.write(REMOVE + ' ' + entry.key + '\n'); } if (size > maxSize || journalRebuildRequired()) { executorService.submit(cleanupCallable); } } /** * We only rebuild the journal when it will halve the size of the journal * and eliminate at least 2000 ops. */ private boolean journalRebuildRequired() { final int REDUNDANT_OP_COMPACT_THRESHOLD = 2000; return redundantOpCount >= REDUNDANT_OP_COMPACT_THRESHOLD && redundantOpCount >= lruEntries.size(); } /** * Drops the entry for {@code key} if it exists and can be removed. Entries * actively being edited cannot be removed. * * @return true if an entry was removed. */ public synchronized boolean remove(String key) throws IOException { checkNotClosed(); validateKey(key); Entry entry = lruEntries.get(key); if (entry == null || entry.currentEditor != null) { return false; } for (int i = 0; i < valueCount; i++) { File file = entry.getCleanFile(i); if (!file.delete()) { throw new IOException("failed to delete " + file); } size -= entry.lengths[i]; entry.lengths[i] = 0; } redundantOpCount++; journalWriter.append(REMOVE + ' ' + key + '\n'); lruEntries.remove(key); if (journalRebuildRequired()) { executorService.submit(cleanupCallable); } return true; } /** * Returns true if this cache has been closed. */ public boolean isClosed() { return journalWriter == null; } private void checkNotClosed() { if (journalWriter == null) { throw new IllegalStateException("cache is closed"); } } /** * Force buffered operations to the filesystem. */ public synchronized void flush() throws IOException { checkNotClosed(); trimToSize(); journalWriter.flush(); } /** * Closes this cache. Stored values will remain on the filesystem. */ public synchronized void close() throws IOException { if (journalWriter == null) { return; // already closed } for (Entry entry : new ArrayList<Entry>(lruEntries.values())) { if (entry.currentEditor != null) { entry.currentEditor.abort(); } } trimToSize(); journalWriter.close(); journalWriter = null; } private void trimToSize() throws IOException { while (size > maxSize) { // Map.Entry<String, Entry> toEvict = lruEntries.eldest(); final Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next(); remove(toEvict.getKey()); } } /** * Closes the cache and deletes all of its stored values. This will delete * all files in the cache directory including files that weren't created by * the cache. 
*/ public void delete() throws IOException { close(); deleteContents(directory); } private void validateKey(String key) { if (key.contains(" ") || key.contains("\n") || key.contains("\r")) { throw new IllegalArgumentException( "keys must not contain spaces or newlines: \"" + key + "\""); } } private static String inputStreamToString(InputStream in) throws IOException { return readFully(new InputStreamReader(in, UTF_8)); } /** * A snapshot of the values for an entry. */ public final class Snapshot implements Closeable { private final String key; private final long sequenceNumber; private final InputStream[] ins; private Snapshot(String key, long sequenceNumber, InputStream[] ins) { this.key = key; this.sequenceNumber = sequenceNumber; this.ins = ins; } /** * Returns an editor for this snapshot's entry, or null if either the * entry has changed since this snapshot was created or if another edit * is in progress. */ public Editor edit() throws IOException { return DiskLruCache.this.edit(key, sequenceNumber); } /** * Returns the unbuffered stream with the value for {@code index}. */ public InputStream getInputStream(int index) { return ins[index]; } /** * Returns the string value for {@code index}. */ public String getString(int index) throws IOException { return inputStreamToString(getInputStream(index)); } @Override public void close() { for (InputStream in : ins) { closeQuietly(in); } } } /** * Edits the values for an entry. */ public final class Editor { private final Entry entry; private boolean hasErrors; private Editor(Entry entry) { this.entry = entry; } /** * Returns an unbuffered input stream to read the last committed value, * or null if no value has been committed. */ public InputStream newInputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } if (!entry.readable) { return null; } return new FileInputStream(entry.getCleanFile(index)); } } /** * Returns the last committed value as a string, or null if no value * has been committed. */ public String getString(int index) throws IOException { InputStream in = newInputStream(index); return in != null ? inputStreamToString(in) : null; } /** * Returns a new unbuffered output stream to write the value at * {@code index}. If the underlying output stream encounters errors * when writing to the filesystem, this edit will be aborted when * {@link #commit} is called. The returned output stream does not throw * IOExceptions. */ public OutputStream newOutputStream(int index) throws IOException { synchronized (DiskLruCache.this) { if (entry.currentEditor != this) { throw new IllegalStateException(); } return new FaultHidingOutputStream(new FileOutputStream(entry.getDirtyFile(index))); } } /** * Sets the value at {@code index} to {@code value}. */ public void set(int index, String value) throws IOException { Writer writer = null; try { writer = new OutputStreamWriter(newOutputStream(index), UTF_8); writer.write(value); } finally { closeQuietly(writer); } } /** * Commits this edit so it is visible to readers. This releases the * edit lock so another edit may be started on the same key. */ public void commit() throws IOException { if (hasErrors) { completeEdit(this, false); remove(entry.key); // the previous entry is stale } else { completeEdit(this, true); } } /** * Aborts this edit. This releases the edit lock so another edit may be * started on the same key. 
*/ public void abort() throws IOException { completeEdit(this, false); } private class FaultHidingOutputStream extends FilterOutputStream { private FaultHidingOutputStream(OutputStream out) { super(out); } @Override public void write(int oneByte) { try { out.write(oneByte); } catch (IOException e) { hasErrors = true; } } @Override public void write(byte[] buffer, int offset, int length) { try { out.write(buffer, offset, length); } catch (IOException e) { hasErrors = true; } } @Override public void close() { try { out.close(); } catch (IOException e) { hasErrors = true; } } @Override public void flush() { try { out.flush(); } catch (IOException e) { hasErrors = true; } } } } private final class Entry { private final String key; /** Lengths of this entry's files. */ private final long[] lengths; /** True if this entry has ever been published */ private boolean readable; /** The ongoing edit or null if this entry is not being edited. */ private Editor currentEditor; /** The sequence number of the most recently committed edit to this entry. */ private long sequenceNumber; private Entry(String key) { this.key = key; this.lengths = new long[valueCount]; } public String getLengths() throws IOException { StringBuilder result = new StringBuilder(); for (long size : lengths) { result.append(' ').append(size); } return result.toString(); } /** * Set lengths using decimal numbers like "10123". */ private void setLengths(String[] strings) throws IOException { if (strings.length != valueCount) { throw invalidLengths(strings); } try { for (int i = 0; i < strings.length; i++) { lengths[i] = Long.parseLong(strings[i]); } } catch (NumberFormatException e) { throw invalidLengths(strings); } } private IOException invalidLengths(String[] strings) throws IOException { throw new IOException("unexpected journal line: " + Arrays.toString(strings)); } public File getCleanFile(int i) { return new File(directory, key + "." + i); } public File getDirtyFile(int i) { return new File(directory, key + "." + i + ".tmp"); } } }
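A short usage sketch of the DiskLruCache above. The directory name, key and payload are illustrative, error handling is trimmed, and real code should hash its keys (the ImageLoader below uses MD5). The import matches the package of the pasted source; adjust it if the class was moved.

```java
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import android.content.Context;

import libcore.io.DiskLruCache;   // package as pasted above

public class DiskCacheDemo {
    public void demo(Context context, byte[] bytes) throws IOException {
        File cacheDir = new File(context.getCacheDir(), "bitmap");
        DiskLruCache diskCache = DiskLruCache.open(cacheDir, 1 /* appVersion */,
                1 /* valueCount */, 50L * 1024 * 1024 /* 50 MB */);

        // Write: only one Editor per key at a time; always commit() or abort().
        String key = "some_hashed_key";
        DiskLruCache.Editor editor = diskCache.edit(key);
        if (editor != null) {
            OutputStream out = editor.newOutputStream(0);
            out.write(bytes);
            out.close();
            editor.commit();
        }
        diskCache.flush();

        // Read: a Snapshot exposes one InputStream per value index.
        DiskLruCache.Snapshot snapshot = diskCache.get(key);
        if (snapshot != null) {
            InputStream in = snapshot.getInputStream(0);
            // ... decode the stream, e.g. via decodeSampledBitmapFromFileDescriptor ...
            snapshot.close();
        }
        diskCache.close();
    }
}
```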
- Finally, ImageLoader.java: internally it uses a Handler plus a thread pool for concurrent loading. AsyncTask is not used because, since API 11, it executes tasks serially by default (parallelism would require executeOnExecutor).
import java.io.BufferedInputStream; import java.io.BufferedOutputStream; import java.io.File; import java.io.FileDescriptor; import java.io.FileInputStream; import java.io.IOException; import java.io.OutputStream; import java.net.HttpURLConnection; import java.net.URL; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.concurrent.Executor; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadFactory; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import com.ryg.chapter_12.R; import com.ryg.chapter_12.utils.MyUtils; import android.annotation.TargetApi; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.os.Build; import android.os.Build.VERSION_CODES; import android.os.Environment; import android.os.Handler; import android.os.Looper; import android.os.Message; import android.os.StatFs; import android.support.v4.util.LruCache; import android.util.Log; import android.widget.ImageView; public class ImageLoader { private static final String TAG = "ImageLoader"; public static final int MESSAGE_POST_RESULT = 1; private static final int CPU_COUNT = Runtime.getRuntime() .availableProcessors(); private static final int CORE_POOL_SIZE = CPU_COUNT + 1; private static final int MAXIMUM_POOL_SIZE = CPU_COUNT * 2 + 1; private static final long KEEP_ALIVE = 10L; private static final int TAG_KEY_URI = R.id.imageloader_uri; private static final long DISK_CACHE_SIZE = 1024 * 1024 * 50; private static final int IO_BUFFER_SIZE = 8 * 1024; private static final int DISK_CACHE_INDEX = 0; private boolean mIsDiskLruCacheCreated = false; private static final ThreadFactory sThreadFactory = new ThreadFactory() { private final AtomicInteger mCount = new AtomicInteger(1); public Thread newThread(Runnable r) { return new Thread(r, "ImageLoader#" + mCount.getAndIncrement()); } }; public static final Executor THREAD_POOL_EXECUTOR = new ThreadPoolExecutor( CORE_POOL_SIZE, MAXIMUM_POOL_SIZE, KEEP_ALIVE, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), sThreadFactory); private Handler mMainHandler = new Handler(Looper.getMainLooper()) { @Override public void handleMessage(Message msg) { LoaderResult result = (LoaderResult) msg.obj; ImageView imageView = result.imageView; String uri = (String) imageView.getTag(TAG_KEY_URI); if (uri.equals(result.uri)) { imageView.setImageBitmap(result.bitmap); } else { Log.w(TAG, "set image bitmap,but url has changed, ignored!"); } }; }; private Context mContext; private ImageResizer mImageResizer = new ImageResizer(); private LruCache<String, Bitmap> mMemoryCache; private DiskLruCache mDiskLruCache; private ImageLoader(Context context) { mContext = context.getApplicationContext(); int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024); int cacheSize = maxMemory / 8; mMemoryCache = new LruCache<String, Bitmap>(cacheSize) { @Override protected int sizeOf(String key, Bitmap bitmap) { return bitmap.getRowBytes() * bitmap.getHeight() / 1024; } }; File diskCacheDir = getDiskCacheDir(mContext, "bitmap"); if (!diskCacheDir.exists()) { diskCacheDir.mkdirs(); } if (getUsableSpace(diskCacheDir) > DISK_CACHE_SIZE) { try { mDiskLruCache = DiskLruCache.open(diskCacheDir, 1, 1, DISK_CACHE_SIZE); mIsDiskLruCacheCreated = true; } catch (IOException e) { e.printStackTrace(); } } } /** * build a new instance of ImageLoader * @param context * @return a new instance of 
ImageLoader */ public static ImageLoader build(Context context) { return new ImageLoader(context); } private void addBitmapToMemoryCache(String key, Bitmap bitmap) { if (getBitmapFromMemCache(key) == null) { mMemoryCache.put(key, bitmap); } } private Bitmap getBitmapFromMemCache(String key) { return mMemoryCache.get(key); } /** * load bitmap from memory cache or disk cache or network async, then bind imageView and bitmap. * NOTE THAT: should run in UI Thread * @param uri http url * @param imageView bitmap's bind object */ public void bindBitmap(final String uri, final ImageView imageView) { bindBitmap(uri, imageView, 0, 0); } public void bindBitmap(final String uri, final ImageView imageView, final int reqWidth, final int reqHeight) { imageView.setTag(TAG_KEY_URI, uri); Bitmap bitmap = loadBitmapFromMemCache(uri); if (bitmap != null) { imageView.setImageBitmap(bitmap); return; } Runnable loadBitmapTask = new Runnable() { @Override public void run() { Bitmap bitmap = loadBitmap(uri, reqWidth, reqHeight); if (bitmap != null) { LoaderResult result = new LoaderResult(imageView, uri, bitmap); mMainHandler.obtainMessage(MESSAGE_POST_RESULT, result).sendToTarget(); } } }; THREAD_POOL_EXECUTOR.execute(loadBitmapTask); } /** * load bitmap from memory cache or disk cache or network. * @param uri http url * @param reqWidth the width ImageView desired * @param reqHeight the height ImageView desired * @return bitmap, maybe null. */ public Bitmap loadBitmap(String uri, int reqWidth, int reqHeight) { Bitmap bitmap = loadBitmapFromMemCache(uri); if (bitmap != null) { Log.d(TAG, "loadBitmapFromMemCache,url:" + uri); return bitmap; } try { bitmap = loadBitmapFromDiskCache(uri, reqWidth, reqHeight); if (bitmap != null) { Log.d(TAG, "loadBitmapFromDisk,url:" + uri); return bitmap; } bitmap = loadBitmapFromHttp(uri, reqWidth, reqHeight); Log.d(TAG, "loadBitmapFromHttp,url:" + uri); } catch (IOException e) { e.printStackTrace(); } if (bitmap == null && !mIsDiskLruCacheCreated) { Log.w(TAG, "encounter error, DiskLruCache is not created."); bitmap = downloadBitmapFromUrl(uri); } return bitmap; } private Bitmap loadBitmapFromMemCache(String url) { final String key = hashKeyFormUrl(url); Bitmap bitmap = getBitmapFromMemCache(key); return bitmap; } private Bitmap loadBitmapFromHttp(String url, int reqWidth, int reqHeight) throws IOException { if (Looper.myLooper() == Looper.getMainLooper()) { throw new RuntimeException("can not visit network from UI Thread."); } if (mDiskLruCache == null) { return null; } String key = hashKeyFormUrl(url); DiskLruCache.Editor editor = mDiskLruCache.edit(key); if (editor != null) { OutputStream outputStream = editor.newOutputStream(DISK_CACHE_INDEX); if (downloadUrlToStream(url, outputStream)) { editor.commit(); } else { editor.abort(); } mDiskLruCache.flush(); } return loadBitmapFromDiskCache(url, reqWidth, reqHeight); } private Bitmap loadBitmapFromDiskCache(String url, int reqWidth, int reqHeight) throws IOException { if (Looper.myLooper() == Looper.getMainLooper()) { Log.w(TAG, "load bitmap from UI Thread, it's not recommended!"); } if (mDiskLruCache == null) { return null; } Bitmap bitmap = null; String key = hashKeyFormUrl(url); DiskLruCache.Snapshot snapShot = mDiskLruCache.get(key); if (snapShot != null) { FileInputStream fileInputStream = (FileInputStream)snapShot.getInputStream(DISK_CACHE_INDEX); FileDescriptor fileDescriptor = fileInputStream.getFD(); bitmap = mImageResizer.decodeSampledBitmapFromFileDescriptor(fileDescriptor, reqWidth, reqHeight); if (bitmap != null) { 
addBitmapToMemoryCache(key, bitmap); } } return bitmap; } public boolean downloadUrlToStream(String urlString, OutputStream outputStream) { HttpURLConnection urlConnection = null; BufferedOutputStream out = null; BufferedInputStream in = null; try { final URL url = new URL(urlString); urlConnection = (HttpURLConnection) url.openConnection(); in = new BufferedInputStream(urlConnection.getInputStream(), IO_BUFFER_SIZE); out = new BufferedOutputStream(outputStream, IO_BUFFER_SIZE); int b; while ((b = in.read()) != -1) { out.write(b); } return true; } catch (IOException e) { Log.e(TAG, "downloadBitmap failed." + e); } finally { if (urlConnection != null) { urlConnection.disconnect(); } MyUtils.close(out); MyUtils.close(in); } return false; } private Bitmap downloadBitmapFromUrl(String urlString) { Bitmap bitmap = null; HttpURLConnection urlConnection = null; BufferedInputStream in = null; try { final URL url = new URL(urlString); urlConnection = (HttpURLConnection) url.openConnection(); in = new BufferedInputStream(urlConnection.getInputStream(), IO_BUFFER_SIZE); bitmap = BitmapFactory.decodeStream(in); } catch (final IOException e) { Log.e(TAG, "Error in downloadBitmap: " + e); } finally { if (urlConnection != null) { urlConnection.disconnect(); } MyUtils.close(in); } return bitmap; } private String hashKeyFormUrl(String url) { String cacheKey; try { final MessageDigest mDigest = MessageDigest.getInstance("MD5"); mDigest.update(url.getBytes()); cacheKey = bytesToHexString(mDigest.digest()); } catch (NoSuchAlgorithmException e) { cacheKey = String.valueOf(url.hashCode()); } return cacheKey; } private String bytesToHexString(byte[] bytes) { StringBuilder sb = new StringBuilder(); for (int i = 0; i < bytes.length; i++) { String hex = Integer.toHexString(0xFF & bytes[i]); if (hex.length() == 1) { sb.append('0'); } sb.append(hex); } return sb.toString(); } public File getDiskCacheDir(Context context, String uniqueName) { boolean externalStorageAvailable = Environment .getExternalStorageState().equals(Environment.MEDIA_MOUNTED); final String cachePath; if (externalStorageAvailable) { cachePath = context.getExternalCacheDir().getPath(); } else { cachePath = context.getCacheDir().getPath(); } return new File(cachePath + File.separator + uniqueName); } @TargetApi(VERSION_CODES.GINGERBREAD) private long getUsableSpace(File path) { if (Build.VERSION.SDK_INT >= VERSION_CODES.GINGERBREAD) { return path.getUsableSpace(); } final StatFs stats = new StatFs(path.getPath()); return (long) stats.getBlockSize() * (long) stats.getAvailableBlocks(); } private static class LoaderResult { public ImageView imageView; public String uri; public Bitmap bitmap; public LoaderResult(ImageView imageView, String uri, Bitmap bitmap) { this.imageView = imageView; this.uri = uri; this.bitmap = bitmap; } } }
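A usage sketch of the ImageLoader above from an Activity or adapter. The URL, target size and mImageView are illustrative.

```java
// Typically built once (for example in onCreate) and reused.
ImageLoader imageLoader = ImageLoader.build(this);

// Must be called on the UI thread: it checks the memory cache synchronously,
// falls back to the disk cache / network on the thread pool, and posts the
// result back to the ImageView through the main-thread Handler.
imageLoader.bindBitmap("http://example.com/a.jpg", mImageView, 100, 100);
```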
- To keep images from being bound to the wrong row, call ImageView.setTag(uri) and compare getTag() with the URL before setting the bitmap.
- To reduce jank, add a scroll listener to the ListView/GridView and pause image loading while it is scrolling; part of the code is shown below.
@Override
public View getView(int position, View convertView, ViewGroup parent) {
    ViewHolder holder = null;
    if (convertView == null) {
        convertView = mInflater.inflate(R.layout.image_list_item, parent, false);
        holder = new ViewHolder();
        holder.imageView = (ImageView) convertView.findViewById(R.id.image);
        convertView.setTag(holder);
    } else {
        holder = (ViewHolder) convertView.getTag();
    }
    ImageView imageView = holder.imageView;
    final String tag = (String) imageView.getTag();
    final String uri = getItem(position);
    if (!uri.equals(tag)) {
        imageView.setImageDrawable(mDefaultBitmapDrawable);
    }
    // Only load when the list is idle and network loading is allowed.
    if (mIsGridViewIdle && mCanGetBitmapFromNetWork) {
        imageView.setTag(uri);
        mImageLoader.bindBitmap(uri, imageView, mImageWidth, mImageWidth);
    }
    return convertView;
}

@Override
public void onScrollStateChanged(AbsListView view, int scrollState) {
    if (scrollState == OnScrollListener.SCROLL_STATE_IDLE) {
        mIsGridViewIdle = true;
        mImageAdapter.notifyDataSetChanged();
    } else {
        mIsGridViewIdle = false;
    }
}

@Override
public void onScroll(AbsListView view, int firstVisibleItem,
        int visibleItemCount, int totalItemCount) {
    // ignored
}