Graphics

http://source.android.com/devices/graphics.html

 


  The Android framework has a variety of graphics rendering APIs for 2D and 3D that interact with  your HAL implementations and graphics drivers, so it is important to have a good understanding of  how they work at a higher level. There are two general ways that app developers can draw things  to the screen: with Canvas or OpenGL.

android.graphics.Canvas is a 2D graphics API and is the most widely used graphics API by developers. Canvas operations draw all the stock and custom android.view.Views in Android. Prior to Android 3.0, Canvas always used the non-hardware-accelerated Skia 2D drawing library to draw.

  Introduced in Android 3.0, hardware acceleration for Canvas APIs uses a new drawing library called OpenGLRenderer that translates Canvas operations to OpenGL operations so that they can execute on the GPU. Developers previously had to opt in to this feature, but beginning in Android 4.0, hardware-accelerated Canvas is enabled by default. Consequently, a hardware GPU that supports OpenGL ES 2.0 is mandatory for Android 4.0 devices.

  Additionally, the Hardware Acceleration guide  explains how the hardware-accelerated drawing path works and identifies the differences in behavior from the software drawing path.

  The other main way that developers render graphics is by using OpenGL ES 1.x or 2.0 to directly  render to a surface.  Android provides OpenGL ES interfaces in the   android.opengl package  that a developer can use to call into your GL implementation with the SDK or with native APIs  provided in the Android NDK.   

Note: A third option, Renderscript, was introduced in Android 3.0 to serve as a platform-agnostic graphics rendering API (it used OpenGL ES 2.0 under the hood), but it will be deprecated starting in the Android 4.1 release.

  How Android Renders Graphics


  No matter what rendering API developers use, everything is rendered onto a buffer of pixel data  called a "surface." Every window that is created on the Android platform is backed by a surface.  All of the visible surfaces that are rendered to are composited onto the display  by the SurfaceFlinger, Android's system service that manages composition of surfaces.  Of course, there are more components that are involved in graphics rendering, and the  main ones are described below:

Image Stream Producers  
Image stream producers can be things such as an OpenGL ES game, video buffers from the media server,      a Canvas 2D application, or basically anything that produces graphic buffers for consumption.    
Image Stream Consumers  
The most common consumer of image streams is SurfaceFlinger, the system service that consumes    the currently visible surfaces and composites them onto the display using    information provided by the Window Manager. SurfaceFlinger is the only service that can    modify the content of the display. SurfaceFlinger uses OpenGL and the    hardware composer to compose a group of surfaces. Other OpenGL ES apps can consume image    streams as well, such as the camera app consuming a camera preview image stream.  
SurfaceTexture  
SurfaceTexture contains the logic that ties image stream producers and image stream consumers together    and is made of three parts: SurfaceTextureClient, ISurfaceTexture, and     SurfaceTexture (in this case, SurfaceTexture is the actual C++ class and not    the name of the overall component). These three parts facilitate the producer ( SurfaceTextureClient),    binder ( ISurfaceTexture), and consumer ( SurfaceTexture)    components of SurfaceTexture in processes such as requesting memory from Gralloc,    sharing memory across process boundaries, synchronizing access to buffers, and pairing the appropriate consumer with the producer.    SurfaceTexture can operate in both asynchronous (producer never blocks waiting for consumer and drops frames) and    synchronous (producer waits for consumer to process textures) modes. Some examples of image    producers are the camera preview produced by the camera HAL or an OpenGL ES game. Some examples    of image consumers are SurfaceFlinger or another app that wants to display an OpenGL ES stream    such as the camera app displaying the camera viewfinder.  
Window Manager  
    The Android system service that controls window lifecycles, input and focus events, screen    orientation, transitions, animations, position, transforms, z-order, and many other aspects of    a window (a container for views). A window is always backed by a surface. The Window Manager    sends all of the window metadata to SurfaceFlinger, so SurfaceFlinger can use that data    to figure out how to composite surfaces on the display.  
Hardware Composer  
    The hardware abstraction for the display subsystem. SurfaceFlinger can delegate certain composition work to the hardware composer to offload work from OpenGL and the GPU. This makes compositing faster than having SurfaceFlinger do all the work. Starting with Jellybean MR1, new versions of the hardware composer have been introduced. See the Hardware Composer HAL section below for more information.
Gralloc  
Allocates memory for graphics buffers. See the Gralloc HAL section below for more information.

  The following diagram shows how these components work together:

Figure 1. How surfaces are rendered
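
To make the producer half of this picture concrete, here is a minimal sketch of a CPU-based image stream producer written in C against the stable NDK ANativeWindow API. It assumes the window was obtained elsewhere (for example, from a Java Surface via ANativeWindow_fromSurface()), assumes a 32-bit pixel format, and omits error handling; it is an illustration, not a prescribed implementation.

#include <stdint.h>
#include <android/native_window.h>

static void produce_one_frame(ANativeWindow *window) {
    ANativeWindow_Buffer buffer;

    // Dequeue a buffer from the window's queue and lock it for CPU access.
    if (ANativeWindow_lock(window, &buffer, NULL) != 0)
        return;

    // Fill the buffer; here it is simply cleared to opaque black,
    // assuming a 32-bit RGBA/RGBX pixel format.
    uint32_t *pixels = (uint32_t *) buffer.bits;
    for (int y = 0; y < buffer.height; y++) {
        for (int x = 0; x < buffer.width; x++)
            pixels[y * buffer.stride + x] = 0xFF000000;
    }

    // Queue the buffer back so the consumer (typically SurfaceFlinger)
    // can composite it onto the display.
    ANativeWindow_unlockAndPost(window);
}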

  What You Need to Provide


The following list and sections describe what you need to provide to support graphics in your product:

  • OpenGL ES 1.x Driver  
  • OpenGL ES 2.0 Driver  
  • EGL Driver  
  • Gralloc HAL implementation  
  • Hardware Composer HAL implementation  
  • Framebuffer HAL implementation (no longer needed if you use version 1.1 or later of the hardware composer)  

  OpenGL and EGL drivers

  You must provide drivers for OpenGL ES 1.x, OpenGL ES 2.0, and EGL. Some key things to keep in  mind are:

  • The GL driver needs to be robust and conformant to OpenGL ES standards.  
  • Do not limit the number of GL contexts. Because Android allows apps in the background and  tries to keep GL contexts alive, you should not limit the number of contexts in your driver. It  is not uncommon to have 20-30 active GL contexts at once, so you should also be careful with the  amount of memory allocated for each context.  
  • Support the YV12 image format and any other YUV image formats that come from other    components in the system such as media codecs or the camera.  
  • Support the mandatory extensions: GL_OES_texture_external,   EGL_ANDROID_image_native_buffer, and EGL_ANDROID_recordable. We highly  recommend supporting EGL_ANDROID_blob_cache and EGL_KHR_fence_sync as  well (a sketch of a runtime extension check follows this list).
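
As an illustration, a conformance test might verify the EGL extension string at runtime. The sketch below is an assumption about how such a check could look, not part of any Android API; dpy must be an initialized display, and the naive strstr() match would be replaced by whole-token matching in real code.

#include <string.h>
#include <stdio.h>
#include <EGL/egl.h>

// Naive substring match; a robust check would match whole tokens.
static int has_egl_extension(EGLDisplay dpy, const char *ext) {
    const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
    return exts != NULL && strstr(exts, ext) != NULL;
}

static void check_mandatory_egl_extensions(EGLDisplay dpy) {
    static const char *required[] = {
        "EGL_ANDROID_image_native_buffer",
        "EGL_ANDROID_recordable",
    };
    for (size_t i = 0; i < sizeof(required) / sizeof(required[0]); i++) {
        if (!has_egl_extension(dpy, required[i]))
            fprintf(stderr, "missing mandatory EGL extension: %s\n",
                    required[i]);
    }
    // The GL-side extension can be checked the same way against
    // glGetString(GL_EXTENSIONS) once a context is current.
}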

  Note that the OpenGL API exposed to app developers is different from the OpenGL interface that  you are implementing. Apps do not have access to the GL driver layer, and must go through the  interface provided by the APIs.

  Pre-rotation

Many times, hardware overlays do not support rotation, so the solution is to pre-transform the buffer before it reaches SurfaceFlinger. A query hint in ANativeWindow was added (NATIVE_WINDOW_TRANSFORM_HINT) that represents the most likely transform to be applied to the buffer by SurfaceFlinger. Your GL driver can use this hint to pre-transform the buffer before it reaches SurfaceFlinger, so when the buffer actually arrives, it is correctly transformed. See the ANativeWindow interface defined in system/core/include/system/window.h for more details. The following is some pseudo-code that implements this in the hardware composer:

int w, h, hintTransform;
anw->query(anw, NATIVE_WINDOW_DEFAULT_WIDTH, &w);
anw->query(anw, NATIVE_WINDOW_DEFAULT_HEIGHT, &h);
anw->query(anw, NATIVE_WINDOW_TRANSFORM_HINT, &hintTransform);
if (hintTransform & HAL_TRANSFORM_ROT_90)
    swap(w, h);

native_window_set_buffers_dimensions(anw, w, h);
anw->dequeueBuffer(...);

// Here the GL driver renders the content transformed by hintTransform.

// Compute the inverse transform to report with the buffer: composing a
// 90-degree rotation with a 180-degree rotation yields the inverse
// 270-degree rotation.
int inverseTransform = hintTransform;
if (hintTransform & HAL_TRANSFORM_ROT_90)
    inverseTransform ^= HAL_TRANSFORM_ROT_180;

native_window_set_buffers_transform(anw, inverseTransform);

anw->queueBuffer(...);

  Gralloc HAL

  The graphics memory allocator is needed to allocate memory that is requested by SurfaceTextureClient in image producers. You can find a stub implementation of the HAL at hardware/libhardware/modules/gralloc.
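
For orientation, here is a minimal sketch of how native code built in the Android tree could load the Gralloc module and allocate a single buffer through the alloc_device_t interface declared in hardware/libhardware/include/hardware/gralloc.h. The usage flags and pixel format chosen here are arbitrary examples, and error handling is reduced to early returns.

#include <stddef.h>
#include <hardware/hardware.h>
#include <hardware/gralloc.h>

static buffer_handle_t alloc_one_buffer(int w, int h) {
    const hw_module_t *module;
    alloc_device_t *alloc_dev;
    buffer_handle_t handle;
    int stride;

    if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module) != 0)
        return NULL;
    if (gralloc_open(module, &alloc_dev) != 0)
        return NULL;

    // Usage flags tell the allocator who will touch the buffer so it can
    // pick a compatible layout and memory pool. Adding
    // GRALLOC_USAGE_PROTECTED here would restrict the buffer to a
    // hardware-protected path (see "Protected buffers" below).
    int usage = GRALLOC_USAGE_HW_TEXTURE | GRALLOC_USAGE_SW_WRITE_OFTEN;

    if (alloc_dev->alloc(alloc_dev, w, h, HAL_PIXEL_FORMAT_RGBA_8888,
                         usage, &handle, &stride) != 0)
        handle = NULL;

    // Real code would keep alloc_dev open, free the buffer later with
    // alloc_dev->free(), and close the device with gralloc_close().
    return handle;
}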

  Protected buffers

  There is a gralloc usage flag GRALLOC_USAGE_PROTECTED that allows the graphics buffer to be displayed only through a hardware-protected path.

  Hardware Composer HAL

  The hardware composer is used by SurfaceFlinger to composite surfaces to the screen. The hardware  composer abstracts things like overlays and 2D blitters and helps offload some things that would  normally be done with OpenGL. 

Jellybean MR1 introduces a new version of the HAL. We recommend that you start using version 1.1 of the hardware composer HAL, as it provides support for the newest features (explicit synchronization, external displays, and so on). Keep in mind that, in addition to the 1.1 version, there is also a 1.0 version of the HAL that we used for internal compatibility reasons, and a 1.2 draft of the hardware composer HAL. We recommend that you implement version 1.1 until 1.2 is out of draft mode.
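
As a rough sketch (assuming you are building in the Android tree), loading the module and checking which HAL version a device implements might look like the following; the version constants are defined in hwcomposer_defs.h.

#include <stddef.h>
#include <hardware/hardware.h>
#include <hardware/hwcomposer.h>

static hwc_composer_device_1_t *open_hwc(void) {
    const hw_module_t *module;
    hwc_composer_device_1_t *dev;

    if (hw_get_module(HWC_HARDWARE_MODULE_ID, &module) != 0)
        return NULL;
    if (hwc_open_1(module, &dev) != 0)
        return NULL;

    // Require at least version 1.1 of the hardware composer HAL.
    if (dev->common.version < HWC_DEVICE_API_VERSION_1_1) {
        hwc_close_1(dev);
        return NULL;
    }
    return dev;
}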

Because the physical display hardware behind the hardware composer  abstraction layer can vary from device to device, it is difficult to define recommended features, but  here is some guidance:

  • The hardware composer should support at least 4 overlays (status bar, system bar, application,  and live wallpaper) for phones and 3 overlays for tablets (no status bar).
  • Layers can be bigger than the screen, so the hardware composer should be able to handle layers that are larger than the display (for example, a wallpaper).
  • Pre-multiplied per-pixel alpha blending and per-plane alpha blending should be supported at the same time.
  • The hardware composer should be able to consume the same buffers that the GPU, camera, video decoder, and Skia produce, so supporting some of the following properties is helpful:   
    • RGBA packing order
    • YUV formats
    • Tiling, swizzling, and stride properties
  • A hardware path for protected video playback must be present if you want to support protected content.

  The general recommendation when implementing your hardware composer is to implement a no-op hardware composer first. Once you have the structure done, implement a simple algorithm to delegate composition to the hardware composer. For example, just delegate the first three or four surfaces to the overlay hardware of the hardware composer (a sketch of such a first pass follows the list below). After that, focus on common use cases, such as:

  • Full-screen games in portrait and landscape mode  
  • Full-screen video with closed captioning and playback control  
  • The home screen (compositing the status bar, system bar, application window, and live  wallpapers)  
  • Protected video playback  
  • Multiple display support  
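
The sketch below illustrates what such a first-pass prepare() could look like for HWC version 1.1. MAX_OVERLAYS is a hypothetical capability of the display engine, and the policy (first eligible layers go to overlays, the rest to GLES) is only the simple starting algorithm described above, not a production heuristic.

#include <stddef.h>
#include <hardware/hwcomposer.h>

#define MAX_OVERLAYS 4  /* hypothetical overlay count for this device */

static int simple_prepare(hwc_composer_device_1_t *dev, size_t numDisplays,
                          hwc_display_contents_1_t **displays) {
    (void) dev;  /* unused in this sketch */

    if (numDisplays == 0 || displays[0] == NULL)
        return 0;

    hwc_display_contents_1_t *list = displays[0];  /* primary display */
    size_t used = 0;

    // A real implementation would only recompute its assignments when
    // HWC_GEOMETRY_CHANGED is set in list->flags.
    for (size_t i = 0; i < list->numHwLayers; i++) {
        hwc_layer_1_t *layer = &list->hwLayers[i];

        // The FRAMEBUFFER_TARGET layer (new in HWC 1.1) receives the
        // GLES composition result and must not be reassigned.
        if (layer->compositionType == HWC_FRAMEBUFFER_TARGET)
            continue;

        // Layers SurfaceFlinger marked as SKIP, and anything beyond our
        // overlay budget, fall back to GLES composition.
        if ((layer->flags & HWC_SKIP_LAYER) || used >= MAX_OVERLAYS) {
            layer->compositionType = HWC_FRAMEBUFFER;
            continue;
        }

        layer->compositionType = HWC_OVERLAY;
        used++;
    }
    return 0;
}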

  After implementing the common use cases, you can focus on optimizations, such as intelligently selecting the surfaces to send to the overlay hardware in a way that maximizes the load taken off the GPU. Another optimization is to detect whether the screen is updating. If not, delegate composition to OpenGL instead of the hardware composer to save power. When the screen updates again, continue to offload composition to the hardware composer.

  You can find the HAL for the hardware composer in the   hardware/libhardware/include/hardware/hwcomposer.h and hardware/libhardware/include/hardware/hwcomposer_defs.h  files. A stub implementation is available in the hardware/libhardware/modules/hwcomposer directory.

  VSYNC

  VSYNC synchronizes certain events to the refresh cycle of the display. Applications always  start drawing on a VSYNC boundary and SurfaceFlinger always composites on a VSYNC boundary.  This eliminates stutters and improves visual performance of graphics.  The hardware composer has a function pointer

int (*waitForVsync)(int64_t *timestamp)

that points to a function you must implement for VSYNC. This function blocks until a VSYNC happens and returns the timestamp of the actual VSYNC. A client can receive VSYNC timestamps once, at specified intervals, or continuously (an interval of 1). You must implement VSYNC with no more than 1 ms of lag (0.5 ms or less is recommended), and the timestamps returned must be extremely accurate.
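
One plausible way to implement the blocking hook is sketched below, under the assumption that the display driver delivers a timestamped vsync event to a user-space thread: the event handler records the timestamp and wakes any waiters. All names here are illustrative, not part of the HAL.

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t vsync_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  vsync_cond = PTHREAD_COND_INITIALIZER;
static int64_t         vsync_timestamp;
static uint32_t        vsync_count;

/* Called by the display driver's event thread on every vsync. */
void on_vsync_event(int64_t timestamp_ns) {
    pthread_mutex_lock(&vsync_lock);
    vsync_timestamp = timestamp_ns;
    vsync_count++;
    pthread_cond_broadcast(&vsync_cond);
    pthread_mutex_unlock(&vsync_lock);
}

int waitForVsync(int64_t *timestamp) {
    pthread_mutex_lock(&vsync_lock);
    uint32_t seen = vsync_count;
    while (vsync_count == seen)       /* block until the next vsync */
        pthread_cond_wait(&vsync_cond, &vsync_lock);
    *timestamp = vsync_timestamp;     /* time of the actual vsync */
    pthread_mutex_unlock(&vsync_lock);
    return 0;
}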

Explicit synchronization

Explicit synchronization is required in Jellybean MR1 and later and provides a mechanism for Gralloc buffers to be acquired and released in a synchronized way. Explicit synchronization allows producers and consumers of graphics buffers to signal when they are done with a buffer. This allows the Android system to asynchronously queue buffers to be read or written with the certainty that another consumer or producer does not currently need them.

This communication is facilitated with the use of synchronization fences, which are now required when requesting a buffer for consuming or producing. The synchronization framework consists of three main parts:

  • sync_timeline: a monotonically increasing timeline that should be implemented for each driver instance. This is essentially a counter of jobs submitted to the kernel for a particular piece of hardware.
  • sync_pt: a single value or point on a sync_timeline. A point has three states: active, signaled, and error. Points start in the active state and transition to the signaled or error states. For instance, when a buffer is no longer needed by an image consumer, this sync_pt is signaled so that image producers know that it is okay to write into the buffer again.
  • sync_fence: a collection of sync_pts that often have different sync_timeline parents (such as for the display controller and GPU). This allows multiple consumers or producers to signal that they are using a buffer and allows this information to be communicated with one function parameter. Fences are backed by a file descriptor and can be passed from kernel-space to user-space. For instance, a fence can contain two sync_pts that signify when two separate image consumers are done reading a buffer. When the fence is signaled, the image producers know that both consumers are done consuming.

To implement explicit synchronization, you need to provide the following:

  • A kernel-space driver that implements a synchronization timeline for a particular piece of hardware. In general, any driver that accesses or communicates with the hardware composer needs to be fence-aware. See the system/core/include/sync/sync.h file for more implementation details. The system/core/libsync directory includes a library to communicate with the kernel-space driver (see the sketch after this list).
  • A hardware composer HAL module (version 1.1 or later) that supports the new synchronization functionality. You will need to provide  the appropriate synchronization fences as parameters to the set() and prepare() functions in the HAL. As a last resort, you can pass in -1 for the file descriptor parameters if you cannot support explicit synchronization for some reason. This is not recommended, however.
  • Two GL-specific extensions related to fences, EGL_ANDROID_native_fence_sync and EGL_ANDROID_wait_sync, along with fence support incorporated into your graphics drivers.
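
From user space, fences are manipulated through the helpers declared in system/core/include/sync/sync.h. The sketch below shows the two most common operations; acquire_fd and the fence names are illustrative, and a value of -1 conventionally means "no fence."

#include <unistd.h>
#include <sync/sync.h>

/* Wait (with a timeout in milliseconds) until the producer has finished
 * writing the buffer; only then is it safe to read from it. */
int wait_for_buffer(int acquire_fd) {
    if (acquire_fd < 0)
        return 0;           /* -1 means "no fence": already safe */
    int err = sync_wait(acquire_fd, 3000);
    close(acquire_fd);      /* fences are plain file descriptors */
    return err;
}

/* Combine two consumers' release fences into a single fence that the
 * producer can wait on before reusing the buffer. */
int merge_release_fences(int fd1, int fd2) {
    int merged = sync_merge("merged-release", fd1, fd2);
    close(fd1);
    close(fd2);
    return merged;
}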
