An experiment by a master.
http://mer-project.blogspot.se/2013/04/wayland-utilizing-android-gpu-drivers.html
In this blog series, I will be presenting a solution that I've developed that enables the use of Wayland on top of Android hardware adaptations, specifically the GPU drivers, but without actually requiring the OS to be Bionic based. This is part 1.
This work was and is done as part of my job as Chief Research Engineer at Jolla, which develops Sailfish OS, a mobile-optimized operating system that combines the flexibility, ubiquity and stability of the Linux core with a cutting-edge user experience built with the renowned Qt platform.
The views and opinions expressed in this blog series are my own and not that of my employer.
At the end of the series, the aim is to have finished cleaning up the proof of concept code and published it under a "LGPLv2.1 only" license, for the benefit of many different communities and projects (Sailfish, OpenWebOS, Qt Project, KDE, GNOME, Hawaii, Nemo Mobile, Mer Core based projects, EFL, etc).
[Image: QML compositor, libhybris, Wayland on top of Qualcomm GPU Android drivers]
The blog series seeks to explain and document the solution and many aspects about non-Android systems, Wayland and Android GPU drivers that are not widely known.
This work is done in the hope that it will attract more contribution and collaboration, bringing this solution and Wayland in general into wider use across the open source ecosystem, and letting projects build their OSes on the large selection of reference device designs out there.
Why am I not releasing code today? Because code alone doesn't foster collaboration. There's more to involving contributors in development - such as explaining why things are the way they are. It's also my own way of making sure I document the code and clean it up, to make it easier for people to get involved.
Now, let's get to it.
The grim situation in mobile hardware adaptation availability
One of the first things somebody with a traditional Linux background realizes when trying to make a mobile device today and meeting with an ODM is that 99% of chipset vendors - and hence ODMs - will only offer Android hardware adaptations to go along with their device designs.
When you ask about X11 support within the GPU drivers or even Wayland they'll often look blankly at you and wonder why anybody would want to do anything else than Android based systems. And when you go into details they'll either tell you it can't be done - or charge you a massive cost to have it done.
This means that non-Android OSes and devices will not be able to take advantage of the huge selection of (often low-cost) device designs out there, increasing time to market and R&D costs massively.
Libhybris
In August 2012, I published my initial prototype for 'libhybris'. What is libhybris? Libhybris is a solution that allows non-Android systems - glibc-based systems, which most non-Android systems are - to utilize shared objects (libraries) built for Android. In practice this means that you can leverage things like OpenGL ES 2.0 and other hardware interfaces provided within Android hardware adaptations.
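From the application's point of view the result is deliberately unremarkable: libhybris provides glibc-side libEGL/libGLESv2 wrappers that forward calls into the Bionic-built driver, so a client just uses the standard Khronos APIs. Here's a minimal sketch of what that looks like (error handling omitted; the native window part is left out, as it's the subject of the ANativeWindow discussion further down):

```c
/* Sketch of a glibc-based client using EGL through libhybris.
 * Nothing here is libhybris-specific: the standard EGL API is what
 * the application sees, while the Android GPU driver does the
 * actual work underneath. */
#include <EGL/egl.h>

int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint count;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &count);

    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

    /* To actually render you'd create a window surface here, which is
     * where the native window (ANativeWindow on Android drivers) comes
     * into play - see below. */
    (void)ctx;
    return 0;
}
```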
I had initially developed libhybris in my idle hours at home, and the big question you might have is: why did I open source it instead of keeping it to myself and profiting from it, as it obviously was the holy grail for non-Android systems?
The simple answer is this: by working together on open source code, it would help accelerate the development of libhybris and testing of the software for everybody's mutual benefit.
I didn't feel good about libhybris initially; it's not the most elegant solution to the problem: many around me in the open source community were and are fighting to have chipset vendors provide Wayland or X11 adaptations for mobile chipsets - or even GPU drivers for non-Android systems in the first place.
But I felt that this was the road that had to be taken before non-Android systems became completely irrelevant in the bigger picture. Once non-Android devices ship in volume again, we can have our own dedicated hardware adaptations again.
Open sourcing worked quite well - a small group of people got together, tested it, improved it and got it running on a number of different chipsets - thanks to OpenWebOS, Florian Haenel (heeen), Thomas Perl (thp), Simon Busch (morphis) and others. It turned libhybris from a late-night hacking project into a viable solution to build device OSes on top of - or even to run Android NDK applications with.
Now for a few words on my story with Wayland.
[Image: Screenshot from http://blog.qt.digia.com/blog/2011/03/18/multi-process-lighthouse/]
To get things working with Wayland, what I needed to do was figure out:
- How to render an image with OpenGL ES 2.0 into GPU buffers that I had under my control
- Share that GPU buffer with another process (the compositor)
- Include that GPU buffer as a texture in an OpenGL scenegraph - and display this to the screen (in the compositor)
- And for performance, flip a full screen GPU buffer straight to the screen, bypassing the need to do OpenGL rendering
To be able to render into a specific GPU buffer under your own control, you usually need to get inside the EGL/OpenGL ES implementation. On some chipsets it's possible to use specific EGL operations that allow shared (across two processes) images to be rendered into - such as on the Raspberry Pi.
In the EGL implementation, you should be able to follow the path of the buffer, its attributes (size, stride, bpp/format) and the point where the client has requested an eglSwapBuffers.
Even though buffer handling (creation, modification, etc.) went through a custom protocol, the same Wayland buffer-handling operations still applied to it. I didn't need to do anything extra in the compositor or the client for the buffers in particular - I could piggyback on the existing infrastructure in the Wayland protocol.
Wayland made it easy to put existing methods for the techniques/needs listed above to use, and made it possible to implement Wayland support for the chipset quickly.
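To make the compositor side of this concrete (the third point in the list above): the usual pattern is to wrap the client's GPU buffer in an EGLImage and bind that to a GL texture, so the compositor samples the client's pixels directly without copying. The sketch below assumes the driver exposes the EGL_KHR_image_base and GL_OES_EGL_image extensions; how the EGLImage itself gets created from the received buffer is exactly the driver-specific part (with Android drivers it can be created from a gralloc-backed native buffer) and is left out here.

```c
/* Sketch: turning a client GPU buffer (already wrapped in an EGLImage)
 * into a compositor-side texture. Assumes EGL_KHR_image_base and
 * GL_OES_EGL_image are available. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint texture_from_client_buffer(EGLImageKHR image)
{
    static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture;
    if (!image_target_texture)
        image_target_texture = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Bind the EGLImage to the texture - no copy; when the compositor
     * draws its scenegraph it samples the client's buffer directly. */
    image_target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```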
Now, to something a little different, but quite related:
Android and its ANativeWindow
When you use eglCreateWindowSurface - that is, when creating an EGL window surface for later GL rendering with Android drivers - you have to provide a reference to the native window you want to render within. In Android, the native window type is ANativeWindow.
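In code that boils down to a single call; the only Android-specific part is what gets cast into EGLNativeWindowType. A small sketch, assuming you're building against EGL headers where EGLNativeWindowType is a pointer type (as with the Android/hybris headers) and that the window itself comes from your own windowing integration:

```c
#include <EGL/egl.h>

struct ANativeWindow;  /* provided by your own windowing integration */

EGLSurface create_surface(EGLDisplay dpy, EGLConfig config,
                          struct ANativeWindow *win)
{
    /* With Android GPU drivers, EGLNativeWindowType is effectively an
     * ANativeWindow*, so this cast is what hands your own window
     * implementation over to the driver. */
    return eglCreateWindowSurface(dpy, config,
                                  (EGLNativeWindowType)win, NULL);
}
```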
As you may know, Android's graphics stack is roughly: application -> libEGL, which sends GPU buffers to SurfaceFlinger, which either flings the buffer straight to the screen or composites it with OpenGL (again through libEGL).
Why not just bake all the code that talks to SurfaceFlinger into the EGL stack? The answer is that you sometimes need to target multiple types of outputs - be it video/camera streaming to another process, framebuffer rendering, output to a HW composer, or communication with SurfaceFlinger.
One of the good things about the Android graphics architecture is that, through the use of ANativeWindow, the code that does this work is kept outside the EGL drivers - that is, open source and available for customization for each purpose. That means the EGL/OpenGL drivers are less tied to the Android version itself (the ANativeWindow API version changes occasionally) and can easily be reused in binary form across upgrades.
ANativeWindow provides handy hooks for a windowing system to manage GPU buffers (queueBuffer - 'here's a buffer, I'm done rendering'; dequeueBuffer - 'I need a buffer to render into'; cancelBuffer - 'whoops, I didn't need it anyway'; and so on) - and it gives you the methods you need to accomplish things, as I did on PowerVR SGX.
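To give a feel for those hooks, here's a simplified paraphrase of the relevant function pointers - not a copy of Android's system/window.h, whose exact fields and signatures vary between Android versions (which is exactly the API churn mentioned above):

```c
/* Simplified paraphrase of the ANativeWindow hooks; see Android's
 * system/window.h in your Android tree for the real definition. */
struct ANativeWindowBuffer;   /* a gralloc-backed GPU buffer */

struct my_native_window {
    /* "I need a buffer to render into" - hand the driver a free
     * gralloc buffer. */
    int (*dequeueBuffer)(struct my_native_window *win,
                         struct ANativeWindowBuffer **buffer);

    /* "I'm done rendering, here's the buffer" - typically invoked by
     * the driver from eglSwapBuffers(); in a Wayland client this is
     * where the buffer gets forwarded to the compositor. */
    int (*queueBuffer)(struct my_native_window *win,
                       struct ANativeWindowBuffer *buffer);

    /* "Whoops, I didn't need it anyway" - return the buffer unused. */
    int (*cancelBuffer)(struct my_native_window *win,
                        struct ANativeWindowBuffer *buffer);

    /* "Tell me about yourself" - width, height, pixel format, etc. */
    int (*query)(const struct my_native_window *win, int what, int *value);
};
```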
This ANativeWindow interface is the entry point used to implement Wayland on top of Android GPU drivers on glibc-based systems. Some fantastic work in this area has already been done by Pekka Paalanen (pq) as part of his work for Collabora Ltd. (Telepathy, GStreamer, WebKit, X11 experts), which proved that this is possible. Parts of the solution I will publish are based on their work - their work was groundbreaking in this field and made all this possible.
A note on gralloc and native handles
If you do a little bit of detective work, you'll find out that buffer_handle_t is actually defined as a native_handle_t* .. and what are native handles?
The structure is practically this: a count of file descriptors and a count of integers, followed by the actual file descriptors and integers (a sketch follows below).
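For reference, this is roughly how it's defined in Android's cutils/native_handle.h (check the header in your Android tree for the authoritative version):

```c
/* Roughly the definition from Android's cutils/native_handle.h. */
typedef struct native_handle
{
    int version;   /* sizeof(native_handle_t) */
    int numFds;    /* number of file descriptors at &data[0] */
    int numInts;   /* number of ints at &data[numFds] */
    int data[0];   /* numFds file descriptors followed by numInts ints */
} native_handle_t;
```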
So how do you share a buffer across two processes? You have to employ something as obscure as "file descriptor passing". This page describes it as "socket magic", which it truly is: it takes a file descriptor from one process and makes it available in another.
Android GPU buffers typically consist of buffer metadata (handle, size, bpp, usage hints, GPU buffer handle) plus file descriptors that map GPU memory or other shared memory into the process. To make the buffer appear in two processes, you pass the handle information and the related file descriptors.
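For the curious, this is what the "socket magic" boils down to - a minimal sketch of sending a single file descriptor over an already-connected UNIX domain socket with sendmsg() and SCM_RIGHTS (the receiver does the mirror-image recvmsg() dance and gets a new descriptor referring to the same kernel object):

```c
/* Minimal sketch of file descriptor passing: send one fd over a
 * connected UNIX domain socket as SCM_RIGHTS ancillary data. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd)
{
    char dummy = 'x';   /* at least one byte of real data must be sent */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    char cmsg_buf[CMSG_SPACE(sizeof(int))];
    memset(cmsg_buf, 0, sizeof(cmsg_buf));

    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsg_buf, .msg_controllen = sizeof(cmsg_buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;               /* "I'm passing fds" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));  /* the fd to pass */

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```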
The good news, however, is that Wayland already supports file descriptor passing, so you don't have to write obscure code to handle it yourself in your custom Wayland compositor.
Conclusion
This concludes the first blog post in this series, giving a bit of background on how Wayland, libhybris and Android GPU drivers can work together. The next blog post will talk more about the actual implementation of this work. The last blog post will talk about the direction of future work - and what you can do with it today, and how.
If you'd like to use, discuss and participate in the development of this solution, #libhybris on irc.freenode.net is the best place to be - a neutral place for development across different OS efforts.