A single Android device can have multiple cameras. Each camera is a CameraDevice, and a CameraDevice
can output more than one stream simultaneously.
Note: This page uses camera2 classes. We recommend using the CameraX Jetpack library except when your use case requires access to specific features available only in Camera2. Both CameraX and Camera2 work on Android 5.0 (API level 21) and higher.
One reason to output multiple streams is that each stream can be optimized for a specific use case: one stream might display a viewfinder while others take a photo or record a video. The streams act as parallel pipelines that process raw frames coming out of the camera, one frame at a time:
Figure 1. Illustration from Building a Universal Camera App (Google I/O ‘18)
Parallel processing means there can be performance limits depending on the available processing power of the CPU, GPU, or other hardware. If a pipeline can't keep up with the incoming frames, it starts dropping them.
Note that each pipeline has its own output format. The raw data coming in is automatically transformed into the appropriate output format by implicit logic associated with each pipeline.
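As a sketch of how two pipelines with different output formats might be declared, the snippet below pairs a preview `Surface` (taken from an assumed `SurfaceView` named `surfaceView`) with an `ImageReader` that receives JPEG-encoded frames; the sizes are illustrative assumptions, not values from this page:

```kotlin
import android.graphics.ImageFormat
import android.media.ImageReader
import android.view.Surface

// Sketch: two output targets, each backing its own pipeline.
// The preview pipeline delivers frames to a SurfaceView's Surface;
// the still-capture pipeline delivers JPEG frames to an ImageReader.
// (surfaceView and the 4032x3024 size are assumptions for illustration.)
val previewSurface: Surface = surfaceView.holder.surface
val jpegReader = ImageReader.newInstance(
    /* width = */ 4032, /* height = */ 3024,
    ImageFormat.JPEG, /* maxImages = */ 2
)
val outputs: List<Surface> = listOf(previewSurface, jpegReader.surface)
```

Each `Surface` in `outputs` corresponds to one pipeline, and the format transformation (raw sensor data to preview frames or JPEG) happens inside the camera subsystem.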
You can use the CameraDevice to create a CameraCaptureSession, which is specific to that CameraDevice. A CameraDevice must receive a frame configuration for each raw frame via the CameraCaptureSession. The configuration specifies camera attributes such as autofocus, aperture, effects, and exposure. Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this configuration is called the active configuration.
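A minimal sketch of creating such a session, assuming `cameraDevice` is an already-opened `CameraDevice` and `outputs` lists every `Surface` any request in the session may target (on API 28+ the `SessionConfiguration` overload of `createCaptureSession` is preferred over this classic form):

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice

// Sketch: creating a CameraCaptureSession for a fixed set of output
// Surfaces. The set of pipelines is locked in at this point.
cameraDevice.createCaptureSession(
    outputs,
    object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            // The session is ready; capture requests can now be submitted.
        }

        override fun onConfigureFailed(session: CameraCaptureSession) {
            // The requested output combination is not supported.
        }
    },
    /* handler = */ null  // deliver callbacks on the current thread's looper
)
```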
A CameraCaptureSession describes all the possible pipelines bound to the CameraDevice. Once a session is created, you cannot add or remove pipelines. The CameraCaptureSession maintains a queue of CaptureRequests, which become the active configuration.
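The steps above can be sketched as follows, assuming `cameraDevice`, an `onConfigured` session named `session`, and a `previewSurface` target. `TEMPLATE_PREVIEW` pre-populates settings suited to a viewfinder, and `setRepeatingRequest` keeps re-enqueueing the request so it stays the active configuration frame after frame:

```kotlin
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest

// Sketch: building a CaptureRequest and enqueueing it on the session.
// Each submitted request becomes the active configuration for one frame.
val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
    .apply {
        addTarget(previewSurface)  // the pipeline this request feeds
        set(
            CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE
        )
    }
    .build()
session.setRepeatingRequest(request, /* listener = */ null, /* handler = */ null)
```

For a one-off frame, such as a still capture, `session.capture(...)` submits a single request instead of a repeating one.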
A Captu