I. The CameraService side (libcameraservice.so)
1. CameraService related source code
/frameworks/av/services/camera/libcameraservice/CameraService.h
class CameraService :
public BinderService<CameraService>,
public virtual ::android::hardware::BnCameraService,
public virtual IBinder::DeathRecipient,
public camera_module_callbacks_t,
public virtual CameraProviderManager::StatusListener
{
...
static char const* getServiceName() { return "media.camera"; }
class Client : public hardware::BnCamera, public BasicClient
{
...
int mCameraId; // All API1 clients use integer camera IDs
}; // class Client
}
frameworks/av/services/camera/libcameraservice/CameraService.cpp
Status CameraService::connect(
const sp<ICameraClient>& cameraClient,
int cameraId,
const String16& clientPackageName,
int clientUid,
int clientPid,
/*out*/
sp<ICamera>* device) {
}
As shown above, CameraService inherits from BnCameraService. The BnCameraService code cannot be found in the source tree because it is auto-generated from AIDL at the C++ level;
to read the generated BnCameraService you have to look in the build output. The AIDL source is /frameworks/av/camera/aidl/android/hardware/ICameraService.aidl.
ICameraService.aidl generates three headers: ICameraService.h, BnCameraService.h and BpCameraService.h.
ICameraService.h declares the ICameraService class, a pure interface class whose methods and enums mirror exactly what the AIDL file defines; it represents the business-interface role in the Binder mechanism.
class BnCameraService : public ::android::BnInterface<ICameraService>
class ICameraService : public ::android::IInterface
/frameworks/native/libs/binder/include/binder/IInterface.h
class IInterface : public virtual RefBase
From the inheritance relationships above: CameraService -> BnCameraService -> ICameraService -> IInterface -> RefBase, i.e. CameraService is a RefBase subclass, so it can be managed by smart pointers (sp/wp).
There is also a second inheritance chain: CameraService -> BnCameraService -> BnInterface -> BBinder -> IBinder.
The BnCameraService class
// out/soong/.intermediates/frameworks/av/camera/libcamera_client/android_x86_64_shared/gen/aidl/android/hardware/BnCameraService.h
class BnCameraService : public ::android::BnInterface<ICameraService> {
public:
explicit BnCameraService();
::android::status_t onTransact(uint32_t _aidl_code, const ::android::Parcel& _aidl_data, ::android::Parcel* _aidl_reply, uint32_t _aidl_flags) override;
}; // class BnCameraService
The BnInterface class
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
virtual const String16& getInterfaceDescriptor() const;
protected:
typedef INTERFACE BaseInterface;
virtual IBinder* onAsBinder();
};
/frameworks/native/libs/binder/include/binder/Binder.h
class BBinder : public IBinder
{...
}
/frameworks/native/libs/binder/include/binder/IBinder.h
class IBinder : public virtual RefBase
{...
}
The ICameraService class generated from ICameraService.aidl
// out/soong/.intermediates/frameworks/av/camera/libcamera_client/android_x86_64_shared/gen/aidl/android/hardware/ICameraService.h
DENIED = 1,
ERROR_ALREADY_EXISTS = 2,
ERROR_ILLEGAL_ARGUMENT = 3,
ERROR_DISCONNECTED = 4,
ERROR_TIMED_OUT = 5,
ERROR_DISABLED = 6,
……
virtual ::android::binder::Status getNumberOfCameras(int32_t type, int32_t* _aidl_return) = 0;
virtual ::android::binder::Status getCameraInfo(int32_t cameraId, ::android::hardware::CameraInfo* _aidl_return) = 0;
virtual ::android::binder::Status connect(const ::android::sp<::android::hardware::ICameraClient>& client, int32_t cameraId, const ::android::String16& opPackageName, int32_t clientUid, int32_t clientPid, ::android::sp<::android::hardware::ICamera>* _aidl_return) = 0;
……
virtual ::android::binder::Status setTorchMode(const ::android::String16& cameraId, bool enabled, const ::android::sp<::android::IBinder>& clientBinder) = 0;
virtual ::android::binder::Status notifySystemEvent(int32_t eventId, const ::std::vector<int32_t>& args) = 0;
virtual ::android::binder::Status notifyDeviceStateChange(int64_t newState) = 0;
}; // class ICameraService
The IInterface class
class IInterface : public virtual RefBase
{
public:
IInterface();
static sp<IBinder> asBinder(const IInterface*);
static sp<IBinder> asBinder(const sp<IInterface>&);
protected:
virtual ~IInterface();
virtual IBinder* onAsBinder() = 0;
};
sp<IBinder> IInterface::asBinder(const IInterface* iface)
{
if (iface == nullptr) return nullptr;
return const_cast<IInterface*>(iface)->onAsBinder();
}
// static
sp<IBinder> IInterface::asBinder(const sp<IInterface>& iface)
{
if (iface == nullptr) return nullptr;
return iface->onAsBinder();
}
From the BnCameraService and BnInterface definitions above, BnCameraService (and therefore CameraService) inherits both ICameraService and BBinder,
that is, CameraService implements the camera business interface as well as the Binder communication capability.
This reflects Binder's design: a Binder client/server object must carry both the interface role and the Binder communication role, and the two must be convertible into each other.
IInterface::asBinder in the IInterface class provides the conversion from an interface object to its binder.
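As a small illustration of the asBinder direction (this is my own sketch, not AOSP code): given any interface object, IInterface::asBinder() hands back the underlying IBinder. This is exactly the pattern CameraService::Client's constructor uses later with IInterface::asBinder(cameraClient); the opposite direction, IBinder to interface, is done with interface_cast<>. The header name is the AIDL-generated one from the build output.
#include <binder/IInterface.h>
#include <android/hardware/ICameraClient.h>   // AIDL-generated header (build output)
using namespace android;
// Convert the business interface back to its Binder communication object.
// On the service side (BnCameraClient) this yields the BBinder itself;
// on the proxy side (BpCameraClient) it yields the remote BpBinder handle.
sp<IBinder> callbackBinderOf(const sp<hardware::ICameraClient>& cameraClient) {
    return IInterface::asBinder(cameraClient);
}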
2. Registering CameraService as a system service
The camera server process is named cameraserver; the corresponding executable is /system/bin/cameraserver.
In the Android.mk below, LOCAL_INIT_RC := cameraserver.rc causes cameraserver.rc to be installed as /system/etc/init/cameraserver.rc,
and the cameraserver binary to be built. At boot, the init process scans the rc files under /system/etc/init/, creates the cameraserver process and starts the cameraserver program.
/frameworks/av/camera/cameraserver/Android.mk
LOCAL_SRC_FILES:= \
main_cameraserver.cpp
LOCAL_MODULE:= cameraserver
LOCAL_INIT_RC := cameraserver.rc
...
include $(BUILD_EXECUTABLE)
/frameworks/av/camera/cameraserver/cameraserver.rc
service cameraserver /system/bin/cameraserver
class main
user cameraserver
group audio camera input drmrpc
ioprio rt 4
writepid /dev/cpuset/camera-daemon/tasks /dev/stune/top-app/tasks
CameraService is instantiated by the cameraserver process and registered with the system's servicemanager; the cameraserver process itself is configured to start at boot.
main() calls CameraService::instantiate(). Since CameraService inherits BinderService, this resolves to BinderService::instantiate(),
which calls publish(). In publish(), new SERVICE() corresponds to new CameraService(); addService() takes a strong sp reference to the CameraService object,
so ultimately an sp<CameraService> is registered with IServiceManager. SERVICE::getServiceName() here is the getServiceName() defined in the CameraService class in CameraService.h, which returns "media.camera".
See the code below:
/frameworks/av/camera/cameraserver/main_cameraserver.cpp
int main(int argc __unused, char** argv __unused)
{
signal(SIGPIPE, SIG_IGN);
// Set 3 threads for HIDL calls
hardware::configureRpcThreadpool(3, /*willjoin*/ false);
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p", sm.get());
CameraService::instantiate();
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
/frameworks/native/include/binder/BinderService.h
namespace android {
template<typename SERVICE>
class BinderService
{
public:
static status_t publish(bool allowIsolated = false) {
sp<IServiceManager> sm(defaultServiceManager());
return sm->addService(
String16(SERVICE::getServiceName()),
new SERVICE(), allowIsolated);
}
static void publishAndJoinThreadPool(bool allowIsolated = false) {
publish(allowIsolated);
joinThreadPool();
}
static void instantiate() { publish(); }
static status_t shutdown() { return NO_ERROR; }
private:
static void joinThreadPool() {
sp<ProcessState> ps(ProcessState::self());
ps->startThreadPool();
ps->giveThreadPoolName();
IPCThreadState::self()->joinThreadPool();
}
};
}; // namespace android
/frameworks/native/libs/binder/IServiceManager.cpp
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{...}
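As a minimal, illustrative counterpart to addService (my own sketch, not taken from the AOSP sources), a native client retrieves the service registered under "media.camera" like this and gets back a BpCameraService proxy:
#include <binder/IServiceManager.h>
#include <binder/IInterface.h>
#include <android/hardware/ICameraService.h>   // AIDL-generated header (build output)
using namespace android;
sp<hardware::ICameraService> getCameraService() {
    // Look up the IBinder that cameraserver published under "media.camera"
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.camera"));
    if (binder == nullptr) return nullptr;
    // Convert the communication object (IBinder) into the business interface
    return interface_cast<hardware::ICameraService>(binder);
}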
3. CameraService initialization
3.1 As described in section 2, at boot an sp<CameraService> object is registered with IServiceManager. As covered in the article on Android smart pointers,
when the strong reference count first goes from 0 to 1, RefBase calls the subclass's onFirstRef() method. So the moment CameraService is first strongly referenced, onFirstRef() runs, and onFirstRef() calls enumerateProviders().
void CameraService::onFirstRef()
{
...
res = enumerateProviders();
...
}
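A minimal standalone sketch (unrelated to the camera classes themselves) of the RefBase behavior being relied on here: onFirstRef() fires the first time an object is wrapped in a strong pointer, which is why new CameraService() inside addService() ends up running enumerateProviders().
#define LOG_TAG "RefBaseDemo"
#include <utils/Log.h>
#include <utils/RefBase.h>
using android::RefBase;
using android::sp;
class Demo : public RefBase {
    void onFirstRef() override {
        // Called exactly once, when the strong count first goes from 0 to 1.
        ALOGI("Demo::onFirstRef");
    }
};
void test() {
    Demo* raw = new Demo();  // onFirstRef has NOT run yet
    sp<Demo> strong = raw;   // first strong reference: onFirstRef() runs here
}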
3.2 enumerateProviders() first calls mCameraProviderManager->initialize(), then fetches all camera ids and calls onDeviceStatusChanged() for each.
status_t CameraService::enumerateProviders() {
...
mCameraProviderManager = new CameraProviderManager();
res = mCameraProviderManager->initialize(this);
mNumberOfCameras = mCameraProviderManager->getCameraCount(); // cache the number of cameras
mNumberOfNormalCameras =
mCameraProviderManager->getAPI1CompatibleCameraCount(); // cache the number of API1-compatible cameras
for (auto& cameraId : mCameraProviderManager->getCameraDeviceIds()) {
String8 id8 = String8(cameraId.c_str());
onDeviceStatusChanged(id8, CameraDeviceStatus::PRESENT);
..
}
}
3.3 In CameraProviderManager::initialize(), proxy is the ServiceInteractionProxy struct object.
3.3.1 It calls mServiceProxy->registerForNotifications, i.e. hardware::camera::provider::V2_4::ICameraProvider::registerForNotifications(serviceName, notification), where notification is the CameraProviderManager itself.
When CameraProviderManager receives a notification about a CameraProvider service, it relays it back to CameraService.
3.3.2 It calls addProviderLocked(kLegacyProviderName, /*expected*/ false), i.e. CameraProviderManager::addProviderLocked.
Inside that method, interface = mServiceProxy->getService(newProvider) calls the HardwareServiceInteractionProxy struct's
getService, i.e. hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName); newProvider is the string "legacy/0",
and interface is the CameraProvider that was registered as a system service (see 5).
sp<ProviderInfo> providerInfo = new ProviderInfo(newProvider, interface, this) wraps interface (the obtained sp<provider::V2_4::ICameraProvider>) and the string "legacy/0" into a ProviderInfo object (ProviderInfo also holds mDevices, the list of CameraDevice objects from the CameraProvider), and the ProviderInfo is added to the mProviders list.
The CameraProviderManager::ProviderInfo::ProviderInfo constructor assigns interface to the mInterface member.
addProviderLocked also calls providerInfo->initialize(). CameraProviderManager::ProviderInfo::initialize()
first calls mInterface->getCameraIdList to obtain the camera id list and pushes each camera device name into the devices list via devices.push_back(cameraDeviceNames[i]).
mInterface->getCameraIdList goes to CameraProvider::getCameraIdList in /hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp, which ultimately calls the HAL interface.
It then iterates the devices list and calls addDevice(device, hardware::camera::common::V1_0::CameraDeviceStatus::PRESENT, &id).
CameraProviderManager::ProviderInfo::addDevice first parses major, minor, type and id out of the camera device name string; the format is "device@<major>.<minor>/<type>/<id>".
Then, for HAL1 it calls initializeDeviceInfo<DeviceInfo1>(name, mProviderTagid, id, minor);
for HAL3 it calls initializeDeviceInfo<DeviceInfo3>(name, mProviderTagid, id, minor).
initializeDeviceInfo first calls auto cameraInterface = getDeviceInterface<typename DeviceInfoT::InterfaceT>(name):
* For HAL1 this is CameraProviderManager::ProviderInfo::getDeviceInterface<device::V1_0::ICameraDevice>(const std::string &name).
getDeviceInterface calls mInterface->getCameraDeviceInterface_V1_x, i.e.
CameraProvider::getCameraDeviceInterface_V1_x in /hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp.
CameraProvider::getCameraDeviceInterface_V1_x executes sp<android::hardware::camera::device::V1_0::implementation::CameraDevice> device =
new android::hardware::camera::device::V1_0::implementation::CameraDevice(mModule, cameraId, mCameraDeviceNames), so cameraInterface is the CameraProvider-side CameraDevice implementation object. For HAL1 the CameraDevice class lives in /hardware/interfaces/camera/device/1.0/default/CameraDevice.cpp.
Because this is HAL1, DeviceInfoT is DeviceInfo1 and DeviceInfoT::InterfaceT is DeviceInfo1::InterfaceT, whose InterfaceT typedef is hardware::camera::device::V1_0::ICameraDevice. A DeviceInfoT object is constructed from the device name obtained from the HAL: new DeviceInfoT(name, tagId, id, minorVersion, resourceCost, cameraInterface), i.e. new DeviceInfo1(name, tagId, id, minorVersion, resourceCost, cameraInterface), and the DeviceInfo1 object is added to the mDevices list.
* For HAL3 this is CameraProviderManager::ProviderInfo::getDeviceInterface<device::V3_2::ICameraDevice>(const std::string &name). getDeviceInterface calls mInterface->getCameraDeviceInterface_V3_x, i.e.
CameraProvider::getCameraDeviceInterface_V3_x in /hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp, which executes sp<android::hardware::camera::device::V3_2::implementation::CameraDevice> deviceImpl =
new android::hardware::camera::device::V3_2::implementation::CameraDevice(mModule, cameraId, mCameraDeviceNames), so cameraInterface is the CameraProvider-side CameraDevice implementation object. For HAL3 the CameraDevice class lives in /hardware/interfaces/camera/device/3.2/default/CameraDevice.cpp.
Because this is HAL3, DeviceInfoT is DeviceInfo3 and DeviceInfoT::InterfaceT is DeviceInfo3::InterfaceT, whose InterfaceT typedef is hardware::camera::device::V3_2::ICameraDevice. A DeviceInfoT object is constructed from the device name obtained from the HAL: new DeviceInfoT(name, tagId, id, minorVersion, resourceCost, cameraInterface), i.e. new DeviceInfo3(name, tagId, id, minorVersion, resourceCost, cameraInterface), and the DeviceInfo3 object is added to the mDevices list.
PS: from CameraProviderManager::ProviderInfo::DeviceInfo1::DeviceInfo1 we can see that cameraInterface is assigned to the mInterface defined in struct DeviceInfo1, i.e. DeviceInfo1's mInterface is the CameraProvider-side android::hardware::camera::device::V1_0::implementation::CameraDevice object.
Likewise, DeviceInfo3's mInterface is the CameraProvider-side android::hardware::camera::device::V3_2::implementation::CameraDevice object.
hardware::camera::provider::V2_4::ICameraProvider here corresponds to the android.hardware.camera.provider@2.4 module; CameraProvider is what actually talks to the HAL-layer .so.
/frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.h
class CameraProviderManager : virtual public hidl::manager::V1_0::IServiceNotification {
struct HardwareServiceInteractionProxy : public ServiceInteractionProxy {
virtual bool registerForNotifications(
const std::string &serviceName,
const sp<hidl::manager::V1_0::IServiceNotification>
¬ification) override {
return hardware::camera::provider::V2_4::ICameraProvider::registerForNotifications(
serviceName, notification);
}
virtual sp<hardware::camera::provider::V2_4::ICameraProvider> getService(
const std::string &serviceName) override {
return hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName);
}
};
status_t initialize(wp<StatusListener> listener,
ServiceInteractionProxy *proxy = &sHardwareServiceInteractionProxy);
static HardwareServiceInteractionProxy sHardwareServiceInteractionProxy;
struct ProviderInfo :
virtual public hardware::camera::provider::V2_4::ICameraProviderCallback,
virtual public hardware::hidl_death_recipient
{
const std::string mProviderName;
const sp<hardware::camera::provider::V2_4::ICameraProvider> mInterface;
...
struct DeviceInfo {
...
};
std::vector<std::unique_ptr<DeviceInfo>> mDevices;
struct DeviceInfo1 : public DeviceInfo {
typedef hardware::camera::device::V1_0::ICameraDevice InterfaceT;
const sp<InterfaceT> mInterface;
...
}
struct DeviceInfo3 : public DeviceInfo {
typedef hardware::camera::device::V3_2::ICameraDevice InterfaceT;
const sp<InterfaceT> mInterface;
...
}
}
...
std::vector<sp<ProviderInfo>> mProviders;
}
/frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp
const std::string kLegacyProviderName("legacy/0");
// Slash-separated list of provider types to consider for use via the old camera API
const std::string kStandardProviderTypes("internal/legacy");
status_t CameraProviderManager::initialize(wp<CameraProviderManager::StatusListener> listener,
ServiceInteractionProxy* proxy) {
mListener = listener;
mServiceProxy = proxy;
// Registering will trigger notifications for all already-known providers
bool success = mServiceProxy->registerForNotifications(
/* instance name, empty means no filter */ "",
this);
// See if there's a passthrough HAL, but let's not complain if there's not
addProviderLocked(kLegacyProviderName, /*expected*/ false);
}
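CameraProviderManager is itself the hidl::manager::V1_0::IServiceNotification passed to registerForNotifications(), so when a camera provider instance (e.g. "legacy/0") registers with hwservicemanager its onRegistration() callback fires and the provider gets added. The following is only a simplified sketch of that callback; the real method also notifies the StatusListener and handles errors.
// Simplified sketch; signature per android.hidl.manager@1.0::IServiceNotification.
hardware::Return<void> CameraProviderManager::onRegistration(
        const hardware::hidl_string& /*fqName*/,
        const hardware::hidl_string& name,   // provider instance name, e.g. "legacy/0"
        bool /*preexisting*/) {
    std::lock_guard<std::mutex> lock(mInterfaceMutex);
    // A new ICameraProvider appeared: wrap it in a ProviderInfo and enumerate its devices.
    addProviderLocked(name);
    return hardware::Void();
}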
status_t CameraProviderManager::addProviderLocked(const std::string& newProvider, bool expected) {
...
sp<provider::V2_4::ICameraProvider> interface;
interface = mServiceProxy->getService(newProvider);
sp<ProviderInfo> providerInfo =
new ProviderInfo(newProvider, interface, this);
status_t res = providerInfo->initialize();
if (res != OK) {
return res;
}
mProviders.push_back(providerInfo);
return OK;
}
CameraProviderManager::ProviderInfo::ProviderInfo(
const std::string &providerName,
sp<provider::V2_4::ICameraProvider>& interface,
CameraProviderManager *manager) :
mProviderName(providerName),
// This mInterface is the sp<hardware::camera::provider::V2_4::ICameraProvider> mInterface defined in struct ProviderInfo
mInterface(interface),
mProviderTagid(generateVendorTagId(providerName)),
mUniqueDeviceCount(0),
mManager(manager) {
(void) mManager;
}
status_t CameraProviderManager::ProviderInfo::initialize() {
hardware::Return<Status> status = mInterface->setCallback(this);
// Get initial list of camera devices, if any
std::vector<std::string> devices;
hardware::Return<void> ret = mInterface->getCameraIdList([&status, &devices](
Status idStatus,
const hardware::hidl_vec<hardware::hidl_string>& cameraDeviceNames) {
status = idStatus;
if (status == Status::OK) {
for (size_t i = 0; i < cameraDeviceNames.size(); i++) {
devices.push_back(cameraDeviceNames[i]);
}
} });
...
sp<StatusListener> listener = mManager->getStatusListener();
for (auto& device : devices) {
std::string id;
status_t res = addDevice(device,
hardware::camera::common::V1_0::CameraDeviceStatus::PRESENT, &id);
if (res != OK) {
ALOGE("%s: Unable to enumerate camera device '%s': %s (%d)",
__FUNCTION__, device.c_str(), strerror(-res), res);
continue;
}
}
}
status_t CameraProviderManager::ProviderInfo::addDevice(const std::string& name,
CameraDeviceStatus initialStatus, /*out*/ std::string* parsedId) {
status_t res = parseDeviceName(name, &major, &minor, &type, &id);
switch (major) {
case 1:
deviceInfo = initializeDeviceInfo<DeviceInfo1>(name, mProviderTagid,
id, minor);
break;
...
}
mDevices.push_back(std::move(deviceInfo));
}
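The device name being parsed has the format "device@<major>.<minor>/<type>/<id>" (e.g. "device@3.2/internal/0"). A standalone, simplified equivalent of what parseDeviceName extracts could look like this (illustrative sketch, not the AOSP implementation):
#include <cstdio>
#include <cstdint>
#include <string>
// Parses "device@<major>.<minor>/<type>/<id>", e.g. "device@3.2/internal/0".
// Returns true on success. Simplified stand-in for
// CameraProviderManager::ProviderInfo::parseDeviceName().
static bool parseCameraDeviceName(const std::string& name,
        uint16_t* major, uint16_t* minor, std::string* type, std::string* id) {
    char typeBuf[32] = {0};
    char idBuf[32] = {0};
    unsigned int maj = 0, min = 0;
    if (std::sscanf(name.c_str(), "device@%u.%u/%31[^/]/%31s",
            &maj, &min, typeBuf, idBuf) != 4) {
        return false;   // not a well-formed camera device name
    }
    *major = static_cast<uint16_t>(maj);
    *minor = static_cast<uint16_t>(min);
    *type = typeBuf;
    *id = idBuf;
    return true;
}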
template<class DeviceInfoT>
std::unique_ptr<CameraProviderManager::ProviderInfo::DeviceInfo>
CameraProviderManager::ProviderInfo::initializeDeviceInfo(
const std::string &name, const metadata_vendor_id_t tagId,
const std::string &id, uint16_t minorVersion) const {
auto cameraInterface =
getDeviceInterface<typename DeviceInfoT::InterfaceT>(name);
return std::unique_ptr<DeviceInfo>(new DeviceInfoT(name, tagId, id, minorVersion, resourceCost,
cameraInterface));
}
sp<device::V1_0::ICameraDevice>
CameraProviderManager::ProviderInfo::getDeviceInterface
<device::V1_0::ICameraDevice>(const std::string &name) const {
sp<device::V1_0::ICameraDevice> cameraInterface;
hardware::Return<void> ret;
ret = mInterface->getCameraDeviceInterface_V1_x(name, [&status, &cameraInterface](
Status s, sp<device::V1_0::ICameraDevice> interface) {
status = s;
cameraInterface = interface;
});
return cameraInterface;
}
CameraProviderManager::ProviderInfo::DeviceInfo1::DeviceInfo1(const std::string& name,
const metadata_vendor_id_t tagId, const std::string &id,
uint16_t minorVersion,
const CameraResourceCost& resourceCost,
sp<InterfaceT> interface) :
DeviceInfo(name, tagId, id, hardware::hidl_version{1, minorVersion},
resourceCost),
// This mInterface is the sp<InterfaceT> mInterface defined in struct DeviceInfo1
// InterfaceT is a typedef for hardware::camera::device::V1_0::ICameraDevice
mInterface(interface) {
// Get default parameters and initialize flash unit availability
// Requires powering on the camera device
hardware::Return<Status> status = mInterface->open(nullptr);
hardware::Return<void> ret;
ret = mInterface->getParameters([this](const hardware::hidl_string& parms) {
mDefaultParameters.unflatten(String8(parms.c_str()));
});
...
}
3.4 hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName) ultimately reaches, through HIDL, the HIDL_FETCH_ICameraProvider function in CameraProvider.cpp.
/hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp
ICameraProvider* HIDL_FETCH_ICameraProvider(const char* name) {
if (strcmp(name, kLegacyProviderName) != 0) {
return nullptr;
}
CameraProvider* provider = new CameraProvider();
if (provider == nullptr) {
ALOGE("%s: cannot allocate camera provider!", __FUNCTION__);
return nullptr;
}
if (provider->isInitFailed()) {
ALOGE("%s: camera provider init failed!", __FUNCTION__);
delete provider;
return nullptr;
}
return provider;
}
4. Connecting to CameraService
4.1 Connecting and creating the client
4.1.1 For camera1, the app calls the API_1 entry point Camera.open(), which ultimately calls CameraService::connect().
In CameraService::connect(const sp<ICameraClient>& cameraClient, int cameraId, const String16& clientPackageName, int clientUid, int clientPid, /*out*/ sp<ICamera>* device), the device out-parameter is the handle that the client-side Camera object will use. connect() then calls
CameraService::connectHelper<ICameraClient,Client>(cameraClient, id, CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, clientPid, API_1, /*legacyMode*/ false, /*shimUpdateOnly*/ false, /*out*/client).
4.1.2 For camera2, the app calls the API_2 entry point CameraManager.openCamera(cameraId, mDeviceStateCallback, mHandler). CameraManager does not go through cameraclient; it obtains the cameraService AIDL object (ICameraService.aidl) directly.
CameraManager.openCamera(String cameraId, final CameraDevice.StateCallback callback, Handler handler) calls openCameraForUid(cameraId, callback, CameraDeviceImpl.checkAndWrapHandler(handler), USE_CALLING_UID).
CameraManager.openCameraForUid(String cameraId, final CameraDevice.StateCallback callback, ...) calls openCameraDeviceUserAsync(cameraId, callback, executor, clientUid, oomScoreOffset).
CameraManager.openCameraDeviceUserAsync(String cameraId, CameraDevice.StateCallback callback, ...) does the following:
#1. Create the CameraDeviceImpl object:
deviceImpl = new android.hardware.camera2.impl.CameraDeviceImpl(cameraId, callback, ...);
CameraDeviceImpl.mDeviceCallback is the CameraDevice.StateCallback listener the app passed to CameraManager.openCamera.
#2. Get the Java-side AIDL object callbacks that receives callbacks from CameraService:
callbacks = deviceImpl.getCallbacks() returns the CameraDeviceImpl.CameraDeviceCallbacks object
(it extends ICameraDeviceCallbacks.Stub); it is registered with CameraService over AIDL and handles data callbacks coming from CameraService.
#3. Connect to cameraService:
ICameraDeviceUser cameraUser = cameraService.connectDevice(callbacks, cameraId, mContext.getOpPackageName(), mContext.getAttributionTag(), uid).
cameraUser is the handle into cameraService; for HAL3 it is the CameraDeviceClient AIDL object (which extends BnCameraDeviceUser).
Then deviceImpl.setRemoteDevice(cameraUser) is called, so deviceImpl now provides two-way AIDL communication between the Java API layer and cameraService.
connectDevice() then calls
CameraService::connectHelper<hardware::camera2::ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id, CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, USE_CALLING_PID, API_2, /*legacyMode*/ false, /*shimUpdateOnly*/ false, /*out*/client).
CameraService::connectHelper calls makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid, clientUid, getpid(), legacyMode, CAMERA_HAL_API_VERSION_UNSPECIFIED, deviceVersion, effectiveApiLevel, /*out*/&tmp),
obtaining tmp (a BasicClient), which is cast to the Client type and assigned to client. deviceVersion comes from getDeviceVersion(), the device's HAL version: getDeviceVersion() gets maxVersion from mCameraProviderManager. From the HARDWARE_MODULE_API_VERSION macros, HAL1 with major 1 and minor 0 corresponds to CAMERA_MODULE_API_VERSION_1_0. The major/minor that getDeviceVersion sees originate from CameraProvider::getCameraIdList, which ultimately calls get_camera_info on the HAL camera_module_t and yields the camera name "device@<major>.<minor>/<type>/<id>".
CameraService::makeClient instantiates a different client object depending on deviceVersion and apiLevel:
HAL1 + API_1: CameraClient;
HAL1 + API_2: not supported, returns an error;
HAL3 + API_1: Camera2Client;
HAL3 + API_2: CameraDeviceClient.
4.1.3 For HAL1 + API_1, new CameraClient(cameraService, tmp, packageName, cameraIdToInt(cameraId), facing, clientPid, clientUid, servicePid, legacyMode) instantiates the CameraClient. From CameraClient.h, CameraClient derives from CameraService::Client, and from CameraService.h, the inner class Client derives from the inner class BasicClient,
so constructing a CameraClient runs CameraClient::CameraClient as well as CameraService::Client::Client and CameraService::BasicClient::BasicClient.
CameraService::Client::Client assigns the cameraId passed down from CameraService::connect to mCameraId; the mCameraId that CameraClient uses is the one defined in the inner class Client.
4.1.4 For HAL3 + API_2, new CameraDeviceClient(cameraService, tmp, packageName, cameraId, facing, clientPid, clientUid, servicePid) is called.
The CameraDeviceClient::CameraDeviceClient constructor stores remoteCallback in mRemoteCallback (tmp, the client itself, is what gets handed back as the ICameraDeviceUser object to the upper layer's connectDevice).
From the inheritance chain CameraDeviceClient -> Camera2ClientBase<CameraDeviceClientBase> -> CameraDeviceClientBase -> CameraService::BasicClient, the Camera2ClientBase<TClientBase>::Camera2ClientBase(...) constructor also runs and initializes mDevice with mDevice = new Camera3Device(cameraId).
After client->initialize(mCameraProviderManager) is called (mCameraProviderManager is the object described in 3.2 that caches camera information; halVersion in makeClient is CAMERA_HAL_API_VERSION_UNSPECIFIED, i.e. -1), client is assigned to device,
that is, client is assigned to the device variable passed in by the client-side connect. Subsequent client requests (startPreview and so on) go through device straight into the CameraClient or CameraDeviceClient methods inside CameraService.
frameworks/av/services/camera/libcameraservice/CameraService.cpp
Status CameraService::connect(
const sp<ICameraClient>& cameraClient,
int cameraId,
const String16& clientPackageName,
int clientUid,
int clientPid,
/*out*/
sp<ICamera>* device) {
ret = connectHelper<ICameraClient,Client>(cameraClient, id,
CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, clientPid, API_1,
/*legacyMode*/ false, /*shimUpdateOnly*/ false,
/*out*/client);
}
template<class CALLBACK, class CLIENT>
Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
/*out*/sp<CLIENT>& device) {
...
sp<CLIENT> client = nullptr;
sp<BasicClient> tmp = nullptr;
int deviceVersion = getDeviceVersion(cameraId, /*out*/&facing);
if(!(ret = makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid,
clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
/*out*/&tmp)).isOk()) {
return ret;
}
client = static_cast<CLIENT*>(tmp.get());
err = client->initialize(mCameraProviderManager);
device = client;
}
Status CameraService::makeClient(const sp<CameraService>& cameraService,
const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
/*out*/sp<BasicClient>* client) {
if (halVersion < 0 || halVersion == deviceVersion) {
// Default path: HAL version is unspecified by caller, create CameraClient
// based on device version reported by the HAL.
switch(deviceVersion) {
case CAMERA_DEVICE_API_VERSION_1_0:
if (effectiveApiLevel == API_1) { // Camera1 API route
sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
*client = new CameraClient(cameraService, tmp, packageName, cameraIdToInt(cameraId),
facing, clientPid, clientUid, getpid(), legacyMode);
} else { // Camera2 API route
ALOGW("Camera using old HAL version: %d", deviceVersion);
return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
"Camera device \"%s\" HAL version %d does not support camera2 API",
cameraId.string(), deviceVersion);
}
...
}
}
}
int CameraService::getDeviceVersion(const String8& cameraId, int* facing) {
hardware::hidl_version maxVersion{0,0};
res = mCameraProviderManager->getHighestSupportedVersion(cameraId.string(),
&maxVersion);
if (res != OK) return -1;
deviceVersion = HARDWARE_DEVICE_API_VERSION(maxVersion.get_major(), maxVersion.get_minor());
return deviceVersion;
}
CameraService::Client::Client(const sp<CameraService>& cameraService,
const sp<ICameraClient>& cameraClient,
const String16& clientPackageName,
const String8& cameraIdStr, int cameraFacing,
int clientPid, uid_t clientUid,
int servicePid) :
CameraService::BasicClient(cameraService,
IInterface::asBinder(cameraClient),
clientPackageName,
cameraIdStr, cameraFacing,
clientPid, clientUid,
servicePid),
mCameraId(CameraService::cameraIdToInt(cameraIdStr))
{
int callingPid = getCallingPid();
LOG1("Client::Client E (pid %d, id %d)", callingPid, mCameraId);
mRemoteCallback = cameraClient;
cameraService->loadSound();
LOG1("Client::Client X (pid %d, id %d)", callingPid, mCameraId);
}
CameraService::BasicClient::BasicClient(const sp<CameraService>& cameraService,
const sp<IBinder>& remoteCallback,
const String16& clientPackageName,
const String8& cameraIdStr, int cameraFacing,
int clientPid, uid_t clientUid,
int servicePid):
mCameraIdStr(cameraIdStr), mCameraFacing(cameraFacing),
mClientPackageName(clientPackageName), mClientPid(clientPid), mClientUid(clientUid),
mServicePid(servicePid),
mDisconnected(false),
mRemoteBinder(remoteCallback)
{
if (sCameraService == nullptr) {
sCameraService = cameraService;
}
mOpsActive = false;
mDestructionStarted = false;
// In some cases the calling code has no access to the package it runs under.
// For example, NDK camera API.
// In this case we will get the packages for the calling UID and pick the first one
// for attributing the app op. This will work correctly for runtime permissions
// as for legacy apps we will toggle the app op for all packages in the UID.
// The caveat is that the operation may be attributed to the wrong package and
// stats based on app ops may be slightly off.
if (mClientPackageName.size() <= 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16(kPermissionServiceName));
if (binder == 0) {
ALOGE("Cannot get permission service");
// Leave mClientPackageName unchanged (empty) and the further interaction
// with camera will fail in BasicClient::startCameraOps
return;
}
sp<IPermissionController> permCtrl = interface_cast<IPermissionController>(binder);
Vector<String16> packages;
permCtrl->getPackagesForUid(mClientUid, packages);
if (packages.isEmpty()) {
ALOGE("No packages for calling UID");
// Leave mClientPackageName unchanged (empty) and the further interaction
// with camera will fail in BasicClient::startCameraOps
return;
}
mClientPackageName = packages[0];
}
}
/frameworks/av/services/camera/libcameraservice/api1/CameraClient.h
class CameraClient : public CameraService::Client
{
}
/frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp
CameraClient::CameraClient(const sp<CameraService>& cameraService,
const sp<hardware::ICameraClient>& cameraClient,
const String16& clientPackageName,
int cameraId, int cameraFacing,
int clientPid, int clientUid,
int servicePid, bool legacyMode):
Client(cameraService, cameraClient, clientPackageName,
String8::format("%d", cameraId), cameraFacing, clientPid,
clientUid, servicePid)
{
}
/frameworks/av/camera/aidl/android/hardware/ICameraService.aidl
const int CAMERA_HAL_API_VERSION_UNSPECIFIED = -1;
/hardware/libhardware/include/hardware/camera_common.h
/**
* All module versions <= HARDWARE_MODULE_API_VERSION(1, 0xFF) must be treated
* as CAMERA_MODULE_API_VERSION_1_0
*/
#define CAMERA_MODULE_API_VERSION_1_0 HARDWARE_MODULE_API_VERSION(1, 0)
#define CAMERA_MODULE_API_VERSION_2_0 HARDWARE_MODULE_API_VERSION(2, 0)
#define CAMERA_MODULE_API_VERSION_2_1 HARDWARE_MODULE_API_VERSION(2, 1)
#define CAMERA_MODULE_API_VERSION_2_2 HARDWARE_MODULE_API_VERSION(2, 2)
#define CAMERA_MODULE_API_VERSION_2_3 HARDWARE_MODULE_API_VERSION(2, 3)
#define CAMERA_MODULE_API_VERSION_2_4 HARDWARE_MODULE_API_VERSION(2, 4)
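For reference, HARDWARE_MODULE_API_VERSION / HARDWARE_DEVICE_API_VERSION pack major and minor into one 16-bit value (major in the high byte, minor in the low byte), which is what getDeviceVersion() compares against the CAMERA_DEVICE_API_VERSION_* constants. A small sanity-check sketch, assuming that packing scheme from hardware/hardware.h:
#include <cstdint>
// Assumed packing, matching HARDWARE_MAKE_API_VERSION in hardware/hardware.h:
// major in the high byte, minor in the low byte.
constexpr uint16_t makeApiVersion(uint8_t major, uint8_t minor) {
    return static_cast<uint16_t>((major << 8) | minor);
}
static_assert(makeApiVersion(1, 0) == 0x0100, "HAL1 device version");
static_assert(makeApiVersion(3, 2) == 0x0302, "HAL3.2 device version");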
(Recall from CameraService.h: class Client : public hardware::BnCamera, public BasicClient)
4.2 Calling client->initialize(mCameraProviderManager)
Inheritance relationships:
CameraHardwareInterface -> V1_0::ICameraDeviceCallback | V1_0::ICameraDevicePreviewCallback
Camera3Device -> V3_5::ICameraDeviceCallback
4.2.1 For HAL1 + API_1:
CameraClient::initialize(mCameraProviderManager) first calls startCameraOps().
CameraService::BasicClient::startCameraOps() talks to the 'appops' service and checks whether the calling app (mClientUid, mClientPackageName) holds the AppOpsManager::OP_CAMERA permission; if not, an error code is returned. initialize() then instantiates CameraHardwareInterface with new CameraHardwareInterface(camera_device_name), where camera_device_name is mCameraId, so mName inside CameraHardwareInterface is mCameraId.
CameraHardwareInterface is CameraService's interface toward the platform camera HAL; platform vendors implement the camera HAL behind it.
It then calls mHardware->initialize(manager), passing the CameraProviderManager object to CameraHardwareInterface.
CameraHardwareInterface::initialize calls manager->openSession(mName.string(), this, &mHidlDevice) to obtain the CameraProvider-side object mHidlDevice (the android::hardware::camera::device::V1_0::implementation::CameraDevice object inside CameraProvider; that CameraDevice holds the camera_module_t obtained when the HAL .so was loaded, and it is found by walking mProviders; see 3.3.2 for how the ICameraDevice got into mProviders). From then on the CameraService layer calls the camera HAL through mHidlDevice. CameraHardwareInterface also implements the V1_0::ICameraDeviceCallback interface, which declares notifyCallback, dataCallback and related callbacks, so CameraHardwareInterface is registered as the listener for camera HAL data callbacks.
CameraProviderManager::openSession(mName.string(), this, &mHidlDevice) passes this (the CameraHardwareInterface object) to CameraProviderManager; the id parameter is the mName passed from CameraHardwareInterface::initialize, i.e. CameraClient's mCameraId, which per 4.1 is the cameraId passed into CameraService::connect.
#1 It first calls findDeviceInfoLocked(id, /*minVersion*/ {1,0}, /*maxVersion*/ {2,0}).
In CameraProviderManager::findDeviceInfoLocked, mProviders is the list described in 3.3.2 that stores the new ProviderInfo(newProvider, interface, this) objects, where newProvider is the string "legacy/0" and interface is the new CameraProvider() object from /hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp.
findDeviceInfoLocked walks mProviders, takes each provider's mDevices list, and for each deviceInfo checks whether the current id (the camera id) equals deviceInfo's mId; if so, and deviceInfo->mVersion falls within the requested version range, it returns the DeviceInfo1.
#2 It then casts with auto *deviceInfo1 = static_cast<ProviderInfo::DeviceInfo1*>(deviceInfo) and calls deviceInfo1->mInterface->open(callback). From the DeviceInfo1 definition in CameraProviderManager.h, mInterface is hardware::camera::device::V1_0::ICameraDevice, and from 3.3.2 it is the CameraProvider-side CameraDevice implementation object, so deviceInfo1->mInterface->open(callback) invokes open(callback) on the android::hardware::camera::device::V1_0::implementation::CameraDevice object inside CameraProvider. The callback here is the CameraHardwareInterface object.
4.2.2 For HAL3 + API_2:
CameraDeviceClient::initialize(mCameraProviderManager) calls initializeImpl(mCameraProviderManager).
CameraDeviceClient::initializeImpl(providerPtr) calls Camera2ClientBase::initialize(providerPtr); providerPtr is the mCameraProviderManager passed in above.
Camera2ClientBase<TClientBase>::initialize(sp<CameraProviderManager> manager) calls initializeImpl(manager); manager is again mCameraProviderManager.
Camera2ClientBase<TClientBase>::initializeImpl(TProviderPtr providerPtr) performs the following three main steps (providerPtr is mCameraProviderManager, TClientBase is CameraDeviceClient):
#1 Call TClientBase::startCameraOps(). With TClientBase = CameraDeviceClient and the inheritance chain CameraDeviceClient -> Camera2ClientBase<CameraDeviceClientBase> -> CameraDeviceClientBase -> CameraService::BasicClient, this resolves to
CameraService::BasicClient::startCameraOps(), which talks to the 'appops' service and checks whether the calling app (mClientUid, mClientPackageName) holds the AppOpsManager::OP_CAMERA permission; if not, an error code is returned.
#2 Call mDevice->initialize(providerPtr). From 4.1.4, Camera2ClientBase.mDevice is a Camera3Device.
Camera3Device::initialize(manager) calls manager->openSession(mId.string(), this, /*out*/ &session) to obtain the HAL-side session object (a hardware::camera::device::V3_2::ICameraDeviceSession) and calls session->getCaptureRequestMetadataQueue to get a std::shared_ptr<RequestMetadataQueue> queue.
It then calls mInterface = new HalInterface(session, queue), wrapping the session in HalInterface.mHidlSession; from then on the CameraService layer calls the camera HAL through mInterface (i.e. through the session). manager is mCameraProviderManager, and mId is the cameraId the app passed to cameraService.connectDevice. Camera3Device implements the V3_5::ICameraDeviceCallback interface, which declares processCaptureResult, notify and related callbacks, so Camera3Device is registered as the listener for camera HAL result/notify callbacks.
CameraProviderManager::openSession(mId.string(), this, /*out*/ &session) passes this (the Camera3Device object) to CameraProviderManager:
#2.1 It first calls auto deviceInfo = findDeviceInfoLocked(id, /*minVersion*/ {3,0}, /*maxVersion*/ {4,0}) and, just as above, retrieves the DeviceInfo3 from mProviders.
DeviceInfo3's mInterface is the android::hardware::camera::device::V3_2::implementation::CameraDevice object.
#2.2 It then casts with auto *deviceInfo3 = static_cast<ProviderInfo::DeviceInfo3*>(deviceInfo) and calls
deviceInfo3->mInterface->open(callback, open_cb) with an anonymous open_cb listener. callback is the Camera3Device object; through open_cb the CameraProvider returns a CameraDeviceSession::TrampolineSessionInterface_3_2 (which implements device::V3_2::ICameraDeviceSession) that is assigned to session.
CameraDevice::open(const sp<ICameraDeviceCallback>& callback, open_cb _hidl_cb):
1. calls mModule->open(mCameraId.c_str(), reinterpret_cast<hw_device_t**>(&device));
CameraModule::open calls mModule->common.methods->open(&mModule->common, id, device), i.e. the HAL's camera_module_t->common.methods->open(&mModule->common, id, device).
2. calls createSession(device, info.static_camera_characteristics, callback) to create the CameraDeviceSession object.
CameraDeviceSession::CameraDeviceSession initializes mResultBatcher with the callback (the Camera3Device object), stores the hw_device_t returned by the HAL open call into CameraDeviceSession.mDevice (it is really the camera3_device_t, whose first member common is a hw_device_t), and calls initialize().
CameraDeviceSession::initialize() calls mDevice->ops->initialize(mDevice, this) to register the HAL data callbacks; this is the CameraDeviceSession (which derives from camera3_callback_ops and defines the process_capture_result and notify methods).
From CameraDeviceSession.h, the camera3_callback_ops implementations in CameraDeviceSession map as follows:
process_capture_result corresponds to CameraDeviceSession::sProcessCaptureResult,
notify corresponds to CameraDeviceSession::sNotify.
Camera3Device::initialize(manager) finally calls
Camera3Device::initializeCommonLocked(), which:
1. starts the StatusTracker thread: mStatusTracker = new StatusTracker(this), mStatusTracker->run(String8::format("C3Dev-%s-Status", mId.string()).string());
2. creates the Camera3BufferManager: mBufferManager = new Camera3BufferManager();
3. starts the request thread that loops over requests from the app layer: mRequestThread = new RequestThread(this, mStatusTracker, mInterface), mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string()).
Camera3Device::RequestThread::threadLoop() blocks in a loop, pulling work from mRepeatingRequests and mRequestQueue and processing it.
#3 Call mDevice->setNotifyCallback(weakThis), registering CameraDeviceClient as Camera3Device.mListener
(and thus as Camera3Device.RequestThread.mListener and Camera3Device.PreparerThread.mListener).
CameraDeviceClient::initializeImpl finally creates the frame-processing thread with mFrameProcessor = new FrameProcessorBase(mDevice), registers CameraDeviceClient into FrameProcessorBase.mRangeListeners via mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID, FRAME_PROCESSOR_LISTENER_MAX_ID, /*listener*/this, /*sendPartials*/true), and starts the endless loop thread with mFrameProcessor->run(threadName.string()). The thread repeatedly calls FrameProcessorBase::threadLoop(), which calls Camera3Device::waitForNextFrame and blocks while mResultQueue is empty, until it is not;
it then calls processNewFrames(device) to deliver the data back to CameraDeviceClient.
In this way, CameraHardwareInterface (HAL1) or Camera3Device (HAL3) provides two-way communication between CameraService and the camera HAL.
mHardware->setCallbacks hands CameraClient's notifyCallback and related callback methods to CameraHardwareInterface.
To recap: HAL1 + API_1 creates CameraClient; HAL3 + API_1 creates Camera2Client; HAL3 + API_2 creates CameraDeviceClient.
To summarize, the main call flows of a CameraService connect are:
CameraClient -> CameraHardwareInterface -> android::hardware::camera::device::V1_0::implementation::CameraDevice -> the hw_device_t returned by the HAL camera_module_t open()
Camera2Client -> Camera3Device -> android::hardware::camera::device::V3_2::implementation::CameraDevice / CameraDeviceSession -> the hw_device_t returned by the HAL camera_module_t open(), plus the HAL camera3_device_t->ops->initialize()
CameraDeviceClient -> Camera3Device -> android::hardware::camera::device::V3_2::implementation::CameraDevice / CameraDeviceSession -> the hw_device_t returned by the HAL camera_module_t open(), plus the HAL camera3_device_t->ops->initialize()
5. CameraService preview
5.1 For HAL1 + API_1, as shown in 4.1, the client ends up calling
CameraClient::startPreview()
CameraClient::startPreviewMode()
#1 mHardware->setPreviewWindow(mPreviewWindow);
Here mPreviewWindow originates from /frameworks/base/core/java/android/hardware/Camera.java:
Camera.setPreviewDisplay(SurfaceHolder holder) calls setPreviewSurface(holder.getSurface()).
/frameworks/base/core/jni/android_hardware_Camera.cpp
android_hardware_Camera_setPreviewSurface(JNIEnv *env, jobject thiz, jobject jSurface) then does:
sp<IGraphicBufferProducer> gbp;
surface = android_view_Surface_getSurface(env, jSurface);
gbp = surface->getIGraphicBufferProducer(); // obtain the IGraphicBufferProducer
camera->setPreviewTarget(gbp)
frameworks/av/camera/Camera.cpp
Camera::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) then does:
sp <::android::hardware::ICamera> c = mCamera;
return c->setPreviewTarget(bufferProducer); // hands bufferProducer to the CameraClient interface inside CameraService
/frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp
-> CameraClient::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) obtains a binder object via IInterface::asBinder(bufferProducer) and creates a Surface object window wrapping bufferProducer via new Surface(bufferProducer, /*controlledByApp*/ true).
From /frameworks/native/libs/gui/include/gui/Surface.h, Surface inherits ANativeWindow.
It then calls setPreviewWindow(binder, window).
-> CameraClient::setPreviewWindow(const sp<IBinder>& binder, const sp<ANativeWindow>& window) sets mSurface = binder and mPreviewWindow = window;
which finally calls
-> CameraHardwareInterface::setPreviewWindow(const sp<ANativeWindow>& buf),
where buf is the window created in CameraClient::setPreviewTarget, i.e. the Surface built with new Surface(bufferProducer, /*controlledByApp*/ true) around the IGraphicBufferProducer received from the upper layer.
It assigns buf to the CameraHardwareInterface member mPreviewWindow (mPreviewWindow = buf)
and calls mHidlDevice->setPreviewWindow(buf.get() ? this : nullptr), where this is the CameraHardwareInterface (which implements the ICameraDevicePreviewCallback interface),
ultimately reaching the HAL-side implementation CameraDevice::setPreviewWindow(const sp<ICameraDevicePreviewCallback>& window).
#2 result = mHardware->startPreview();
CameraHardwareInterface::startPreview
CameraProviderManager::mapToStatusT(mHidlDevice->startPreview())
mHidlDevice here, per 3.3.2, is the hardware::camera::device::V1_0::ICameraDevice object, so this calls the HAL-implemented startPreview.
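The essential step in #1 above is wrapping the IGraphicBufferProducer coming from the app into a native Surface, which is an ANativeWindow, before the HAL path ever sees it. Below is a minimal illustrative sketch of that wrapping (my own sketch; CameraClient::setPreviewTarget does essentially this plus binder bookkeeping and error handling):
#include <gui/Surface.h>
#include <gui/IGraphicBufferProducer.h>
using namespace android;
// Wrap the producer received from the app into an ANativeWindow for the HAL side.
static sp<ANativeWindow> makePreviewWindow(
        const sp<IGraphicBufferProducer>& bufferProducer) {
    if (bufferProducer == nullptr) return nullptr;
    // controlledByApp = true: the queue is driven by an app-provided surface.
    sp<Surface> window = new Surface(bufferProducer, /*controlledByApp*/ true);
    return window;   // Surface inherits ANativeWindow
}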
5.2 For HAL3 + API_2, the main files involved are listed below; same-named .java classes and native .cpp files correspond to each other.
/frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
Wraps the camera device operations and talks to CameraService.
CameraDeviceImpl.mRemoteDevice is the CameraDeviceClient object inside CameraService.
CameraDeviceImpl.mCallbacks is the Java-side CameraDeviceImpl.CameraDeviceCallbacks object (extends ICameraDeviceCallbacks.Stub); it is registered with CameraService over AIDL and handles data callbacks from CameraService.
CameraDeviceImpl.mDeviceCallback is the CameraDevice.StateCallback listener the app passed to CameraManager.openCamera.
/frameworks/native/libs/gui/view/Surface.cpp // uses #include <gui/view/Surface.h>
/frameworks/base/core/java/android/view/Surface.java
/frameworks/base/core/jni/android/graphics/SurfaceTexture.cpp // uses #include <gui/Surface.h>
/frameworks/base/graphics/java/android/graphics/SurfaceTexture.java
/frameworks/av/camera/camera2/OutputConfiguration.cpp
/frameworks/base/core/java/android/hardware/camera2/params/OutputConfiguration.java
/frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp // uses #include <gui/Surface.h>
A camera2 app calls the API_2 entry point CameraManager.openCamera(cameraId, mDeviceStateCallback, mHandler), which calls
cameraUser = cameraService.connectDevice(callbacks, cameraId, mContext.getOpPackageName(), mContext.getAttributionTag(), uid) to obtain from cameraservice the ICameraDeviceUser.aidl object (CameraDeviceClient) cameraUser.
From the inheritance chain CameraDeviceClient -> Camera2ClientBase<CameraDeviceClientBase> -> CameraDeviceClientBase -> CameraService::BasicClient, hardware::camera2::BnCameraDeviceUser, CameraDeviceClient ultimately implements the ICameraDeviceUser.aidl interface.
The CameraDeviceImpl object deviceImpl is created, then deviceImpl.setRemoteDevice(cameraUser) assigns CameraDeviceClient's ICameraDeviceUser AIDL interface object to mRemoteDevice, so CameraDeviceImpl holds a binder reference to CameraDeviceClient inside cameraservice.
mDeviceCallback.onOpened(CameraDeviceImpl.this) is then invoked; through the registered mDeviceStateCallback.onOpened(CameraDeviceImpl.this) the app obtains the CameraDevice object mCameraDevice.
From 4.2.2, initializing CameraDeviceClient creates the Camera3Device object mDevice; Camera3Device initialization calls CameraProviderManager::openSession and then android::hardware::camera::device::V3_2::implementation::CameraDevice.
CameraDevice::open returns the CameraDeviceSession object. Roughly: CameraDevice::open uses mModule, the camera_module_t obtained when the HAL .so was loaded at initialization, calls mModule->common.methods->open(&mModule->common, id, device) to get the camera3_device_t, and assigns the camera3_device_t to CameraDeviceSession.mDevice.
Summary: operating on mCameraDevice goes through CameraDeviceImpl.java, then CameraDeviceClient.cpp in cameraservice, and finally CameraDevice.cpp / CameraDeviceSession.cpp in CameraProvider to drive the HAL3 interface.
App-level call sequence:
#1 mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW) creates the CaptureRequest.Builder object mPreviewBuilder, the preview request builder through which exposure, focus, preview orientation and so on can be set.
CameraDeviceImpl.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW) calls templatedRequest = mRemoteDevice.createDefaultRequest(templateType),
then wraps templatedRequest with new CaptureRequest.Builder(templatedRequest, /*reprocess*/false, CameraCaptureSession.SESSION_ID_NONE) and returns the CaptureRequest.Builder to the app. CaptureRequest.mIsReprocess is false; if the app had called CameraDeviceImpl.createReprocessCaptureRequest instead, CaptureRequest.mIsReprocess would be true.
CameraDeviceClient::createDefaultRequest(int templateId, /*out*/hardware::camera2::impl::CameraMetadataNative* request) calls mDevice->createDefaultRequest(templateId, &metadata);
once the metadata is obtained, request->swap(metadata) returns it to the app layer, into the CaptureRequest.mSettings of the CaptureRequest created through the CaptureRequest.Builder from mCameraDevice.createCaptureRequest; reading a setting in the app, e.g. mPreviewBuilder.get(CaptureRequest.CONTROL_AE_MODE), boils down to mSettings.get(key).
Camera3Device::createDefaultRequest(int templateId, CameraMetadata *request) calls mInterface->constructDefaultRequestSettings((camera3_request_template_t) templateId, &rawRequest).
Camera3Device::HalInterface::constructDefaultRequestSettings(CameraDevice.TEMPLATE_PREVIEW, /*out*/ camera_metadata_t **requestTemplate) calls mHidlSession->constructDefaultRequestSettings(RequestTemplate::PREVIEW, ...) with an anonymous ICameraDeviceSession::constructDefaultRequestSettings_cb listener, from which the camera_metadata_t is obtained and assigned to requestTemplate.
CameraDeviceSession::constructDefaultRequestSettings(RequestTemplate::PREVIEW, ICameraDeviceSession::constructDefaultRequestSettings_cb _hidl_cb)
calls mDevice->ops->construct_default_request_settings(mDevice, (int) type); CameraDeviceSession.mDevice is the camera3_device_t from the loaded HAL .so.
For example with Qualcomm's camera.msm8998.so, mDevice is effectively the mCameraDevice object (the camera3_device_t from the HAL .so), and QCamera3HardwareInterface::mCameraOps provides the ops implementations, e.g.
.construct_default_request_settings = QCamera3HardwareInterface::construct_default_request_settings,
#2 mPreviewBuilder.addTarget(mSurface) adds the texture view as a preview target (without it there is no preview); mSurface here is mSurface = new Surface(TextureView.getSurfaceTexture());
CaptureRequest.Builder.addTarget(mSurface) calls mRequest.mSurfaceSet.add(mSurface), storing mSurface in mSurfaceSet. /frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java implements Parcelable; when written, mSurfaceSet is converted to a Surface array and written as the second parcel field.
PS: the Surface.java constructor calls mNativeObject = nativeCreateFromSurfaceTexture(surfaceTexture); the native side calls new Surface(<IGraphicBufferProducer from the SurfaceTexture>, true) to create a Surface (the Surface from #include <gui/Surface.h>, which inherits ANativeWindow). nativeCreateFromSurfaceTexture calls SurfaceTexture_getProducer(env, surfaceTextureObj), defined in SurfaceTexture.cpp, which returns the BufferQueueProducer object (the SurfaceTexture is created the first time the TextureView draws; SurfaceTexture_init creates the BufferQueueProducer via BufferQueue::createBufferQueue and stores it in SurfaceTexture.java's mProducer field). See the sketch right after this note.
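For context on where that IGraphicBufferProducer comes from: the SurfaceTexture owns a BufferQueue, and its producer end is what the Java Surface wraps. A hedged sketch of creating such a producer/consumer pair with the libgui API (my own sketch; the real SurfaceTexture JNI additionally attaches a GLConsumer to the consumer end):
#include <gui/BufferQueue.h>
#include <gui/IGraphicBufferProducer.h>
#include <gui/IGraphicBufferConsumer.h>
#include <gui/Surface.h>
using namespace android;
// Create a producer/consumer pair; the producer end is what a Surface wraps.
static sp<Surface> makeSurfaceBackedByNewQueue() {
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);
    // A real consumer (e.g. GLConsumer for a SurfaceTexture) must be attached
    // to the consumer end before buffers can flow; omitted in this sketch.
    return new Surface(producer, /*controlledByApp*/ true);
}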
#3 mCameraDevice.createCaptureSession(Arrays.asList(mSurface, mImageReader.getSurface()), mSessionStateCallback, mHandler)
creates a capture session. The main job is to create streams for the two surfaces, configure them into the HAL, and then notify the app. Two surfaces are needed here (the TextureView and the ImageReader); the session object manages preview requests and capture requests. Once the mSessionStateCallback.onConfigured(CameraCaptureSession session) callback arrives, the app calls mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler) for continuous preview (that flow is described in #5 below).
#4 CameraDeviceImpl.createCaptureSession(List<Surface> outputs, CameraCaptureSession.StateCallback callback, Handler handler) creates the capture session; it calls
CameraDeviceImpl.createCaptureSessionInternal(null, outConfigurations, callback, handler, /*operatingMode*/ICameraDeviceUser.NORMAL_MODE), where outConfigurations is the list of OutputConfiguration objects wrapping the outputs (Surfaces). PS: the first parameter inputConfig is null here; if the app calls createReprocessableCaptureSession, inputConfig is non-null and input buffers can be fed through an ImageWriter for reprocess requests. This method then:
#4.1 calls configureSuccess = configureStreamsChecked(null, outputConfigurations, operatingMode) to configure the streams and block until idle. PS: if inputConfig is non-null, mRemoteDevice.createInputStream(...) is called to create an input stream, ultimately a Camera3InputStream object, whose info is stored in CameraDeviceClient.mInputStream.
It iterates the outputConfigurations (size 2: the OutputConfiguration objects for the TextureView and ImageReader Surfaces), calls mRemoteDevice.createStream(outConfig) to create the stream info for each, and stores the created stream id and OutputConfiguration via mConfiguredOutputs.put(streamId, outConfig).
CameraDeviceClient::createStream(const hardware::camera2::params::OutputConfiguration &outputConfiguration, /*out*/int32_t* newStreamId) returns newStreamId (the mNextStreamId below) to the Java layer. It calls bufferProducers = outputConfiguration.getGraphicBufferProducers(); bufferProducers has size 1 and holds the IGraphicBufferProducer of the TextureView or the ImageReader. The Parcelable parameter passing from the Java layer to the C++ layer goes as follows:
/frameworks/base/core/java/android/hardware/camera2/params/OutputConfiguration.java implements Parcelable; writeToParcel writes the Surface list mSurfaces (size 1) as the last field.
/frameworks/av/camera/camera2/OutputConfiguration.cpp implements Parcelable; readFromParcel reads that last field and stores each surface.graphicBufferProducer into the IGraphicBufferProducer list mGbps (size 1).
It then iterates bufferProducers and calls createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer), adding the created surface to the surfaces list and the bufferProducer to the binders list (bufferProducer is the IGraphicBufferProducer object).
createSurfaceFromGbp creates the Surface with new Surface(bufferProducer, useAsync) (the Surface from #include <gui/Surface.h>, which inherits ANativeWindow) and reads width, height, format, dataSpace and related info from the ANativeWindow into streamInfo. Since the surfaces list has size 2, ordered TextureView then ImageReader, the width/height/format/dataSpace/consumerUsage written into streamInfo come from the ANativeWindow created from the last one, the ImageReader's bufferProducer.
It calls mDevice->createStream(surfaces, deferredConsumer, streamInfo.width, streamInfo.height, streamInfo.format, streamInfo.dataSpace, static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()), &streamId, outputConfiguration.getSurfaceSetID(), isShared); isShared set from OutputConfiguration.java is false, and this streamId is the newStreamId mentioned above.
It then stores streamId and streamInfo into mStreamInfoMap via mStreamInfoMap[streamId] = streamInfo, and iterates the binders list calling mStreamMap.add(binder, StreamSurfaceId(streamId, i++)); StreamSurfaceId.streamId stores the stream id created by createStream, and StreamSurfaceId.surfaceId is an auto-incrementing value paired with that streamId.
Camera3Device::createStream(const std::vector<sp<Surface>>& consumers, bool hasDeferredConsumer, uint32_t width, uint32_t height, int format, android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id, int streamSetId, bool isShared, uint64_t consumerUsage)
#4.1.1 calls newStream = new Camera3OutputStream(mNextStreamId, consumers[0], width, height, format, dataSpace, rotation, mTimestampOffset, streamSetId) to create the stream; this mNextStreamId is the newStreamId returned to the Java layer. It calls mOutputStreams.add(mNextStreamId, newStream) to add the stream id and stream to the mOutputStreams map. The consumers[0] parameter is the Surface created from the TextureView's or ImageReader's IGraphicBufferProducer.
Camera3OutputStream::Camera3OutputStream(int id, sp<Surface> consumer, uint32_t width, uint32_t height, int format, android_dataspace dataSpace, camera3_stream_rotation_t rotation, nsecs_t timestampOffset, int setId) : Camera3IOStreamBase(id, CAMERA3_STREAM_OUTPUT, width, height, /*maxSize*/0, format, dataSpace, rotation, setId),
The Camera3OutputStream constructor assigns the preview widget's Surface to the mConsumer member.
Inheritance: Camera3OutputStream : Camera3IOStreamBase : Camera3Stream : camera3_stream, Camera3StreamInterface.
#4.1.2 then calls configureStreamsLocked(mOperatingMode), which has two steps.
Step one: call mInterface->configureStreams(&config) to configure the streams into the HAL.
Camera3Device::configureStreamsLocked(int operatingMode) calls mInterface->configureStreams(&config), where config.streams are the Camera3OutputStream objects cached in mOutputStreams.
Camera3Device::HalInterface::configureStreams(camera3_stream_configuration *config) calls mHidlSession->configureStreams(requestedConfiguration, _hidl_cb);
requestedConfiguration.streams[i] is filled from config->streams[i], e.g. config->streams[i].dataSpace is assigned to requestedConfiguration.streams[i].dataSpace.
CameraDeviceSession::configureStreams(const StreamConfiguration& requestedConfiguration, ICameraDeviceSession::configureStreams_cb _hidl_cb)
assigns requestedConfiguration.streams[i] into mStreamMap[id] and stream_list.streams[i]; stream_list.streams is of type camera3_stream_t**, corresponding to the Camera3OutputStream objects (which inherit camera3_stream) created for the TextureView and the ImageReader.
Finally it calls mDevice->ops->configure_streams(mDevice, &stream_list) to configure the streams through the HAL interface.
PS: relationship between the two mStreamMap members:
CameraDeviceClient.mStreamMap's StreamSurfaceId.mStreamId holds the ids of the streams (Camera3OutputStream) created for the two surfaces passed into createCaptureSession;
CameraDeviceSession.mStreamMap's Camera3Stream entries correspond to those same streams (Camera3OutputStream) created for the two surfaces passed into createCaptureSession.
Step two: call outputStream->finishConfiguration().
Camera3Stream::finishConfiguration() calls configureQueueLocked()
Camera3OutputStream::configureQueueLocked()
Camera3OutputStream::configureConsumerQueueLocked() calls mConsumer->connect(NATIVE_WINDOW_API_CAMERA, /*listener*/mBufferReleasedListener, /*reportBufferRemoval*/true)
Surface::connect(int api)
BufferQueueProducer::connect(...), during which BufferQueueCore.mAllowAllocation is set to true.
#4.2 calls newSession = new CameraCaptureSessionImpl(mNextSessionId++, input, callback, handler, this, mDeviceHandler, configureSuccess) and assigns newSession to mCurrentSession.
The CameraCaptureSessionImpl(id, input, callback, Handler stateHandler, deviceImpl, deviceStateHandler, configureSuccess) constructor wraps callback in a CallbackProxies.SessionStateCallbackProxy object assigned to mStateCallback, assigns deviceImpl (the CameraDeviceImpl) to mDeviceImpl, and since the configureSuccess returned by #4.1 is true, calls mStateCallback.onConfigured(this); the app then receives mSessionStateCallback.onConfigured, and the mCameraSession obtained through that callback is this CameraCaptureSessionImpl.java object.
#5 然后camera2应用调用 mCameraSession.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler) 持续预览,应用层 对 CameraCaptureSession对象 mCameraSession操作即调用 CameraCaptureSessionImpl.java接口方法
#5.1 CameraCaptureSessionImpl.setRepeatingRequest(CaptureRequest request, CaptureCallback callback,Handler handler) 调用 mDeviceImpl.setRepeatingRequest(request,createCaptureCallbackProxy(handler, callback), mDeviceHandler)
CameraDeviceImpl.setRepeatingRequest(CaptureRequest request, CaptureCallback callback,Handler handler) 这个 request为 mPreviewBuilder.build() 将 request(CaptureRequest.java类,类中mSurfaceSet存储TextureView获取的Surface) 添加到 requestList 列表,这时 requestList 元素个数是1
调用 submitCaptureRequest(requestList, callback, handler, /*streaming*/true) 这里的 callback 值为null表示不监听预览数据状态回调
CameraDeviceImpl.submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,Executor executor, boolean repeating) 先调用 mRemoteDevice.submitRequestList(requestArray, repeating) 将requestList转换成 requestArray 数组,requestArray 元素个数是1,这里通过调用mRemoteDevice.submitRequestList aidl 跨进程,参数 requestArray 传递 到c++层关系如下:
/frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java //实现 Parcelable接口,第二个参数将mSurfaceSet转换成Surface数组 写入Parcelable,数组size是1,为预览 .addTarget(TextureView获取的Surface)
/frameworks/av/include/camera/camera2/CaptureRequest.h //实现 Parcelable接口,
/frameworks/av/camera/camera2/CaptureRequest.cpp //读取第二个参数Surface数组,写入 mSurfaceList(size是1) ,
CameraDeviceClient::submitRequestList( const std::vector<hardware::camera2::CaptureRequest>& requests,bool streaming, /*out*/hardware::camera2::utils::SubmitInfo *submitInfo)
该方法遍历 requests (size是1)初始化 metadataRequestList (size是1) 和 surfaceMapList (size是1) , 遍历 request.mSurfaceList(size是1,对应TextureView获取的Surface) 从 mStreamMap 获取 TextureView获取的Surface 对应创建的流id信息,并赋值给surfaceMap(size是1,对应TextureView对应创建的streamId和surfaceId,surfaceId 是自增值和streamId对应),将 surfaceMap添加到surfaceMapList(size是1)
调用metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS, &outputStreamIds[0],outputStreamIds.size()) 将创建的流信息存储到metadata的 ANDROID_REQUEST_OUTPUT_STREAMS 节点,
如果request.mIsReprocess 为true 调用 metadata.update(ANDROID_REQUEST_INPUT_STREAMS, &mInputStream.id, 1) 将 mInputStream.id 存储到metadata的 ANDROID_REQUEST_INPUT_STREAMS 节点,
然后调用 mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,&(submitInfo->mLastFrameNumber))
Camera3Device::setStreamingRequestList(const List<const CameraMetadata> &requests,const std::list<const SurfaceMap> &surfaceMaps,int64_t *lastFrameNumber)
Camera3Device::submitRequestsHelper(const List<const CameraMetadata> &requests,const std::list<const SurfaceMap> &surfaceMaps,bool repeating, /*out*/int64_t *lastFrameNumber)
#5.1.1 调用 convertMetadataListToRequestListLocked(requests, surfaceMaps,repeating, /*out*/&requestList) 调用Camera3Device::createCaptureRequest 创建Camera3Device.CaptureRequest对象newRequest, 将CameraMetadata对象requests 赋值给创建的 CaptureRequest对象的 newRequest->mSettings,将 surfaceMaps (size是1)中SurfaceMap (size是1,存储对应TextureView对应创建的streamId和surfaceId) 赋值给 newRequest->mOutputSurfaces[0],并将newRequest添加到 requestList (size是1) , requestList->mBatchSize 值为1
#5.1.2 然后调用 mRequestThread->setRepeatingRequests(requestList, lastFrameNumber)
Camera3Device::RequestThread::setRepeatingRequests(const RequestList &requests,/*out*/int64_t *lastFrameNumber) 调用 mRepeatingRequests.insert(mRepeatingRequests.begin(),requests.begin(), requests.end())
将 requests(size是1) 信息 添加到 RequestList (宏定义 List<sp<Camera3Device.CaptureRequest类> >) 对象 mRepeatingRequests(size是1) , 调用 mRequestSignal.signal() 唤醒阻塞线程
Camera3Device::RequestThread::threadLoop()
@1 The loop blocks until it can pull requests from mRepeatingRequests / mRequestQueue; each pulled request is wrapped in a Camera3Device::CaptureRequest and appended to the mNextRequests list (size 1).
@2 It then calls Camera3Device::RequestThread::prepareHalRequests() to prepare a batch of HAL requests and output buffers: it iterates over mNextRequests and copies each Camera3Device::CaptureRequest into the halRequest member of the corresponding NextRequest.
Here mNextRequests has size 1, and mNextRequests[0].mOutputSurfaces is the SurfaceMap (size 1) mapping the TextureView's streamId to its surfaceId.
SurfaceMap is defined as typedef std::unordered_map<int, std::vector<size_t>> SurfaceMap (see the small example below).
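A tiny self-contained example of what such a SurfaceMap holds for the single-stream preview case (the ids are illustrative):
#include <cstdio>
#include <unordered_map>
#include <vector>

// SurfaceMap: stream id -> surface ids belonging to that stream.
typedef std::unordered_map<int, std::vector<size_t>> SurfaceMap;

int main() {
    SurfaceMap surfaceMap;
    // One stream (the TextureView's) with a single surface id.
    surfaceMap[/*streamId*/ 0] = {/*surfaceId*/ 0};
    for (const auto& [streamId, surfaceIds] : surfaceMap) {
        std::printf("stream %d has %zu surface(s)\n", streamId, surfaceIds.size());
    }
    return 0;
}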
halRequest->output_buffers is sized to the size of nextRequest.outputBuffers->array(), i.e. 1, and then outputStream->getBuffer(&outputBuffers->editItemAt(j), captureRequest->mOutputSurfaces[0]) is called.
outputStream, i.e. nextRequest.captureRequest->mOutputStreams (size 1), is the Camera3OutputStream created for the TextureView in #3.1.
Camera3Stream::getBuffer(camera3_stream_buffer *buffer, const std::vector<size_t>& surface_ids): surface_ids has size 1 and holds the StreamSurfaceId.surfaceId created for the TextureView; it calls getBufferLocked(buffer, surface_ids).
Camera3OutputStream::getBufferLocked(camera3_stream_buffer *buffer,const std::vector<size_t>&)
*1. Call getBufferLockedCommon(&anb, &fenceFd).
Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) performs three steps:
Step 1: mBufferManager->getBufferForStream(getId(), getStreamSetId(), &gb, fenceFd) creates a GraphicBuffer (which derives from the ANativeWindowBuffer struct) via new GraphicBuffer, assigns it to gb, and gb is assigned to anb.
Step 2: mConsumer->attachBuffer(*anb) attaches the GraphicBuffer anb to the Surface, i.e. to BufferQueueProducer's mSlots[*outSlot].mGraphicBuffer and mCore->mActiveBuffers; *outSlot is the slot index found returned by waitForFreeSlotThenRelock (0 on the first call).
Step 3: mConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd) dequeues an ANativeWindowBuffer (i.e. a GraphicBuffer) from the preview widget's Surface and assigns it to anb.
The Surface->dequeueBuffer flow of step 3 is as follows:
In Surface.cpp, dequeueBuffer is wired to hook_dequeueBuffer, i.e. it calls
Surface::hook_dequeueBuffer(ANativeWindow* window, ANativeWindowBuffer** buffer, int* fenceFd), which calls c->dequeueBuffer(buffer, fenceFd).
Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd)
1. It calls mGraphicBufferProducer->dequeueBuffer(...) to obtain a slot index into mSlots, then fetches the GraphicBuffer from mSlots and assigns it to the buffer parameter. The call crosses Binder to the process that hosts the BufferQueue (the BufferQueueProducer side).
BufferQueueProducer::dequeueBuffer(int* outSlot,...)
1.1 Set up the parameters and check that a connection to the BufferQueue has been established.
1.2 Call waitForFreeSlotThenRelock(FreeSlotCaller::Dequeue, &found) with caller = FreeSlotCaller::Dequeue:
it first tries to take a slot index that already has a GraphicBuffer bound from mCore->mFreeBuffers and assigns it to found;
if mFreeBuffers is empty (as on the first call), it takes a slot index without a bound GraphicBuffer from mCore->mFreeSlots (1 on the first call).
1.3 buffer(mSlots[found].mGraphicBuffer) fetches the GraphicBuffer for that slot from mSlots (BufferQueueCore's mSlots) and inserts found into mCore->mActiveBuffers.
If the fetched buffer is null (as on the first call), a new GraphicBuffer (which derives from ANativeWindowBuffer) is created and assigned to mSlots[*outSlot].mGraphicBuffer, with *outSlot = found (a simplified slot-bookkeeping sketch follows).
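A simplified, standard-C++ model of the slot bookkeeping described above (sizes and names are illustrative, not the real BufferQueueCore):
#include <optional>
#include <set>

// mFreeSlots    = free slots with no GraphicBuffer attached,
// mFreeBuffers  = free slots that still have a GraphicBuffer attached,
// mActiveBuffers = slots currently dequeued/queued.
struct SlotBookkeeping {
    std::set<int> freeSlots{0, 1};   // illustrative, not the real 64-slot array
    std::set<int> freeBuffers;
    std::set<int> activeBuffers;

    // Mirrors waitForFreeSlotThenRelock(): prefer a slot that already has a buffer.
    std::optional<int> takeFreeSlot() {
        if (!freeBuffers.empty()) {
            int found = *freeBuffers.begin();
            freeBuffers.erase(found);
            activeBuffers.insert(found);
            return found;             // buffer can be reused, no allocation needed
        }
        if (!freeSlots.empty()) {
            int found = *freeSlots.begin();
            freeSlots.erase(found);
            activeBuffers.insert(found);
            return found;             // caller must allocate a new GraphicBuffer
        }
        return std::nullopt;          // the real code would block here
    }
};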
The GraphicBuffer::GraphicBuffer(...) constructor calls initWithSize(...).
GraphicBuffer::initWithSize(...) calls allocator.allocate(...), where allocator is the GraphicBufferAllocator.
GraphicBufferAllocator::allocate(...) calls mAllocator->allocate(info, stride, handle) to allocate the GraphicBuffer memory and returns the allocated handle.
2. The GraphicBuffer gbuf now refers to the GraphicBuffer bound to the BufferSlot obtained from BufferQueueProducer.
*2. Call handoutBufferLocked(*buffer, &(anb->handle), /*acquireFence*/fenceFd, /*releaseFence*/-1, CAMERA3_BUFFER_STATUS_OK, /*output*/true).
Camera3IOStreamBase::handoutBufferLocked(camera3_stream_buffer &buffer,buffer_handle_t *handle, int acquireFence,int releaseFence,camera3_buffer_status_t status,bool output)
This method assigns the handle of the ANativeWindowBuffer obtained from the Surface to buffer.buffer, i.e. to halRequest->output_buffers[0]->handle.
@3 It then calls sendRequestsBatch().
Camera3Device::RequestThread::sendRequestsBatch() calls mInterface->processBatchCaptureRequests(requests, &numRequestProcessed); requests here are camera3_capture_request_t objects taken from the halRequest members of mNextRequests.
Camera3Device::HalInterface::processBatchCaptureRequests(std::vector<camera3_capture_request_t*>& requests, /*out*/uint32_t* numRequestProcessed) ultimately talks to the CameraDeviceSession::TrampolineSessionInterface_3_2 object (which implements device::V3_2::ICameraDeviceSession, see ICameraDeviceSession.hal) obtained from the CameraProvider in 4.2.2,
calling mHidlSession->processCaptureRequest(captureRequests, cachesToRemove, callback). captureRequests is filled by wrapAsHidlRequest(requests[i], /*out*/&captureRequests[i], /*out*/&handlesCreated) from requests, e.g. (camera3_capture_request_t->output_buffers + i)->buffer is copied into CaptureRequest->outputBuffers[i].buffer.
wrapAsHidlRequest also calls pushInflightBufferLocked(captureRequest->frameNumber, streamId, src->buffer, src->acquire_fence),
which uses frameNumber and streamId as the key of mInflightBufferMap and std::make_pair(buffer, acquireFence) as its value, where buffer is (camera3_capture_request_t->output_buffers + i)->buffer (a minimal sketch of such a map follows).
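A minimal sketch of such an in-flight buffer map; the key packing shown is an assumption for illustration, not necessarily the packing HalInterface actually uses:
#include <cstdint>
#include <unordered_map>
#include <utility>

using buffer_handle_ptr = const void*;   // stand-in for buffer_handle_t*

// One possible way to combine frame number and stream id into a single key.
static uint64_t makeKey(uint32_t frameNumber, int32_t streamId) {
    return (static_cast<uint64_t>(frameNumber) << 32) |
           static_cast<uint32_t>(streamId);
}

// mInflightBufferMap-like structure: remembers which buffer (and acquire fence)
// was handed to the HAL for a given (frameNumber, streamId), so the result path
// can find it again via popInflightBuffer().
std::unordered_map<uint64_t, std::pair<buffer_handle_ptr, int>> gInflight;

void pushInflightBuffer(uint32_t frame, int32_t stream,
                        buffer_handle_ptr buf, int acquireFence) {
    gInflight[makeKey(frame, stream)] = std::make_pair(buf, acquireFence);
}

bool popInflightBuffer(uint32_t frame, int32_t stream, buffer_handle_ptr* buf) {
    auto it = gInflight.find(makeKey(frame, stream));
    if (it == gInflight.end()) return false;
    *buf = it->second.first;
    gInflight.erase(it);
    return true;
}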
CameraDeviceSession::processCaptureRequest(captureRequests, cachesToRemove, callback) iterates over captureRequests (size 1) and calls processOneCaptureRequest(captureRequests[i]).
CameraDeviceSession::processOneCaptureRequest(const CaptureRequest& request) converts the device::V3_2::CaptureRequest back into a camera3_capture_request_t object halRequest and calls the HAL entry point mDevice->ops->process_capture_request(mDevice, &halRequest) to submit the request to the HAL, where mDevice is the HAL's camera3_device_t object.
/hardware/libhardware/include/hardware/camera3.h
typedef struct camera3_capture_request {
uint32_t frame_number;
const camera_metadata_t *settings;
/**
* The input stream buffer for this request, if any
*/
camera3_stream_buffer_t *input_buffer;
/**
* Number of output buffers for this capture request; at least 1
*/
uint32_t num_output_buffers;
/**
* Array of num_output_buffers stream buffers (filled into the camera3_capture_request_t halRequest).
* For the preview request submitted via CameraCaptureSessionImpl.setRepeatingRequest, output_buffers has size 1 and
* halRequest->output_buffers[0]->handle is the ANativeWindowBuffer->handle dequeued from the preview widget's Surface.
* For the still-capture request submitted via CameraCaptureSessionImpl.capture, output_buffers has size 1 and
* halRequest->output_buffers[0]->handle is the ANativeWindowBuffer->handle dequeued from the ImageReader's Surface.
*/
const camera3_stream_buffer_t *output_buffers;
} camera3_capture_request_t;
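A hedged sketch of how a caller might populate this struct for the single-output preview case described above (the helper name and values are illustrative; the real population happens in prepareHalRequests()):
#include <hardware/camera3.h>
#include <string.h>

static void build_preview_request(camera3_capture_request_t *req,
                                  uint32_t frame_number,
                                  const camera_metadata_t *settings,
                                  camera3_stream_buffer_t *preview_buffer) {
    memset(req, 0, sizeof(*req));
    req->frame_number = frame_number;     // monotonically increasing per request
    req->settings = settings;             // capture settings (NULL reuses the last ones)
    req->input_buffer = NULL;             // no reprocess input for plain preview
    req->num_output_buffers = 1;          // one output: the preview stream
    req->output_buffers = preview_buffer; // buffer->buffer is the dequeued handle
}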
typedef struct camera3_capture_result {
uint32_t frame_number;
const camera_metadata_t *result;
uint32_t num_output_buffers;
const camera3_stream_buffer_t *output_buffers;
const camera3_stream_buffer_t *input_buffer;
uint32_t partial_result;
} camera3_capture_result_t;
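A hedged sketch of how a HAL implementation might hand one finished frame back through the camera3_callback_ops it received in initialize() (the helper name is illustrative; the framework side of this call is described in the capture-result flow below):
#include <hardware/camera3.h>
#include <string.h>

static void return_one_frame(const camera3_callback_ops_t *ops,
                             uint32_t frame_number,
                             const camera_metadata_t *metadata,
                             const camera3_stream_buffer_t *filled_buffer) {
    camera3_capture_result_t result;
    memset(&result, 0, sizeof(result));
    result.frame_number = frame_number;
    result.result = metadata;           // full metadata for this frame
    result.num_output_buffers = 1;      // the single preview/JPEG buffer
    result.output_buffers = filled_buffer;
    result.input_buffer = NULL;
    result.partial_result = 1;          // 1 = complete result when partials are unused
    ops->process_capture_result(ops, &result);
}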
/frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp
status_t CameraClient::initialize(sp<CameraProviderManager> manager) {
char camera_device_name[10];
snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);
mHardware = new CameraHardwareInterface(camera_device_name);
res = mHardware->initialize(manager);
if (res != OK) {
ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
mHardware.clear();
return res;
}
mHardware->setCallbacks(notifyCallback,
dataCallback,
dataCallbackTimestamp,
handleCallbackTimestampBatch,
(void *)(uintptr_t)mCameraId);
}
/frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h
class CameraHardwareInterface :
public virtual RefBase,
public virtual hardware::camera::device::V1_0::ICameraDeviceCallback,
public virtual hardware::camera::device::V1_0::ICameraDevicePreviewCallback {
public:
explicit CameraHardwareInterface(const char *name):
mHidlDevice(nullptr),
mName(name),
mPreviewScalingMode(NOT_SET),
mPreviewTransform(NOT_SET),
mPreviewWidth(NOT_SET),
mPreviewHeight(NOT_SET),
mPreviewFormat(NOT_SET),
mPreviewUsage(0),
mPreviewSwapInterval(NOT_SET),
mPreviewCrop{NOT_SET,NOT_SET,NOT_SET,NOT_SET}
{
}
}
/frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.cpp
status_t CameraHardwareInterface::initialize(sp<CameraProviderManager> manager) {
ALOGI("Opening camera %s", mName.string());
status_t ret = manager->openSession(mName.string(), this, &mHidlDevice);
if (ret != OK) {
ALOGE("%s: openSession failed! %s (%d)", __FUNCTION__, strerror(-ret), ret);
}
return ret;
}
void CameraHardwareInterface::stopPreview()
{
ALOGV("%s(%s)", __FUNCTION__, mName.string());
if (CC_LIKELY(mHidlDevice != nullptr)) {
mHidlDevice->stopPreview();
}
}
status_t CameraHardwareInterface::startPreview()
{
ALOGV("%s(%s)", __FUNCTION__, mName.string());
if (CC_LIKELY(mHidlDevice != nullptr)) {
return CameraProviderManager::mapToStatusT(
mHidlDevice->startPreview());
}
return INVALID_OPERATION;
}
/frameworks/av/services/camera/libcameraservice/common/CameraProviderManager.cpp
status_t CameraProviderManager::openSession(const std::string &id,
const sp<hardware::camera::device::V1_0::ICameraDeviceCallback>& callback,
/*out*/
sp<hardware::camera::device::V1_0::ICameraDevice> *session) {
std::lock_guard<std::mutex> lock(mInterfaceMutex);
auto deviceInfo = findDeviceInfoLocked(id,
/*minVersion*/ {1,0}, /*maxVersion*/ {2,0});
if (deviceInfo == nullptr) return NAME_NOT_FOUND;
auto *deviceInfo1 = static_cast<ProviderInfo::DeviceInfo1*>(deviceInfo);
hardware::Return<Status> status = deviceInfo1->mInterface->open(callback);
if (!status.isOk()) {
ALOGE("%s: Transaction error opening a session for camera device %s: %s",
__FUNCTION__, id.c_str(), status.description().c_str());
return DEAD_OBJECT;
}
if (status == Status::OK) {
*session = deviceInfo1->mInterface;
}
return mapToStatusT(status);
}
CameraProviderManager::ProviderInfo::DeviceInfo* CameraProviderManager::findDeviceInfoLocked(
const std::string& id,
hardware::hidl_version minVersion, hardware::hidl_version maxVersion) const {
for (auto& provider : mProviders) {
for (auto& deviceInfo : provider->mDevices) {
if (deviceInfo->mId == id &&
minVersion <= deviceInfo->mVersion && maxVersion >= deviceInfo->mVersion) {
return deviceInfo.get();
}
}
}
return nullptr;
}
frameworks/av/services/camera/libcameraservice/device3/Camera3Device.h
class Camera3Device :
public CameraDeviceBase,
virtual public hardware::camera::device::V3_2::ICameraDeviceCallback,
private camera3_callback_ops {
status_t createStream(const std::vector<sp<Surface>>& consumers,
bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
int streamSetId = camera3::CAMERA3_STREAM_SET_ID_INVALID,
bool isShared = false, uint64_t consumerUsage = 0) override;
status_t configureStreams(/*inout*/ camera3_stream_configuration *config);
status_t setStreamingRequestList(const List<const CameraMetadata> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
int64_t *lastFrameNumber = NULL) override;
status_t processBatchCaptureRequests(
std::vector<camera3_capture_request_t*>& requests,
/*out*/uint32_t* numRequestProcessed);
}
frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
Camera3Device::Camera3Device(const String8 &id):
mId(id),
mOperatingMode(NO_MODE),
mIsConstrainedHighSpeedConfiguration(false),
mStatus(STATUS_UNINITIALIZED),
mStatusWaiters(0),
mUsePartialResult(false),
mNumPartialResults(1),
mTimestampOffset(0),
mNextResultFrameNumber(0),
mNextReprocessResultFrameNumber(0),
mNextShutterFrameNumber(0),
mNextReprocessShutterFrameNumber(0),
mListener(NULL),
mVendorTagId(CAMERA_METADATA_INVALID_VENDOR_ID)
{
ATRACE_CALL();
camera3_callback_ops::notify = &sNotify;
camera3_callback_ops::process_capture_result = &sProcessCaptureResult;
ALOGV("%s: Created device for camera %s", __FUNCTION__, mId.string());
}
status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
int streamSetId, bool isShared, uint64_t consumerUsage) {
...
newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
width, height, format, dataSpace, rotation,
mTimestampOffset, streamSetId);
newStream->setStatusTracker(mStatusTracker);
newStream->setBufferManager(mBufferManager);
res = mOutputStreams.add(mNextStreamId, newStream);
...
}
status_t Camera3Device::HalInterface::configureStreams(camera3_stream_configuration *config) {
...
if (hidlSession_3_3 != nullptr) {
// We do; use v3.3 for the call
ALOGV("%s: v3.3 device found", __FUNCTION__);
auto err = hidlSession_3_3->configureStreams_3_3(requestedConfiguration,
[&status, &finalConfiguration]
(common::V1_0::Status s, const device::V3_3::HalStreamConfiguration& halConfiguration) {
finalConfiguration = halConfiguration;
status = s;
});
} else {
// We don't; use v3.2 call and construct a v3.3 HalStreamConfiguration
ALOGV("%s: v3.2 device found", __FUNCTION__);
HalStreamConfiguration finalConfiguration_3_2;
auto err = mHidlSession->configureStreams(requestedConfiguration,
[&status, &finalConfiguration_3_2]
(common::V1_0::Status s, const HalStreamConfiguration& halConfiguration) {
finalConfiguration_3_2 = halConfiguration;
status = s;
});
finalConfiguration.streams.resize(finalConfiguration_3_2.streams.size());
for (size_t i = 0; i < finalConfiguration_3_2.streams.size(); i++) {
finalConfiguration.streams[i].v3_2 = finalConfiguration_3_2.streams[i];
finalConfiguration.streams[i].overrideDataSpace =
requestedConfiguration.streams[i].dataSpace;
}
}
...
}
status_t Camera3Device::HalInterface::processBatchCaptureRequests(
std::vector<camera3_capture_request_t*>& requests,/*out*/uint32_t* numRequestProcessed) {
...
auto err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
[&status, &numRequestProcessed] (auto s, uint32_t n) {
status = s;
*numRequestProcessed = n;
});
...
}
hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.cpp
CameraDeviceSession::CameraDeviceSession(
camera3_device_t* device,
const camera_metadata_t* deviceInfo,
const sp<ICameraDeviceCallback>& callback) :
camera3_callback_ops({&sProcessCaptureResult, &sNotify}),
mDevice(device),
mDeviceVersion(device->common.version),
mIsAELockAvailable(false),
mDerivePostRawSensKey(false),
mNumPartialResults(1),
mResultBatcher(callback) {
mDeviceInfo = deviceInfo;
...
}
Return<void> CameraDeviceSession::configureStreams(
const StreamConfiguration& requestedConfiguration,
ICameraDeviceSession::configureStreams_cb _hidl_cb) {
...
//mDevice is actually the mCameraDevice object, i.e. the camera3_device_t loaded from the HAL xxx.so
status_t ret = mDevice->ops->configure_streams(mDevice, &stream_list);
...
}
Return<void> CameraDeviceSession::processCaptureRequest(
const hidl_vec<CaptureRequest>& requests,
const hidl_vec<BufferCache>& cachesToRemove,
ICameraDeviceSession::processCaptureRequest_cb _hidl_cb) {
...
for (size_t i = 0; i < requests.size(); i++, numRequestProcessed++) {
s = processOneCaptureRequest(requests[i]);
if (s != Status::OK) {
break;
}
}
...
}
Status CameraDeviceSession::processOneCaptureRequest(const CaptureRequest& request) {
...
status_t ret = mDevice->ops->process_capture_request(mDevice, &halRequest);
...
}
6. Taking a picture through CameraService
6.1 With HAL1 and an API_1 client, the call chain is
CameraClient::takePicture(int msgType)
#1 It calls mHardware->enableMsgType(msgType):
CameraHardwareInterface::enableMsgType(int32_t msgType) calls mHidlDevice->enableMsgType(msgType);
CameraDevice::enableMsgType(msgType) finally calls the HAL hook mDevice->ops->enable_msg_type(mDevice, msgType), where mDevice is a camera_device_t object.
#2 It calls mHardware->takePicture():
CameraHardwareInterface::takePicture() calls mHidlDevice->takePicture();
CameraDevice::takePicture() finally calls the HAL hook mDevice->ops->take_picture(mDevice), where mDevice is a camera_device_t object.
HIDL file involved:
hardware/interfaces/camera/device/1.0/ICameraDeviceCallback.hal // defines the HAL1 ICameraDeviceCallback interface, implemented by CameraHardwareInterface
6.2 For camera2 with API_2 the main files are listed below; a .java class and the native .cpp file with the same name correspond to each other:
/frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
/frameworks/native/libs/gui/Surface.cpp // derives from ANativeWindow and holds an IGraphicBufferProducer
/frameworks/native/libs/gui/view/Surface.cpp // implements Parcelable, counterpart of Surface.java, includes <gui/view/Surface.h>
/frameworks/base/core/java/android/view/Surface.java
/frameworks/base/core/jni/android/graphics/SurfaceTexture.cpp // includes <gui/Surface.h>
/frameworks/base/core/jni/android_view_Surface.cpp
/frameworks/base/graphics/java/android/graphics/SurfaceTexture.java
/frameworks/av/camera/camera2/OutputConfiguration.cpp
/frameworks/base/core/java/android/hardware/camera2/params/OutputConfiguration.java
/frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp // includes <gui/Surface.h>
HIDL files involved:
/hardware/interfaces/camera/device/3.2/types.hal // defines the CaptureRequest/CaptureResult structs
/hardware/interfaces/camera/device/3.2/ICameraDevice.hal // defines the ICameraDevice interface
/hardware/interfaces/camera/device/3.2/ICameraDeviceCallback.hal // defines the HAL3 ICameraDeviceCallback interface, implemented by Camera3Device
/hardware/interfaces/camera/device/3.2/ICameraDeviceSession.hal // defines the ICameraDeviceSession interface
As shown in 5.2, operating on mCameraDevice means calling CameraDeviceImpl.java, which calls CameraDeviceClient.cpp in cameraservice, which ultimately drives the HAL3 interface through CameraDevice.cpp and CameraDeviceSession.cpp in the CameraProvider.
The camera2 app-level capture flow has three main steps.
Step 1: mCaptureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE) creates the CaptureRequest.Builder mCaptureBuilder for the still-capture request; the app then adds the ImageReader's Surface to it, sets the JPEG orientation, and so on.
CameraDeviceImpl.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE) calls templatedRequest = mRemoteDevice.createDefaultRequest(templateType)
and wraps templatedRequest in new CaptureRequest.Builder(templatedRequest, /*reprocess*/false, CameraCaptureSession.SESSION_ID_NONE), which is returned to the app.
So CaptureRequest.mIsReprocess is false; it is true only if the app calls CameraDeviceImpl.createReprocessCaptureRequest instead.
CameraDeviceClient::createDefaultRequest(int templateId, /*out*/hardware::camera2::impl::CameraMetadataNative* request) calls mDevice->createDefaultRequest(templateId, &metadata)
and returns the metadata to the Java layer as templatedRequest via request->swap(metadata).
Camera3Device::createDefaultRequest(int templateId, CameraMetadata *request) calls mInterface->constructDefaultRequestSettings((camera3_request_template_t) templateId, &rawRequest).
Camera3Device::HalInterface::constructDefaultRequestSettings(CameraDevice.TEMPLATE_STILL_CAPTURE, /*out*/camera_metadata_t **requestTemplate)
calls mHidlSession->constructDefaultRequestSettings(RequestTemplate::STILL_CAPTURE, constructDefaultRequestSettings_cb callback) and copies the camera_metadata_t received in the callback into requestTemplate.
CameraDeviceSession::constructDefaultRequestSettings(RequestTemplate::STILL_CAPTURE, ICameraDeviceSession::constructDefaultRequestSettings_cb _hidl_cb)
calls mDevice->ops->construct_default_request_settings(mDevice, (int) type), where mDevice is the camera3_device_t obtained when the HAL xxx.so was opened;
for example with Qualcomm's camera.msm8998.so, mDevice is the mCameraDevice object and QCamera3HardwareInterface::mCameraOps provides the ops implementations, e.g.
.construct_default_request_settings = QCamera3HardwareInterface::construct_default_request_settings,
Step 2: mCaptureBuilder.addTarget(mImageReader.getSurface()) adds the ImageReader's mSurface; mImageReader was created with mImageReader = ImageReader.newInstance(Config.SHOOT_PIC_WIDTH, Config.SHOOT_PIC_HEIGHT, ImageFormat.JPEG, 1).
The app also calls mImageReader.setOnImageAvailableListener(listener) to install the OnImageAvailableListener into ImageReader.mListener.
CaptureRequest.Builder.addTarget(mImageReader.getSurface()) calls mRequest.mSurfaceSet.add(mImageReader.getSurface()), storing the Surface in mSurfaceSet.
ImageReader.newInstance calls new ImageReader(width, height, format, maxImages, usage).
The ImageReader constructor performs two main steps (*1, *2).
*1. It calls nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage).
android_media_ImageReader.ImageReader_init calls BufferQueue::createBufferQueue(&gbProducer, &gbConsumer), which creates the BufferQueueCore instance shared by gbProducer/gbConsumer;
BufferQueueCore::BufferQueueCore() initializes mFreeSlots (size 2), mUnusedSlots (size 64-2) and mSlots (a BufferSlot[64] array).
It creates the BufferQueueProducer (implements IGraphicBufferProducer) gbProducer and the BufferQueueConsumer (implements IGraphicBufferConsumer) gbConsumer,
then bufferConsumer = new BufferItemConsumer(gbConsumer, ...) creates the BufferItemConsumer (derived from ConsumerBase), and new JNIImageReaderContext creates the FrameAvailableListener implementation;
bufferConsumer->setFrameAvailableListener(ctx), i.e.
ConsumerBase::setFrameAvailableListener(const wp<FrameAvailableListener>& listener), stores the JNIImageReaderContext in ConsumerBase.mFrameAvailableListener.
*2. It calls mSurface = nativeGetSurface() to create the Surface on the native side:
android_media_ImageReader.ImageReader_getSurface calls android_view_Surface_createFromIGraphicBufferProducer(env, gbp), where gbp is gbProducer;
android_view_Surface.android_view_Surface_createFromIGraphicBufferProducer(env, bufferProducer) calls new Surface(bufferProducer, true) to create the native Surface (from <gui/Surface.h>) wrapping gbProducer,
and android_view_Surface_createFromSurface then creates the Surface.java object via JNI (new Surface(long nativeObject)), storing the native Surface pointer in mNativeObject.
Step 3: mCameraSession.capture(mCaptureBuilder.build(), mCaptureCallback, mHandler)
performs the capture. The CaptureCallback only reports capture status; the image data arrives through the ImageReader.onImageAvailable callback described above. mCameraSession is the CameraCaptureSessionImpl returned by StateCallback.onConfigured in 5.2 #3.2.
CameraCaptureSessionImpl.capture(...) calls mDeviceImpl.capture(request, createCaptureCallbackProxy(handler, callback), mDeviceHandler).
CameraDeviceImpl.capture(CaptureRequest request, CaptureCallback callback, Handler handler): request is mCaptureBuilder.build(); the request (a CaptureRequest.java object whose mSurfaceSet holds mImageReader.getSurface()) is added to requestList, which then has one element.
It calls submitCaptureRequest(requestList, callback, handler, /*streaming*/false); callback is non-null this time.
CameraDeviceImpl.submitCaptureRequest(requestList, callback, handler, /*streaming*/false) calls mRemoteDevice.submitRequestList(requestArray, false) after converting requestList into requestArray (one element),
stores the callback together with the request ids of the preview/capture requests in mCaptureCallbackMap, and falls back to a handler for the current thread if the app passed a null handler.
CameraDeviceClient::submitRequestList( const std::vector<hardware::camera2::CaptureRequest>& requests,bool streaming, /*out*/hardware::camera2::utils::SubmitInfo *submitInfo)
The rest of the flow matches 5.2 #4: it crosses into native code via mRemoteDevice.submitRequestList over AIDL, with slightly different parameters.
Difference 1: requestArray reaches the C++ layer through
/frameworks/base/core/java/android/hardware/camera2/CaptureRequest.java // implements Parcelable; the second field written is mSurfaceSet converted to a Surface array,
size 1, holding the capture target added via addTarget(mImageReader.getSurface());
/frameworks/av/include/camera/camera2/CaptureRequest.h // the native Parcelable counterpart;
/frameworks/av/camera/camera2/CaptureRequest.cpp // reads the second field (the Surface array) into mSurfaceList (size 1).
Difference 2: halRequest->output_buffers[0]->handle is the ANativeWindowBuffer->handle dequeued (via Surface->dequeueBuffer) from the ImageReader's Surface.
As in the preview flow, the request finally reaches mDevice->ops->process_capture_request(mDevice, &halRequest), where mDevice is the HAL's camera3_device_t object.
Capture-result callback flow:
As 4.2.2 shows, CameraDeviceSession::initialize() calls mDevice->ops->initialize(mDevice, this) to register the HAL result callbacks, so a capture triggers
CameraDeviceSession::sProcessCaptureResult(const camera3_callback_ops *cb, const camera3_capture_result *hal_result), which calls d->mResultBatcher.processCaptureResult(result); d is the CameraDeviceSession recovered from cb, mResultBatcher forwards the converted results to the registered ICameraDeviceCallback (implemented by Camera3Device), and the result data comes from hal_result.
Camera3Device::processCaptureResult(const hardware::hidl_vec<hardware::camera::device::V3_2::CaptureResult>& results) iterates results and calls processOneCaptureResultLocked(result).
Camera3Device::processOneCaptureResultLocked(const hardware::camera::device::V3_2::CaptureResult& result)
first builds a camera3_capture_result object r and calls mInterface->popInflightBuffer(result.frameNumber, bSrc.streamId, &buffer) to take the buffer out of mInflightBufferMap by frame number and stream id;
it then copies the result data into r, e.g. r.output_buffers = result.outputBuffers.data(); r.output_buffers[i].stream is the Camera3OutputStream cached in mOutputStreams,
r.output_buffers[i].buffer is the camera3_capture_request_t output buffer cached in mInflightBufferMap, and r.output_buffers[i].acquire_fence = -1;
finally it calls processCaptureResult(&r).
Camera3Device::processCaptureResult(const camera3_capture_result *result) performs two main steps.
Step 1: it first calls returnOutputBuffers(result->output_buffers, result->num_output_buffers, shutterTimestamp); result->output_buffers is a camera3_stream_buffer_t*.
Camera3Device::returnOutputBuffers(const camera3_stream_buffer_t *outputBuffers, size_t numBuffers, nsecs_t timestamp) calls stream->returnBuffer(outputBuffers[i], timestamp).
Camera3Stream::returnBuffer(const camera3_stream_buffer &buffer, nsecs_t timestamp) calls returnBufferLocked(buffer, timestamp).
Camera3OutputStream::returnBufferLocked(const camera3_stream_buffer &buffer, nsecs_t timestamp) calls returnAnyBufferLocked(buffer, timestamp, /*output*/true).
Camera3IOStreamBase::returnAnyBufferLocked(const camera3_stream_buffer &buffer, nsecs_t timestamp, bool output) calls returnBufferCheckedLocked(buffer, timestamp, output, &releaseFence).
Camera3OutputStream::returnBufferCheckedLocked(const camera3_stream_buffer &buffer, nsecs_t timestamp, bool output, /*out*/sp<Fence> *releaseFenceOut) performs two sub-steps (*1, *2).
*1 ANativeWindowBuffer *anwBuffer = container_of(buffer.buffer, ANativeWindowBuffer, handle) recovers the ANativeWindowBuffer that owns the handle returned by the HAL;
buffer.buffer is CaptureResult.outputBuffers.buffer (of type buffer_handle_t *).
*2 It calls queueBufferToConsumer(currentConsumer, anwBuffer, anwReleaseFence).
Camera3OutputStream::queueBufferToConsumer(sp<ANativeWindow>& consumer,ANativeWindowBuffer* buffer, int anwReleaseFence)
This calls consumer->queueBuffer(consumer.get(), buffer, anwReleaseFence) to publish the filled buffer to the stream's consumer Surface (Camera3OutputStream.mConsumer): the TextureView's Surface for a preview stream, the ImageReader's Surface for a capture stream.
In Surface.cpp, queueBuffer is wired to hook_queueBuffer:
Surface::hook_queueBuffer(ANativeWindow* window, ANativeWindowBuffer* buffer, int fenceFd) calls c->queueBuffer(buffer, fenceFd);
Surface::queueBuffer(android_native_buffer_t* buffer, int fenceFd) calls mGraphicBufferProducer->queueBuffer(i, input, &output).
input carries mostly app-controlled state (mDataSpace, mScalingMode, mStickyTransform, ...) that affects the final display; output is filled in by BufferQueueProducer as the return value.
BufferQueueProducer::queueBuffer(int slot, const QueueBufferInput &input, QueueBufferOutput *output) packs the parameters into a BufferItem object item
(mSlots[slot].mGraphicBuffer into item.mGraphicBuffer, the slot index into item.mSlot), pushes item onto BufferQueueCore.mQueue,
takes BufferQueueCore.mConsumerListener as frameAvailableListener and calls frameAvailableListener->onFrameAvailable(item).
As for how the listener chain is wired: ConsumerBase's constructor wraps itself in a BufferQueue::ProxyConsumerListener and hands it to consumerConnect(), and
BufferQueueConsumer::connect() stores that listener in BufferQueueCore.mConsumerListener, which is how onFrameAvailable reaches ConsumerBase.
So ConsumerBase::onFrameAvailable(const BufferItem& item) runs and calls mFrameAvailableListener->onFrameAvailable(item).
*** For a TextureView, the SurfaceTexture_init flow shows that
a JNISurfaceTextureContext object is stored in ConsumerBase.mFrameAvailableListener;
JNISurfaceTextureContext::onFrameAvailable(const BufferItem& /* item */) calls, via JNI,
SurfaceTexture.java's postEventFromNative(WeakReference<SurfaceTexture> weakSelf), which posts to mOnFrameAvailableHandler.
When TextureView creates its SurfaceTexture it registers TextureView.mUpdateListener with mOnFrameAvailableHandler, so postEventFromNative ends up in TextureView.mUpdateListener.onFrameAvailable, which calls updateLayer() and invalidate() to redraw.
*** For an ImageReader, the ImageReader_init flow shows that
a JNIImageReaderContext object is stored in ConsumerBase.mFrameAvailableListener;
JNIImageReaderContext::onFrameAvailable calls, via JNI,
ImageReader.java's postEventFromNative(Object selfRef), which posts to mListenerHandler.
The OnImageAvailableListener registered via ImageReader.setOnImageAvailableListener is attached to mListenerHandler, so postEventFromNative ends up invoking the app's OnImageAvailableListener. (A minimal sketch of this listener pattern follows.)
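Both branches reduce to the same observer pattern: the consumer stores a frame-available listener and fires it when queueBuffer() publishes a buffer. A minimal, framework-free sketch of that pattern (all names are stand-ins for the real classes):
#include <iostream>
#include <memory>

// Stand-in for gui::FrameAvailableListener (JNISurfaceTextureContext /
// JNIImageReaderContext both play this role in the real code).
struct FrameAvailableListener {
    virtual ~FrameAvailableListener() = default;
    virtual void onFrameAvailable() = 0;
};

// Stand-in for ConsumerBase: holds the listener set via setFrameAvailableListener()
// and fires it when the producer queues a buffer.
class ConsumerStub {
public:
    void setFrameAvailableListener(std::shared_ptr<FrameAvailableListener> l) {
        mListener = std::move(l);
    }
    void onBufferQueued() {                 // called from the queueBuffer() path
        if (mListener) mListener->onFrameAvailable();
    }
private:
    std::shared_ptr<FrameAvailableListener> mListener;
};

// App-side observer, analogous to TextureView.mUpdateListener or
// ImageReader's OnImageAvailableListener.
struct InvalidateOnFrame : FrameAvailableListener {
    void onFrameAvailable() override { std::cout << "redraw / read image\n"; }
};

int main() {
    ConsumerStub consumer;
    consumer.setFrameAvailableListener(std::make_shared<InvalidateOnFrame>());
    consumer.onBufferQueued();              // simulates a frame arriving
}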
Step 2: it then calls sendCaptureResult(metadata, request.resultExtras, collectedPartialResult, frameNumber, hasInputBufferInRequest).
Camera3Device::sendCaptureResult
Camera3Device::insertResultLocked inserts the result into mResultQueue, which is then drained by the following loop:
FrameProcessorBase::threadLoop()
#1 calls device->waitForNextFrame(kWaitDuration) (10 ms);
Camera3Device::waitForNextFrame(nsecs_t timeout) blocks while mResultQueue is empty and returns once it is not;
#2 when mResultQueue is not empty, processNewFrames(device) dispatches the capture status:
FrameProcessorBase::processNewFrames
FrameProcessorBase::processSingleFrame
FrameProcessorBase::processListeners iterates the RangeListener (CameraDeviceClient) entries in mRangeListeners and calls RangeListener.FilteredListener.onResultAvailable.
CameraDeviceClient::onResultAvailable takes the AIDL callback registered by the Java layer (remoteCb = mRemoteCallback) and calls remoteCb->onResultReceived(result.mMetadata, result.mResultExtras, result.mPhysicalMetadatas).
CameraDeviceImpl.onResultReceived then does two things:
1. It looks up the holder of the app's CameraCaptureSession.CaptureCallback by the requestId of the preview/capture request: holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
mCaptureCallbackMap holds the callbacks the app registered via mCameraSession.setRepeatingRequest or mCameraSession.capture.
2. It delivers the preview/capture result to the app:
holder.getCallback().onCaptureCompleted(CameraDeviceImpl.this, holder.getRequest(i), resultInBatch) reports capture completion to the app.
7. CameraProvider overview
CameraProvider process name: camera-provider-2-4; the corresponding binary is /vendor/bin/hw/android.hardware.camera.provider@2.4-service
and the corresponding library is android.hardware.camera.provider@2.4-impl.so.
cameraserver talks to the CameraProvider across processes via HIDL; the HIDL interface files used by CameraProvider are listed below.
Unlike AIDL, whose binder driver is /dev/binder, HIDL uses the hwbinder driver /dev/hwbinder.
Compiling ICameraProvider.hal / ICameraProviderCallback.hal produces the HIDL glue for those interfaces; CameraProvider.h/CameraProvider.cpp under default/ implement the ICameraProvider interface (and the callback counterpart implements ICameraProviderCallback).
/hardware/interfaces/camera/provider/2.4/ICameraProvider.hal
/hardware/interfaces/camera/provider/2.4/ICameraProviderCallback.hal
The Android.bp below builds android.hardware.camera.provider@2.4-impl.so and the binary android.hardware.camera.provider@2.4-service.
init_rc: ["android.hardware.camera.provider@2.4-service.rc"] causes android.hardware.camera.provider@2.4-service.rc to be installed under vendor/etc/init/.
At boot, init walks the rc files under vendor/etc/init/, creates the camera-provider-2-4 process and starts the android.hardware.camera.provider@2.4-service binary.
hardware/interfaces/camera/provider/2.4/default/Android.bp
cc_library_shared {
name: "android.hardware.camera.provider@2.4-impl",
defaults: ["hidl_defaults"],
...
}
cc_binary {
name: "android.hardware.camera.provider@2.4-service",
defaults: ["hidl_defaults"],
init_rc: ["android.hardware.camera.provider@2.4-service.rc"],
...
}
hardware/interfaces/camera/provider/2.4/default/android.hardware.camera.provider@2.4-service.rc
service camera-provider-2-4 /vendor/bin/hw/android.hardware.camera.provider@2.4-service
class hal
user cameraserver
group audio camera input drmrpc
ioprio rt 4
capabilities SYS_NICE
writepid /dev/cpuset/camera-daemon/tasks /dev/stune/top-app/tasks
7.1 Service registration. defaultPassthroughServiceImplementation registers a passthrough service with the service manager. The service object is provided by the HIDL_FETCH_ICameraProvider() function in
/vendor/lib/hw/android.hardware.camera.provider@2.4-impl.so,
which returns new CameraProvider(); the CameraProvider service is registered under the instance name "legacy/0".
/hardware/interfaces/camera/provider/2.4/default/service.cpp
int main()
{
ALOGI("Camera provider Service is starting.");
// The camera HAL may communicate to other vendor components via
// /dev/vndbinder
android::ProcessState::initWithDriver("/dev/vndbinder");
return defaultPassthroughServiceImplementation<ICameraProvider>("legacy/0", /*maxThreads*/ 6);
}
The ICameraProvider service also has to be declared to the system, otherwise clients cannot obtain it; this is one way HIDL differs from AIDL.
The system manages this through the device manifest, e.g. the following entry is added to device/qcom/xxx/manifest.xml:
<hal format="hidl">
<name>android.hardware.camera.provider</name>
<transport>hwbinder</transport>
<version>2.4</version>
<interface>
<name>ICameraProvider</name>
<instance>legacy/0</instance>
</interface>
</hal>
7.2 CameraProvider initialization
The CameraProvider::CameraProvider constructor calls CameraProvider::initialize().
7.2.1 CameraProvider loads the camera HAL .so
CameraProvider::initialize() calls hw_get_module(CAMERA_HARDWARE_MODULE_ID, (const hw_module_t **)&rawModule),
which is hardware.c's hw_get_module(); it in turn calls hw_get_module_by_class(), which looks under
/system/lib/hw, /vendor/lib/hw or /odm/lib/hw for a camera.<suffix>.so file.
The suffix comes from the chip vendor's ro.hardware.camera system property; if the property reads v4l2, the library is camera.v4l2.so, and hw_get_module() returns the hw_module_t exported by that library's HAL_MODULE_INFO_SYM symbol.
CameraProvider::initialize() then stores each cameraIdStr together with a freshly built deviceName in mCameraDeviceNames.
ps: deviceName has the format device@<major>.<minor>/<type>/<id>, where major and minor are parsed from the deviceVersion returned by the HAL camera_module_t's get_camera_info (device_version is assigned when the HAL constructs its Camera object, see 8.1.3);
this is the data later consumed by the getCameraDeviceInterface_V1_x call in 3.3.2 (a small format/parse sketch follows).
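A small standard-C++ sketch of building and parsing that device name (parseDeviceName is a rough stand-in for matchDeviceName, not the real implementation):
#include <cstdio>
#include <string>

// Format used by the provider: device@<major>.<minor>/<type>/<id>,
// e.g. "device@3.2/legacy/0" (see getHidlDeviceName below).
std::string makeDeviceName(int major, int minor,
                           const std::string& type, const std::string& id) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "device@%d.%d/%s/%s",
                  major, minor, type.c_str(), id.c_str());
    return buf;
}

// Rough counterpart of matchDeviceName(): pull the version and camera id back out.
bool parseDeviceName(const std::string& name, int* major, int* minor,
                     std::string* type, std::string* id) {
    char typeBuf[16] = {0}, idBuf[16] = {0};
    if (std::sscanf(name.c_str(), "device@%d.%d/%15[^/]/%15s",
                    major, minor, typeBuf, idBuf) != 4) return false;
    *type = typeBuf;
    *id = idBuf;
    return true;
}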
ps: during CameraProvider initialization,
for HAL1 it goes through CameraProvider::getCameraDeviceInterface_V1_x(..),
which calls device = new android::hardware::camera::device::V1_0::implementation::CameraDevice(mModule, cameraId, mCameraDeviceNames);
the HAL1 CameraDevice objects end up tracked by CameraProviderManager.mDevices, and upper-layer calls on a CameraDevice ultimately go through mModule (the camera_module_t loaded from the HAL .so).
For HAL3 it goes through CameraProvider::getCameraDeviceInterface_V3_x(..),
which calls deviceImpl = new android::hardware::camera::device::V3_2::implementation::CameraDevice(mModule, cameraId, mCameraDeviceNames);
the HAL3 CameraDevice objects are likewise tracked by CameraProviderManager.mDevices, and upper-layer calls ultimately go through mModule.
ps: every hardware module's HAL defines HAL_MODULE_INFO_SYM.
7.2.2 In CameraProvider::initialize(), rawModule points at the camera_module_t structure inside the HAL;
new CameraModule(rawModule) wraps rawModule, so every call on mModule is really a call on rawModule:
mModule->init() ends up in rawModule->init(),
mModule->setCallbacks(this) ends up in rawModule->set_callbacks(this), registering the callbacks with the HAL,
mModule->getDeviceVersion(i) ends up in rawModule->get_camera_info(cameraId, &rawInfo).
hardware/libhardware/include/hardware/hardware.h
typedef struct hw_module_t {
/** tag must be initialized to HARDWARE_MODULE_TAG */
uint32_t tag;
uint16_t module_api_version;
#define version_major module_api_version
uint16_t hal_api_version;
#define version_minor hal_api_version
/** Identifier of module */
const char *id;
/** Name of this module */
const char *name;
/** Author/owner/implementor of the module */
const char *author;
/** Modules methods */
struct hw_module_methods_t* methods;
/** module's dso */
void* dso;
#ifdef __LP64__
uint64_t reserved[32-7];
#else
/** padding to 128 bytes, reserved for future use */
uint32_t reserved[32-7];
#endif
} hw_module_t;
/hardware/libhardware/include/hardware/camera_common.h
#define CAMERA_HARDWARE_MODULE_ID "camera"
typedef struct camera_module {
//the first member is a hw_module_t, so the struct can be cast to hw_module_t
hw_module_t common;
int (*get_number_of_cameras)(void);
int (*get_camera_info)(int camera_id, struct camera_info *info);
int (*set_callbacks)(const camera_module_callbacks_t *callbacks);
void (*get_vendor_tag_ops)(vendor_tag_ops_t* ops);
int (*open_legacy)(const struct hw_module_t* module, const char* id,
uint32_t halVersion, struct hw_device_t** device);
int (*set_torch_mode)(const char* camera_id, bool enabled);
int (*init)();
/* reserved for future use */
void* reserved[5];
} camera_module_t;
/hardware/interfaces/camera/provider/2.4/default/CameraProvider.h
struct CameraProvider : public ICameraProvider, public camera_module_callbacks_t {
...
}
/hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp
CameraProvider::CameraProvider() :
camera_module_callbacks_t({sCameraDeviceStatusChange,
sTorchModeStatusChange}) {
mInitFailed = initialize();
}
bool CameraProvider::initialize() {
camera_module_t *rawModule;
int err = hw_get_module(CAMERA_HARDWARE_MODULE_ID,
(const hw_module_t **)&rawModule);
/* rawModule now points at the camera_module_t structure inside the HAL;
* at this point CameraProvider is bound to the camera HAL and can
* drive the camera HAL through this pointer
*/
if (err < 0) {
ALOGE("Could not load camera HAL module: %d (%s)", err, strerror(-err));
return true;
}
mModule = new CameraModule(rawModule);
err = mModule->init();
err = mModule->setCallbacks(this);
//read the preferred HAL3 minor version from a system property
mPreferredHal3MinorVersion =
property_get_int32("ro.camera.wrapper.hal3TrebleMinorVersion", 3);
ALOGV("Preferred HAL 3 minor version is %d", mPreferredHal3MinorVersion);
switch(mPreferredHal3MinorVersion) {
case 2:
case 3:
// OK
break;
default:
ALOGW("Unknown minor camera device HAL version %d in property "
"'camera.wrapper.hal3TrebleMinorVersion', defaulting to 3", mPreferredHal3MinorVersion);
mPreferredHal3MinorVersion = 3;
}
mNumberOfLegacyCameras = mModule->getNumberOfCameras();
//number of cameras; iterate them and fill mCameraIds / mCameraDeviceNames
for (int i = 0; i < mNumberOfLegacyCameras; i++) {
char cameraId[kMaxCameraIdLen];
snprintf(cameraId, sizeof(cameraId), "%d", i);
std::string cameraIdStr(cameraId);
mCameraStatusMap[cameraIdStr] = CAMERA_DEVICE_STATUS_PRESENT;
mCameraIds.add(cameraIdStr);
// initialize mCameraDeviceNames and mOpenLegacySupported
mOpenLegacySupported[cameraIdStr] = false;
int deviceVersion = mModule->getDeviceVersion(i);
mCameraDeviceNames.add(
std::make_pair(cameraIdStr,
getHidlDeviceName(cameraIdStr, deviceVersion)));
}
}
std::string CameraProvider::getHidlDeviceName(
std::string cameraId, int deviceVersion) {
// Maybe consider create a version check method and SortedVec to speed up?
if (deviceVersion != CAMERA_DEVICE_API_VERSION_1_0 &&
deviceVersion != CAMERA_DEVICE_API_VERSION_3_2 &&
deviceVersion != CAMERA_DEVICE_API_VERSION_3_3 &&
deviceVersion != CAMERA_DEVICE_API_VERSION_3_4 ) {
return hidl_string("");
}
//deviceVersion tells HAL1 from HAL3; for HAL3 the minor version mPreferredHal3MinorVersion comes from the property above
//returns a deviceName string of the form "device@<major>.<minor>/legacy/<cameraId>"
bool isV1 = deviceVersion == CAMERA_DEVICE_API_VERSION_1_0;
int versionMajor = isV1 ? 1 : 3;
int versionMinor = isV1 ? 0 : mPreferredHal3MinorVersion;
char deviceName[kMaxCameraDeviceNameLen];
snprintf(deviceName, sizeof(deviceName), "device@%d.%d/legacy/%s",
versionMajor, versionMinor, cameraId.c_str());
return deviceName;
}
Return<void> CameraProvider::getCameraIdList(getCameraIdList_cb _hidl_cb) {
std::vector<hidl_string> deviceNameList;
for (auto const& deviceNamePair : mCameraDeviceNames) {
if (mCameraStatusMap[deviceNamePair.first] == CAMERA_DEVICE_STATUS_PRESENT) {
deviceNameList.push_back(deviceNamePair.second);
}
}
hidl_vec<hidl_string> hidlDeviceNameList(deviceNameList);
_hidl_cb(Status::OK, hidlDeviceNameList);
return Void();
}
Return<void> CameraProvider::getCameraDeviceInterface_V1_x(
const hidl_string& cameraDeviceName, getCameraDeviceInterface_V1_x_cb _hidl_cb) {
std::string cameraId, deviceVersion;
//check that cameraDeviceName is well formed
bool match = matchDeviceName(cameraDeviceName, &deviceVersion, &cameraId);
sp<android::hardware::camera::device::V1_0::implementation::CameraDevice> device =
new android::hardware::camera::device::V1_0::implementation::CameraDevice(mModule, cameraId, mCameraDeviceNames);
_hidl_cb (Status::OK, device);
return Void();
}
/hardware/interfaces/camera/common/1.0/default/CameraModule.cpp
CameraModule::CameraModule(camera_module_t *module) {
if (module == NULL) {
ALOGE("%s: camera hardware module must not be null", __FUNCTION__);
assert(0);
}
mModule = module;
}
int CameraModule::init() {
ATRACE_CALL();
int res = OK;
if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_4 &&
mModule->init != NULL) {
ATRACE_BEGIN("camera_module->init");
res = mModule->init();
ATRACE_END();
}
mCameraInfoMap.setCapacity(getNumberOfCameras());
return res;
}
int CameraModule::setCallbacks(const camera_module_callbacks_t *callbacks) {
int res = OK;
ATRACE_BEGIN("camera_module->set_callbacks");
if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_1) {
res = mModule->set_callbacks(callbacks);
}
ATRACE_END();
return res;
}
int CameraModule::getDeviceVersion(int cameraId) {
ssize_t index = mDeviceVersionMap.indexOfKey(cameraId);
if (index == NAME_NOT_FOUND) {
int deviceVersion;
if (getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_0) {
struct camera_info info;
getCameraInfo(cameraId, &info);
deviceVersion = info.device_version;
} else {
deviceVersion = CAMERA_DEVICE_API_VERSION_1_0;
}
index = mDeviceVersionMap.add(cameraId, deviceVersion);
}
assert(index != NAME_NOT_FOUND);
return mDeviceVersionMap[index];
}
uint16_t CameraModule::getModuleApiVersion() const {
return mModule->common.module_api_version;
}
int CameraModule::getCameraInfo(int cameraId, struct camera_info *info) {
...
int ret = mModule->get_camera_info(cameraId, &rawInfo);
*info = rawInfo;
...
}
/hardware/libhardware/include/hardware/hardware.h
#define HAL_MODULE_INFO_SYM HMI
#define HAL_MODULE_INFO_SYM_AS_STR "HMI"
/hardware/libhardware/hardware.c
#define HAL_LIBRARY_PATH1 "/system/lib/hw"
#define HAL_LIBRARY_PATH2 "/vendor/lib/hw"
#define HAL_LIBRARY_PATH3 "/odm/lib/hw"
int hw_get_module(const char *id, const struct hw_module_t **module)
{
return hw_get_module_by_class(id, NULL, module);
}
//here id is "camera", inst is NULL, and module receives the camera_module_t
int hw_get_module_by_class(const char *class_id, const char *inst,
const struct hw_module_t **module)
{
...
char prop[PATH_MAX] = {0};
char path[PATH_MAX] = {0};
char name[PATH_MAX] = {0};
char prop_name[PATH_MAX] = {0};
strlcpy(name, class_id, PATH_MAX);//copy the "camera" string into name
snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name); //read the ro.hardware.camera system property
if (property_get(prop_name, prop, NULL) > 0) {
if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
goto found;
}
}
...
found:
/* load the module, if this fails, we're doomed, and we should not try
* to load a different variant. */
//if it exists, load the library, e.g. path = /system/lib/hw/camera.sc8830.so
return load(class_id, path, module);
}
//check whether the HAL .so exists at each full path; on success path receives the full library path
static int hw_module_exists(char *path, size_t path_len, const char *name,
const char *subname)
{
snprintf(path, path_len, "%s/%s.%s.so",
HAL_LIBRARY_PATH3, name, subname);
if (access(path, R_OK) == 0)
return 0;
snprintf(path, path_len, "%s/%s.%s.so",
HAL_LIBRARY_PATH2, name, subname);
if (access(path, R_OK) == 0)
return 0;
snprintf(path, path_len, "%s/%s.%s.so",
HAL_LIBRARY_PATH1, name, subname);
if (access(path, R_OK) == 0)
return 0;
return -ENOENT;
}
static int load(const char *id,
const char *path,
const struct hw_module_t **pHmi)
{
...
void *handle = NULL;
handle = dlopen(path, RTLD_NOW);
const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
hmi = (struct hw_module_t *)dlsym(handle, sym);
//i.e. pHmi ends up pointing at the hw_module_t exported as HMI (HAL_MODULE_INFO_SYM) by e.g. camera.v4l2.so
*pHmi = hmi;
}
8. Camera HAL overview (camera.v4l2.so)
Library name: camera.v4l2.so
Source directory: /hardware/libhardware/modules/camera/3_4/
Key file: v4l2_camera_hal.cpp defines HAL_MODULE_INFO_SYM; V4L2Camera derives from default_camera_hal::Camera.
As the macro below shows, even though v4l2_camera_hal.cpp spells the symbol HAL_MODULE_INFO_SYM, it is ultimately emitted as HMI:
/hardware/libhardware/include/hardware/hardware.h
#define HAL_MODULE_INFO_SYM HMI
And the readelf query below (run here against a vendor HAL, camera.ais.so, as an example) confirms that the symbol dlsym can find in the .so is HMI, which is why CameraProvider
obtains the hw_module_t via hmi = (struct hw_module_t *)dlsym(handle, "HMI").
$ readelf camera.ais.so -s
Symbol table '.dynsym' contains 381 entries:
Num: Value Size Type Bind Vis Ndx Name
...
228: 0002ffac 176 OBJECT GLOBAL DEFAULT 22 HMI
...
8.1 HAL initialization
Each hardware vendor's camera HAL typically has a file like
v4l2_camera_hal.cpp that exposes the HAL's external interface to frameworks. That file uses
HAL_MODULE_INFO_SYM to define a camera_module_t structure; the camera provider service
finds the camera_module_t through HAL_MODULE_INFO_SYM and uses it to drive the camera HAL and the camera device.
v4l2_camera_hal.cpp is the sample HAL shipped with Android; each chip vendor rewrites the HAL logic.
HAL_MODULE_INFO_SYM in v4l2_camera_hal.cpp initializes the camera_module_t as follows:
module_api_version is CAMERA_MODULE_API_VERSION_2_4, i.e. per the macros HARDWARE_MODULE_API_VERSION(2, 4) => HARDWARE_MAKE_API_VERSION(maj, min);
hal_api_version is HARDWARE_HAL_API_VERSION, i.e. HARDWARE_MAKE_API_VERSION(1, 0).
Its function pointers come from v4l2_camera_hal, which delegates to gCameraHAL (a V4L2CameraHAL object).
When v4l2_camera_hal is loaded it creates the static V4L2CameraHAL object gCameraHAL.
The V4L2CameraHAL constructor V4L2CameraHAL::V4L2CameraHAL
walks the /dev nodes whose names start with video (e.g. /dev/video51), wraps each one in a V4L2Camera object via V4L2Camera::NewV4L2Camera, and appends it to the mCameras list.
V4L2Camera::NewV4L2Camera then does the following:
8.1.1 It constructs the v4l2_wrapper object via v4l2_wrapper(V4L2Wrapper::NewV4L2Wrapper(path)):
=> V4L2Wrapper::NewV4L2Wrapper(path)
creates a V4L2Gralloc via gralloc(V4L2Gralloc::NewV4L2Gralloc()) and then calls new V4L2Wrapper(device_path, std::move(gralloc));
=> V4L2Wrapper::V4L2Wrapper(device_path, std::move(gralloc)) // initializes
device_path_(std::move(device_path)),
gralloc_(std::move(gralloc)),
connection_count_(0)
8.1.2 It calls GetV4L2Metadata(v4l2_wrapper, &metadata) to obtain the camera's configuration: GetV4L2Metadata builds the default PartialMetadataSet of parameters the camera supports (read from the driver), wraps it in metadata, which is later stored in metadata_. The flow is:
GetV4L2Metadata call flow:
=> V4L2Wrapper::Connection(device) connects to the camera; device is the v4l2_wrapper wrapping path.
//open the device node, e.g. /dev/video51, returning a file descriptor
int fd = TEMP_FAILURE_RETRY(open(device_path_.c_str(), O_RDWR | O_NONBLOCK));
//store the descriptor in device_fd_
device_fd_.reset(fd);
//probe whether the device supports extended control queries (VIDIOC_QUERY_EXT_CTRL)
extended_query_supported_ = (IoctlLocked(VIDIOC_QUERY_EXT_CTRL, &query) == 0)
=> Query the camera's controls and install defaults, e.g.:
例如:components.insert(V4L2ControlOrDefault<uint8_t>(
ControlType::kMenu,
ANDROID_CONTROL_AE_MODE,
ANDROID_CONTROL_AE_AVAILABLE_MODES,
device,
V4L2_CID_EXPOSURE_AUTO,
std::shared_ptr<ConverterInterface<uint8_t, int32_t>>(
new EnumConverter(ae_mode_mapping)),
ANDROID_CONTROL_AE_MODE_ON,
{{CAMERA3_TEMPLATE_MANUAL, ANDROID_CONTROL_AE_MODE_OFF},
{OTHER_TEMPLATES, ANDROID_CONTROL_AE_MODE_ON}}))
V4L2ControlOrDefault calls V4L2Wrapper::QueryControl, which calls
IoctlLocked(VIDIOC_QUERYCTRL, &query) and ultimately ioctl(device_fd_.get(), request, data) to read the requested control configuration from the driver.
=> AddFormatComponents(device, std::inserter(components, components.end())) adds the format-related properties for the device:
1. GetHalFormats obtains the formats supported by the hardware camera
=> device->GetFormats(&v4l2_formats) => V4L2Wrapper::GetFormats(&v4l2_formats)
loops while (IoctlLocked(VIDIOC_ENUM_FMT, &format_query) >= 0), adding every format read from the driver to v4l2_formats.
2. device->GetFormatFrameSizes(v4l2_format, &frame_sizes) obtains the frame sizes.
3. The supported frame-duration (fps) range is then obtained for every frame size (a hedged V4L2 sketch of these two queries follows).
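A hedged sketch of the V4L2 ioctls such queries typically boil down to (VIDIOC_ENUM_FRAMESIZES / VIDIOC_ENUM_FRAMEINTERVALS); the helper is illustrative, not the actual GetFormatFrameSizes/GetFormatFrameDurationRange code:
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstdio>

void enumSizesAndIntervals(int fd, uint32_t pixelFormat) {
    v4l2_frmsizeenum size{};
    size.pixel_format = pixelFormat;
    for (size.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &size) == 0; size.index++) {
        if (size.type != V4L2_FRMSIZE_TYPE_DISCRETE) continue;
        std::printf("size %ux%u\n", size.discrete.width, size.discrete.height);

        v4l2_frmivalenum ival{};
        ival.pixel_format = pixelFormat;
        ival.width = size.discrete.width;
        ival.height = size.discrete.height;
        for (ival.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &ival) == 0;
             ival.index++) {
            if (ival.type != V4L2_FRMIVAL_TYPE_DISCRETE) continue;
            // Frame interval (seconds per frame) -> the frame-duration/fps data
            // that ends up in the stream-configuration metadata.
            std::printf("  interval %u/%u s\n",
                        ival.discrete.numerator, ival.discrete.denominator);
        }
    }
}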
8.1.3 It calls new V4L2Camera(id, std::move(v4l2_wrapper), std::move(metadata)) to construct the V4L2Camera object:
V4L2Camera::NewV4L2Camera returns the V4L2Camera instance, running the derived and base constructors.
V4L2Camera::V4L2Camera(int id, const std::string path): id auto-increments from 0, path is the /dev/video* node (e.g. /dev/video51);
metadata is stored in metadata_ and the V4L2Wrapper in device_.
Camera::Camera(int id) initializes mId and the camera3_device_t member mDevice.
As the Camera::Camera constructor shows (this walkthrough follows the HAL3 sources), mDevice.common.version is CAMERA_DEVICE_API_VERSION_3_4, so the camera_info.device_version read by CameraProvider is CAMERA_DEVICE_API_VERSION_3_4, i.e. per the macros HARDWARE_DEVICE_API_VERSION(3, 4) => HARDWARE_MAKE_API_VERSION(maj, min).
Summary of the version fields used in the HAL (see the small check below):
module_api_version: used by CameraProvider to verify that the module API version is acceptable; the initialized CAMERA_MODULE_API_VERSION_2_4 passes.
hal_api_version: not used by the upper layers so far.
camera_info.device_version: CameraService uses this deviceVersion to decide whether to create the HAL1 client object (CameraClient) or the HAL3 client object (Camera2Client).
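The version constants above are packed by HARDWARE_MAKE_API_VERSION with the major version in the upper byte and the minor version in the lower byte; a quick standalone check of the values referenced in this section:
#include <cstdint>
#include <cstdio>

constexpr uint16_t makeApiVersion(uint16_t maj, uint16_t min) {
    return static_cast<uint16_t>(((maj & 0xff) << 8) | (min & 0xff));
}

int main() {
    std::printf("CAMERA_MODULE_API_VERSION_2_4 = 0x%03x\n",
                static_cast<unsigned>(makeApiVersion(2, 4)));  // 0x204
    std::printf("CAMERA_DEVICE_API_VERSION_3_4 = 0x%03x\n",
                static_cast<unsigned>(makeApiVersion(3, 4)));  // 0x304
    std::printf("HARDWARE_HAL_API_VERSION      = 0x%03x\n",
                static_cast<unsigned>(makeApiVersion(1, 0)));  // 0x100
}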
8.2 Opening the camera in the HAL
As 4.2.2 shows, when the app calls camera.open, the CameraDevice implementation in the CameraProvider, android::hardware::camera::device::V1_0::implementation::CameraDevice, has its CameraDevice::open(const sp<ICameraDeviceCallback>& callback) method invoked,
i.e. CameraDevice::open in CameraDevice.cpp. That method
8.2.1 first calls mModule->getCameraInfo(mCameraIdInt, &info).
mModule is the camera_module_t (read from the HAL) passed in when the CameraDevice was created in 3.3.2,
so the call lands in V4L2CameraHAL::getCameraInfo in v4l2_camera_hal.cpp,
which calls Camera::getInfo in camera.cpp to fill the camera_info structure with raw_metadata, facing and orientation.
Parameter flow:
Camera::getInfo(struct camera_info *info) fills info from metadata_:
it calls loadStaticInfo(), i.e. Camera::loadStaticInfo(),
which calls initStaticInfo(static_metadata.get()) to fill static_metadata from metadata_;
V4L2Camera::initStaticInfo(android::CameraMetadata* out) calls metadata_->FillStaticMetadata(out) to fill out from metadata_;
then mStaticInfo.reset(StaticProperties::NewStaticProperties(std::move(static_metadata))) stores the data in mStaticInfo,
and the accessors copy mStaticInfo into info:
info->static_camera_characteristics = mStaticInfo->raw_metadata();
info->facing = mStaticInfo->facing();
info->orientation = mStaticInfo->orientation();
8.2.2 It then calls mModule->open(mCameraId.c_str(), (hw_device_t **)&mDevice) (or mModule->openLegacy(...) when a newer device is opened as a HAL1.0 device, see the code below).
CameraModule::open(const char* id, struct hw_device_t** device) calls mModule->common.methods->open(&mModule->common, id, device)
=>
i.e. V4L2CameraHAL::openDevice => Camera::openDevice(module, device), which
calls connect() to connect to the device (opening the /dev node),
assigns module to mDevice.common.module and returns &mDevice.common through device.
CameraDevice::open thus ends up holding the hw_device_t mDevice; when cameraservice later calls the preview entry CameraDevice::startPreview, it is through this same mDevice
that mDevice->ops->start_preview(mDevice) performs the preview.
/hardware/libhardware/modules/camera/3_4/v4l2_camera_hal.cpp
namespace v4l2_camera_hal {
// Default global camera hal.
static V4L2CameraHAL gCameraHAL;
V4L2CameraHAL::V4L2CameraHAL() : mCameras(), mCallbacks(NULL) {
HAL_LOG_ENTER();
// Adds all available V4L2 devices.
// List /dev nodes.
DIR* dir = opendir("/dev");
if (dir == NULL) {
HAL_LOGE("Failed to open /dev");
return;
}
// Find /dev/video* nodes.
dirent* ent;
std::vector<std::string> nodes;
while ((ent = readdir(dir))) {
std::string desired = "video";
size_t len = desired.size();
if (strncmp(desired.c_str(), ent->d_name, len) == 0) {
if (strlen(ent->d_name) > len && isdigit(ent->d_name[len])) {
// ent is a numbered video node.
nodes.push_back(std::string("/dev/") + ent->d_name);
HAL_LOGV("Found video node %s.", nodes.back().c_str());
}
}
}
// Test each for V4L2 support and uniqueness.
std::unordered_set<std::string> buses;
std::string bus;
v4l2_capability cap;
int fd;
int id = 0;
for (const auto& node : nodes) {
// Open the node and read its V4L2 capabilities (the open() call that fills
// fd is elided in this excerpt).
if (TEMP_FAILURE_RETRY(ioctl(fd, VIDIOC_QUERYCAP, &cap)) != 0) {
HAL_LOGE(
"VIDIOC_QUERYCAP on %s fail: %s.", node.c_str(), strerror(errno));
} else if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
HAL_LOGE("%s is not a V4L2 video capture device.", node.c_str());
} else {
// If the node is unique, add a camera for it.
bus = reinterpret_cast<char*>(cap.bus_info);
if (buses.insert(bus).second) {
HAL_LOGV("Found unique bus at %s.", node.c_str());
std::unique_ptr<V4L2Camera> cam(V4L2Camera::NewV4L2Camera(id++, node));
if (cam) {
mCameras.push_back(std::move(cam));
} else {
HAL_LOGE("Failed to initialize camera at %s.", node.c_str());
}
}
}
TEMP_FAILURE_RETRY(close(fd));
}
}
V4L2CameraHAL::~V4L2CameraHAL() {
}
int V4L2CameraHAL::getNumberOfCameras() {
return mCameras.size();
}
int V4L2CameraHAL::getCameraInfo(int id, camera_info_t* info) {
return mCameras[id]->getInfo(info);
}
int V4L2CameraHAL::setCallbacks(const camera_module_callbacks_t* callbacks) {
mCallbacks = callbacks;
return 0;
}
int V4L2CameraHAL::openDevice(const hw_module_t* module,
const char* name,
hw_device_t** device) {
// The camera id is parsed from the device name; parsing is elided in this excerpt.
return mCameras[id]->openDevice(module, device);
}
static int get_number_of_cameras() {
return gCameraHAL.getNumberOfCameras();
}
static int get_camera_info(int id, struct camera_info* info) {
return gCameraHAL.getCameraInfo(id, info);
}
static int set_callbacks(const camera_module_callbacks_t* callbacks) {
return gCameraHAL.setCallbacks(callbacks);
}
static int open_dev(const hw_module_t* module,
const char* name,
hw_device_t** device) {
return gCameraHAL.openDevice(module, name, device);
}
} // namespace v4l2_camera_hal
static hw_module_methods_t v4l2_module_methods = {
.open = v4l2_camera_hal::open_dev};
camera_module_t HAL_MODULE_INFO_SYM __attribute__((visibility("default"))) = {
.common =
{
.tag = HARDWARE_MODULE_TAG,
.module_api_version = CAMERA_MODULE_API_VERSION_2_4,
.hal_api_version = HARDWARE_HAL_API_VERSION,
.id = CAMERA_HARDWARE_MODULE_ID,
.name = "V4L2 Camera HAL v3",
.author = "The Android Open Source Project",
.methods = &v4l2_module_methods,
.dso = nullptr,
.reserved = {0},
},
.get_number_of_cameras = v4l2_camera_hal::get_number_of_cameras,
.get_camera_info = v4l2_camera_hal::get_camera_info,
.set_callbacks = v4l2_camera_hal::set_callbacks,
.get_vendor_tag_ops = v4l2_camera_hal::get_vendor_tag_ops,
.open_legacy = v4l2_camera_hal::open_legacy,
.set_torch_mode = v4l2_camera_hal::set_torch_mode,
.init = nullptr,
.reserved = {nullptr, nullptr, nullptr, nullptr, nullptr}};
The CameraDevice class file for HAL1:
/hardware/interfaces/camera/device/1.0/default/CameraDevice.cpp
Return<Status> CameraDevice::open(const sp<ICameraDeviceCallback>& callback) {
camera_info info;
status_t res = mModule->getCameraInfo(mCameraIdInt, &info);
int rc = OK;
if (mModule->getModuleApiVersion() >= CAMERA_MODULE_API_VERSION_2_3 &&
info.device_version > CAMERA_DEVICE_API_VERSION_1_0) {
// Open higher version camera device as HAL1.0 device.
rc = mModule->openLegacy(mCameraId.c_str(),
CAMERA_DEVICE_API_VERSION_1_0,
(hw_device_t **)&mDevice);
} else {
rc = mModule->open(mCameraId.c_str(), (hw_device_t **)&mDevice);
}
if (rc != OK) {
mDevice = nullptr;
ALOGE("Could not open camera %s: %d", mCameraId.c_str(), rc);
return getHidlStatus(rc);
}
initHalPreviewWindow();
mDeviceCallback = callback;
if (mDevice->ops->set_callbacks) {
mDevice->ops->set_callbacks(mDevice,
sNotifyCb, sDataCb, sDataCbTimestamp, sGetMemory, this);
}
return getHidlStatus(rc);
}
Return<Status> CameraDevice::startPreview() {
ALOGV("%s(%s)", __FUNCTION__, mCameraId.c_str());
Mutex::Autolock _l(mLock);
if (!mDevice) {
ALOGE("%s called while camera is not opened", __FUNCTION__);
return Status::OPERATION_NOT_SUPPORTED;
}
if (mDevice->ops->start_preview) {
return getHidlStatus(mDevice->ops->start_preview(mDevice));
}
return Status::INTERNAL_ERROR; // HAL should provide start_preview
}
Return<Status> CameraDevice::takePicture() {
ALOGV("%s(%s)", __FUNCTION__, mCameraId.c_str());
Mutex::Autolock _l(mLock);
if (!mDevice) {
ALOGE("%s called while camera is not opened", __FUNCTION__);
return Status::OPERATION_NOT_SUPPORTED;
}
if (mDevice->ops->take_picture) {
return getHidlStatus(mDevice->ops->take_picture(mDevice));
}
return Status::ILLEGAL_ARGUMENT;
}
hardware/libhardware/modules/camera/3_4/v4l2_camera.cpp
namespace v4l2_camera_hal {
// Helper function for managing metadata.
static std::vector<int32_t> getMetadataKeys(const camera_metadata_t* metadata) {
std::vector<int32_t> keys;
size_t num_entries = get_camera_metadata_entry_count(metadata);
for (size_t i = 0; i < num_entries; ++i) {
camera_metadata_ro_entry_t entry;
get_camera_metadata_ro_entry(metadata, i, &entry);
keys.push_back(entry.tag);
}
return keys;
}
V4L2Camera* V4L2Camera::NewV4L2Camera(int id, const std::string path) {
...
std::shared_ptr<V4L2Wrapper> v4l2_wrapper(V4L2Wrapper::NewV4L2Wrapper(path));
std::unique_ptr<Metadata> metadata;
int res = GetV4L2Metadata(v4l2_wrapper, &metadata);
return new V4L2Camera(id, std::move(v4l2_wrapper), std::move(metadata));
}
V4L2Camera::V4L2Camera(int id,
std::shared_ptr<V4L2Wrapper> v4l2_wrapper,
std::unique_ptr<Metadata> metadata)
: default_camera_hal::Camera(id),
device_(std::move(v4l2_wrapper)),
metadata_(std::move(metadata)),
max_input_streams_(0),
max_output_streams_({{0, 0, 0}}),
buffer_enqueuer_(new FunctionThread(
std::bind(&V4L2Camera::enqueueRequestBuffers, this))),
buffer_dequeuer_(new FunctionThread(
std::bind(&V4L2Camera::dequeueRequestBuffers, this))) {
HAL_LOG_ENTER();
}
V4L2Camera::~V4L2Camera() {
HAL_LOG_ENTER();
}
int V4L2Camera::connect() {
connection_.reset(new V4L2Wrapper::Connection(device_));
if (connection_->status()) {
HAL_LOGE("Failed to connect to device.");
return connection_->status();
}
// TODO(b/29185945): confirm this is a supported device.
// This is checked by the HAL, but the device at |device_|'s path may
// not be the same one that was there when the HAL was loaded.
// (Alternatively, better hotplugging support may make this unecessary
// by disabling cameras that get disconnected and checking newly connected
// cameras, so connect() is never called on an unsupported camera)
// TODO(b/29158098): Inform service of any flashes that are no longer
// available because this camera is in use.
return 0;
}
}
hardware/libhardware/modules/camera/3_4/camera.cpp
namespace default_camera_hal {
Camera::Camera(int id)
: mId(id),
mSettingsSet(false),
mBusy(false),
mCallbackOps(NULL),
mInFlightTracker(new RequestTracker)
{
memset(&mTemplates, 0, sizeof(mTemplates));
memset(&mDevice, 0, sizeof(mDevice));
mDevice.common.tag = HARDWARE_DEVICE_TAG;
mDevice.common.version = CAMERA_DEVICE_API_VERSION_3_4;
mDevice.common.close = close_device;
mDevice.ops = const_cast<camera3_device_ops_t*>(&sOps);
mDevice.priv = this;
}
int Camera::openDevice(const hw_module_t *module, hw_device_t **device)
{
ALOGI("%s:%d: Opening camera device", __func__, mId);
ATRACE_CALL();
android::Mutex::Autolock al(mDeviceLock);
if (mBusy) {
ALOGE("%s:%d: Error! Camera device already opened", __func__, mId);
return -EBUSY;
}
int connectResult = connect();
if (connectResult != 0) {
return connectResult;
}
mBusy = true;
mDevice.common.module = const_cast<hw_module_t*>(module);
*device = &mDevice.common;
return 0;
}
int Camera::getInfo(struct camera_info *info)
{
info->device_version = mDevice.common.version;
initDeviceInfo(info);
if (!mStaticInfo) {
int res = loadStaticInfo();
if (res) {
return res;
}
}
info->static_camera_characteristics = mStaticInfo->raw_metadata();
info->facing = mStaticInfo->facing();
info->orientation = mStaticInfo->orientation();
return 0;
}
}
hardware/libhardware/modules/camera/3_4/v4l2_wrapper.cpp
namespace v4l2_camera_hal {
V4L2Wrapper* V4L2Wrapper::NewV4L2Wrapper(const std::string device_path) {
std::unique_ptr<V4L2Gralloc> gralloc(V4L2Gralloc::NewV4L2Gralloc());
return new V4L2Wrapper(device_path, std::move(gralloc));
}
V4L2Wrapper::V4L2Wrapper(const std::string device_path,
std::unique_ptr<V4L2Gralloc> gralloc)
: device_path_(std::move(device_path)),
gralloc_(std::move(gralloc)),
connection_count_(0) {}
int V4L2Wrapper::Connect() {
std::lock_guard<std::mutex> lock(connection_lock_);
if (connected()) {
HAL_LOGV("Camera device %s is already connected.", device_path_.c_str());
++connection_count_;
return 0;
}
// Open in nonblocking mode (DQBUF may return EAGAIN).
int fd = TEMP_FAILURE_RETRY(open(device_path_.c_str(), O_RDWR | O_NONBLOCK));
device_fd_.reset(fd);
++connection_count_;
// Check if this connection has the extended control query capability.
v4l2_query_ext_ctrl query;
query.id = V4L2_CTRL_FLAG_NEXT_CTRL | V4L2_CTRL_FLAG_NEXT_COMPOUND;
extended_query_supported_ = (IoctlLocked(VIDIOC_QUERY_EXT_CTRL, &query) == 0);
return 0;
}
// Helper function. Should be used instead of ioctl throughout this class.
template <typename T>
int V4L2Wrapper::IoctlLocked(int request, T data) {
// Potentially called so many times logging entry is a bad idea.
std::lock_guard<std::mutex> lock(device_lock_);
return TEMP_FAILURE_RETRY(ioctl(device_fd_.get(), request, data));
}
int V4L2Wrapper::GetFormats(std::set<uint32_t>* v4l2_formats) {
v4l2_fmtdesc format_query;
memset(&format_query, 0, sizeof(format_query));
// TODO(b/30000211): multiplanar support.
format_query.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
while (IoctlLocked(VIDIOC_ENUM_FMT, &format_query) >= 0) {
v4l2_formats->insert(format_query.pixelformat);
++format_query.index;
}
return 0;
}
}
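GetFormats() is a thin wrapper over the standard V4L2 VIDIOC_ENUM_FMT loop: keep incrementing index until the driver rejects it. The standalone program below shows the same protocol directly against a device node (the /dev/video0 path is an assumption for illustration):
#include <fcntl.h>
#include <linux/videodev2.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>
int main() {
  int fd = open("/dev/video0", O_RDWR | O_NONBLOCK);
  if (fd < 0) {
    perror("open");
    return 1;
  }
  v4l2_fmtdesc desc;
  memset(&desc, 0, sizeof(desc));
  desc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  // Enumerate until the driver returns an error for an out-of-range index.
  while (ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0) {
    printf("format 0x%08x: %s\n", desc.pixelformat,
           reinterpret_cast<const char*>(desc.description));
    ++desc.index;
  }
  close(fd);
  return 0;
}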
hardware/libhardware/modules/camera/3_4/v4l2_metadata_factory.cpp
namespace v4l2_camera_hal {
int GetV4L2Metadata(std::shared_ptr<V4L2Wrapper> device,
std::unique_ptr<Metadata>* result) {
V4L2Wrapper::Connection temp_connection = V4L2Wrapper::Connection(device);
components.insert(FixedState<uint8_t>(ANDROID_CONTROL_AF_STATE,
ANDROID_CONTROL_AF_STATE_INACTIVE));
components.insert(V4L2ControlOrDefault<uint8_t>(
ControlType::kMenu,
ANDROID_CONTROL_AE_MODE,
ANDROID_CONTROL_AE_AVAILABLE_MODES,
device,
V4L2_CID_EXPOSURE_AUTO,
std::shared_ptr<ConverterInterface<uint8_t, int32_t>>(
new EnumConverter(ae_mode_mapping)),
ANDROID_CONTROL_AE_MODE_ON,
{{CAMERA3_TEMPLATE_MANUAL, ANDROID_CONTROL_AE_MODE_OFF},
{OTHER_TEMPLATES, ANDROID_CONTROL_AE_MODE_ON}}));
...
// Populate the metadata components that describe the device's properties
int res = AddFormatComponents(device, std::inserter(components, components.end()));
*result = std::make_unique<Metadata>(std::move(components));
}
}
hardware/libhardware/modules/camera/3_4/format_metadata_factory.cpp
int AddFormatComponents(
std::shared_ptr<V4L2Wrapper> device,
std::insert_iterator<PartialMetadataSet> insertion_point) {
// 1. Query the hardware camera for all supported formats.
std::set<int32_t> hal_formats;
int res = GetHalFormats(device, &hal_formats);
// Requirements check: the HAL must support YCbCr_420_888 and JPEG (BLOB).
// 2. Get the available frame sizes for this format.
std::set<std::array<int32_t, 2>> frame_sizes;
res = device->GetFormatFrameSizes(v4l2_format, &frame_sizes);
if (res) {
HAL_LOGE("Failed to get all frame sizes for format %d", v4l2_format);
return res;
}
// 3. For each supported frame size, query the frame duration (fps) range.
for (const auto& frame_size : frame_sizes) {
// Note the format and size combination in stream configs.
stream_configs.push_back(
{{hal_format,
frame_size[0],
frame_size[1],
ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_OUTPUT}});
// Find the duration range for this format and size.
std::array<int64_t, 2> duration_range;
res = device->GetFormatFrameDurationRange(
v4l2_format, frame_size, &duration_range);
if (res) {
HAL_LOGE(
"Failed to get frame duration range for format %d, "
"size %u x %u",
v4l2_format,
frame_size[0],
frame_size[1]);
return res;
}
int64_t size_min_frame_duration = duration_range[0];
int64_t size_max_frame_duration = duration_range[1];
min_frame_durations.push_back({{hal_format,
frame_size[0],
frame_size[1],
size_min_frame_duration}});
// Note the stall duration for this format and size.
// Usually 0 for non-jpeg, non-zero for JPEG.
// Randomly choosing absurd 1 sec for JPEG. Unsure what this breaks.
int64_t stall_duration = 0;
if (hal_format == HAL_PIXEL_FORMAT_BLOB) {
stall_duration = 1000000000;
}
stall_durations.push_back(
{{hal_format, frame_size[0], frame_size[1], stall_duration}});
// Update our search for general min & max frame durations.
// In theory max frame duration (min frame rate) should be consistent
// between all formats, but we check and only advertise the smallest
// available max duration just in case.
if (size_max_frame_duration < min_max_frame_duration) {
min_max_frame_duration = size_max_frame_duration;
}
// We only care about the largest min frame duration
// (smallest max frame rate) for YUV sizes.
if (hal_format == HAL_PIXEL_FORMAT_YCbCr_420_888 &&
size_min_frame_duration > max_min_frame_duration_yuv) {
max_min_frame_duration_yuv = size_min_frame_duration;
}
}
}
// Convert from frame durations measured in ns.
// Min fps supported by all formats.
int32_t min_fps = 1000000000 / min_max_frame_duration;
if (min_fps > 15) {
HAL_LOGE("Minimum FPS %d is larger than HAL max allowable value of 15",
min_fps);
return -ENODEV;
}
// Max fps supported by all YUV formats.
int32_t max_yuv_fps = 1000000000 / max_min_frame_duration_yuv;
// ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES should be at minimum
// {mi, ma}, {ma, ma} where mi and ma are min and max frame rates for
// YUV_420_888. Min should be at most 15.
std::vector<std::array<int32_t, 2>> fps_ranges;
fps_ranges.push_back({{min_fps, max_yuv_fps}});
std::array<int32_t, 2> video_fps_range;
int32_t video_fps = 30;
if (video_fps >= max_yuv_fps) {
video_fps_range = {{max_yuv_fps, max_yuv_fps}};
} else {
video_fps_range = {{video_fps, video_fps}};
}
fps_ranges.push_back(video_fps_range);
// Construct the metadata components.
insertion_point = std::make_unique<Property<ArrayVector<int32_t, 4>>>(
ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS,
std::move(stream_configs));
insertion_point = std::make_unique<Property<ArrayVector<int64_t, 4>>>(
ANDROID_SCALER_AVAILABLE_MIN_FRAME_DURATIONS,
std::move(min_frame_durations));
insertion_point = std::make_unique<Property<ArrayVector<int64_t, 4>>>(
ANDROID_SCALER_AVAILABLE_STALL_DURATIONS, std::move(stall_durations));
insertion_point = std::make_unique<Property<int64_t>>(
ANDROID_SENSOR_INFO_MAX_FRAME_DURATION, min_max_frame_duration);
// TODO(b/31019725): This should probably not be a NoEffect control.
insertion_point = NoEffectMenuControl<std::array<int32_t, 2>>(
ANDROID_CONTROL_AE_TARGET_FPS_RANGE,
ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES,
fps_ranges,
{{CAMERA3_TEMPLATE_VIDEO_RECORD, video_fps_range},
{OTHER_TEMPLATES, fps_ranges[0]}});
return 0;
}
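The tail of AddFormatComponents converts frame durations (nanoseconds per frame) into fps and builds the advertised AE target fps ranges. A small worked example with assumed durations makes the arithmetic concrete: a 66,666,666 ns max duration gives 15 fps, a 33,333,333 ns YUV min duration gives 30 fps, so the advertised ranges become {15, 30} and {30, 30}.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>
int main() {
  // Assumed example inputs: the largest max frame duration across formats and
  // the largest min frame duration across YUV sizes, both in nanoseconds.
  int64_t min_max_frame_duration = 66666666;      // slowest rate any format allows
  int64_t max_min_frame_duration_yuv = 33333333;  // fastest rate all YUV sizes allow
  int32_t min_fps = static_cast<int32_t>(1000000000 / min_max_frame_duration);          // 15
  int32_t max_yuv_fps = static_cast<int32_t>(1000000000 / max_min_frame_duration_yuv);  // 30
  std::vector<std::array<int32_t, 2>> fps_ranges;
  fps_ranges.push_back({{min_fps, max_yuv_fps}});  // {15, 30}
  const int32_t video_fps = 30;
  if (video_fps >= max_yuv_fps) {
    fps_ranges.push_back({{max_yuv_fps, max_yuv_fps}});  // {30, 30}
  } else {
    fps_ranges.push_back({{video_fps, video_fps}});
  }
  for (const auto& range : fps_ranges) {
    printf("[%d, %d]\n", range[0], range[1]);
  }
  return 0;
}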
二、Camera client side (libcamera_client.so)
1. Camera client source code
/frameworks/av/camera/include/camera/Camera.h
template <>
struct CameraTraits<Camera>
{
typedef CameraListener TCamListener;
typedef ::android::hardware::ICamera TCamUser;
typedef ::android::hardware::ICameraClient TCamCallbacks;
typedef ::android::binder::Status(::android::hardware::ICameraService::*TCamConnectService)
(const sp<::android::hardware::ICameraClient>&,
int, const String16&, int, int,
/*out*/
sp<::android::hardware::ICamera>*);
static TCamConnectService fnConnectService;
};
class Camera :
public CameraBase<Camera>,
public ::android::hardware::BnCameraClient
/frameworks/av/camera/include/camera/CameraBase.h
template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
class CameraBase : public IBinder::DeathRecipient
{
...
typedef typename TCamTraits::TCamUser TCamUser;
sp<TCamUser> mCamera;
...
}
From /frameworks/av/camera/Android.bp, the build script for the client-side .so library, we can see that the Camera client also references /frameworks/av/camera/aidl/android/hardware/ICameraService.aidl.
Camera::connect() // the underlying template method is defined in frameworks/av/camera/CameraBase.cpp
template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId,
const String16& clientPackageName,
int clientUid, int clientPid)
{
ALOGV("%s: connect", __FUNCTION__);
sp<TCam> c = new TCam(cameraId);
sp<TCamCallbacks> cl = c;
const sp<::android::hardware::ICameraService> cs = getCameraService();
binder::Status ret;
if (cs != nullptr) {
TCamConnectService fnConnectService = TCamTraits::fnConnectService;
ret = (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid,
clientPid, /*out*/ &c->mCamera);
}
if (ret.isOk() && c->mCamera != nullptr) {
IInterface::asBinder(c->mCamera)->linkToDeath(c);
c->mStatus = NO_ERROR;
} else {
ALOGW("An error occurred while connecting to camera %d: %s", cameraId,
(cs == nullptr) ? "Service not available" : ret.toString8().string());
c.clear();
}
return c;
}
template <typename TCam, typename TCamTraits>
const sp<::android::hardware::ICameraService> CameraBase<TCam, TCamTraits>::getCameraService()
{
Mutex::Autolock _l(gLock);
if (gCameraService.get() == 0) {
char value[PROPERTY_VALUE_MAX];
property_get("config.disable_cameraservice", value, "0");
if (strncmp(value, "0", 2) != 0 && strncasecmp(value, "false", 6) != 0) {
return gCameraService;
}
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
binder = sm->getService(String16(kCameraServiceName)); // look up the Binder object registered under the camera service name
if (binder != 0) {
break;
}
ALOGW("CameraService not published, waiting...");
usleep(kCameraServicePollDelay);
} while(true);
if (gDeathNotifier == NULL) {
gDeathNotifier = new DeathNotifier();
}
binder->linkToDeath(gDeathNotifier);
gCameraService = interface_cast<::android::hardware::ICameraService>(binder); // convert the Binder into an ICameraService proxy
}
ALOGE_IF(gCameraService == 0, "no CameraService!?");
return gCameraService;
}
2. Opening the camera from the application layer
For HAL1, an application-layer call to Camera.open eventually reaches the ::connect method in CameraBase.cpp on the Camera client side.
2.1 connect() first calls cs = getCameraService(), which looks up the Binder object that the camera service registered with ServiceManager and converts it into an ICameraService proxy assigned to cs.
It then calls (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid, clientPid, /*out*/ &c->mCamera),
invoking the ICameraService member function stored in fnConnectService (i.e. TCamTraits::fnConnectService); the following definition in Camera.cpp shows that it points to &ICameraService::connect:
frameworks/av/camera/Camera.cpp
CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService = &ICameraService::connect; // so the call resolves to ICameraService::connect
That is, the call ultimately invokes CameraService::connect on the server side. The first argument cl wraps new TCam(cameraId) and is declared as sp<TCamCallbacks>.
Since Camera inherits CameraBase<Camera>, the class CameraBase definition tells us that TCam is Camera and TCamTraits is CameraTraits<Camera>.
Because of typedef typename TCamTraits::TCamCallbacks TCamCallbacks;, TCamCallbacks is CameraTraits<Camera>::TCamCallbacks,
and the struct CameraTraits definition in Camera.h shows that CameraTraits<Camera>::TCamCallbacks is ::android::hardware::ICameraClient,
so TCamCallbacks ultimately resolves to ::android::hardware::ICameraClient (TCam itself remains Camera).
Because Camera also inherits ::android::hardware::BnCameraClient, a Camera object can be assigned to sp<TCamCallbacks>, i.e. sp<::android::hardware::ICameraClient>.
The call to CameraService::connect therefore passes the client-side variable cl, the Camera object (as an ::android::hardware::ICameraClient), to cameraservice over Binder, which is how Camera receives event callbacks from cameraservice.
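The expression (cs.get()->*fnConnectService)(...) is plain C++ pointer-to-member-function syntax: the traits struct stores which ICameraService method to call, and CameraBase invokes it on the proxy. A minimal standalone sketch of that mechanism, with illustrative stand-in types rather than the real AOSP classes:
#include <cstdio>
#include <memory>
// Illustrative stand-ins for ICameraService / CameraTraits.
struct FakeCameraService {
  int connect(int cameraId) {
    printf("connect(%d)\n", cameraId);
    return 0;
  }
};
struct FakeTraits {
  using ConnectFn = int (FakeCameraService::*)(int);
  static ConnectFn fnConnectService;
};
FakeTraits::ConnectFn FakeTraits::fnConnectService = &FakeCameraService::connect;
int main() {
  std::shared_ptr<FakeCameraService> cs = std::make_shared<FakeCameraService>();
  FakeTraits::ConnectFn fn = FakeTraits::fnConnectService;
  // Same shape as (cs.get()->*fnConnectService)(cl, cameraId, ...) in CameraBase.
  return (cs.get()->*fn)(/*cameraId*/ 0);
}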
2.2 At the same time, cameraservice fills in the out parameter /*out*/ &c->mCamera; c->mCamera is the client's handle to cameraservice, through which the client can call the service's interfaces.
In CameraBase<TCam, TCamTraits>::connect, cl = c, i.e. c is the Camera object.
In CameraBase.h, mCamera is declared as sp<TCamUser>, so &c->mCamera corresponds to sp<TCamUser>.
In CameraBase.h, typedef typename TCamTraits::TCamUser TCamUser; so sp<TCamUser> is sp<TCamTraits::TCamUser>.
In CameraBase.h, template <typename TCam, typename TCamTraits = CameraTraits<TCam> >, so sp<TCamTraits::TCamUser> is sp<CameraTraits<Camera>::TCamUser>.
In Camera.h, struct CameraTraits declares typedef ::android::hardware::ICamera TCamUser; so sp<CameraTraits<Camera>::TCamUser> is sp<::android::hardware::ICamera>.
In other words, &c->mCamera corresponds to sp<::android::hardware::ICamera>, and since CameraService::Client inherits hardware::BnCamera,
the client can use c->mCamera to issue Binder calls into the interfaces implemented by CameraService::Client.
This completes bidirectional Binder communication between the camera client and the server-side CameraService.
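As a hedged usage sketch of this bidirectional link: the client drives the service through the sp<ICamera> proxy it got back, while the service drives the client through the ICameraClient it was handed. The overload, constants, and package name below are assumptions chosen to match the CameraBase::connect shown above, not a verified app snippet.
#include <camera/Camera.h>
#include <utils/String16.h>
using namespace android;
void openAndPreview() {
  // -1 is commonly treated as "use the calling UID/PID"; treat the exact
  // values and the package name as illustrative assumptions.
  sp<Camera> camera = Camera::connect(/*cameraId*/ 0,
                                      String16("com.example.app"),
                                      /*clientUid*/ -1,
                                      /*clientPid*/ -1);
  if (camera.get() == nullptr) {
    return;  // connect failed; CameraBase::connect already logged why
  }
  // Outbound: goes through mCamera (the ICamera/BpCamera proxy) into
  // CameraService::Client on the service side.
  camera->startPreview();
  // Inbound: CameraService::Client talks back through the ICameraClient that
  // connect() sent across Binder; Camera forwards it to its CameraListener.
}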