Camera: createCaptureSession

1. createCaptureSession parameter analysis

1.1. As covered in the previous article, once openCamera() succeeds, the framework notifies the CameraDevice.StateCallback:

        public void onOpened(CameraDevice camera) {
        	// call createCaptureSession() here
        }
1.2. The first parameter of createCaptureSession() is a list of Surfaces, where each Surface represents one output stream.
One output stream is needed per display/consumer target.
For still capture there are two output streams: one for preview and one for taking the picture.
For video recording there are likewise two: one for preview and one for recording the video.
        public void onOpened(CameraDevice camera) {
            try {
                mCameraDevice = camera;
                SurfaceTexture surfaceTexture = mTextureView.getSurfaceTexture();
                surfaceTexture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());
                Surface previewSurface = new Surface(surfaceTexture);
                mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                mPreviewBuilder.addTarget(previewSurface);
                mPreviewBuilder.addTarget(mImageReader.getSurface());
                // previewSurface is used for preview, mImageReader.getSurface() for still capture
                mCameraDevice.createCaptureSession(Arrays.asList(previewSurface, mImageReader.getSurface()), mStateCallBack, mCameraHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
            LogUtil.d("mStateCallback----onOpened---");
        }
1.3. The second parameter of createCaptureSession() is a CameraCaptureSession.StateCallback.
	Once the session has been created successfully, the framework hands us a CameraCaptureSession object
	through the callback's public abstract void onConfigured(@NonNull CameraCaptureSession session) method.
    private CameraCaptureSession.StateCallback mStateCallBack = new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            try {
//                session.capture(request, mSessionCaptureCallback, mCameraHandler);
                mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                CaptureRequest request = mPreviewBuilder.build();
                // Finally, we start displaying the camera preview.
                session.setRepeatingRequest(request, null, mCameraHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession session) {

        }
    };
1.4. The third parameter of createCaptureSession() is the Handler on which the StateCallback methods are invoked; in this example it is the application's main-thread handler.
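Passing null for this parameter makes the callbacks run on the calling thread's Looper; a common alternative is a dedicated background thread so camera callbacks never block the UI. Below is a rough plain-Java sketch of that idea, where a single-thread executor stands in for Android's HandlerThread (all names here are illustrative, not framework API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class CallbackThreadSketch {
    // One dedicated thread that serializes all camera callbacks (stand-in for HandlerThread).
    static final ExecutorService cameraExecutor = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "CameraBackground");
        t.setDaemon(true); // don't keep the process alive just for callbacks
        return t;
    });

    // Run a callback on the camera thread and report which thread executed it.
    static String dispatchOnCameraThread(Runnable callback) {
        AtomicReference<String> ranOn = new AtomicReference<>();
        try {
            cameraExecutor.submit(() -> {
                ranOn.set(Thread.currentThread().getName());
                callback.run();
            }).get(); // block until done (for demonstration only)
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return ranOn.get();
    }
}
```

The real framework does the equivalent by looping a Handler over a HandlerThread's Looper; the point is only that the thread passed here is where onConfigured()/onConfigureFailed() run.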

2. createCaptureSession flow analysis

frameworks\base\core\java\android\hardware\camera2\impl\CameraDeviceImpl.java

    @Override
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        // Convert each Surface into an OutputConfiguration. OutputConfiguration implements Parcelable
        // and is mainly used to pass these parameters across the process boundary.
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }
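The wrapping loop is mechanical: one OutputConfiguration per Surface, order preserved, ready for Binder transport. Modeled in plain Java with hypothetical stand-in types (the real classes are android.view.Surface and android.hardware.camera2.params.OutputConfiguration):

```java
import java.util.ArrayList;
import java.util.List;

public class OutputConfigSketch {
    // Hypothetical stand-ins for Surface and OutputConfiguration.
    static class FakeSurface {
        final String name;
        FakeSurface(String n) { name = n; }
    }
    static class FakeOutputConfiguration {
        final FakeSurface surface;
        FakeOutputConfiguration(FakeSurface s) { surface = s; }
    }

    // Mirrors the loop in CameraDeviceImpl.createCaptureSession():
    // wrap each Surface so it can cross the Binder boundary.
    static List<FakeOutputConfiguration> wrap(List<FakeSurface> outputs) {
        List<FakeOutputConfiguration> configs = new ArrayList<>(outputs.size());
        for (FakeSurface s : outputs) {
            configs.add(new FakeOutputConfiguration(s));
        }
        return configs;
    }
}
```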


	
  private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams) throws CameraAccessException {
        synchronized(mInterfaceLock) {
            if (DEBUG) {
                Log.d(TAG, "createCaptureSessionInternal");
            }
			// Check that the camera is still usable
            checkIfCameraClosedOrInError();
			....... some code omitted .........
            // TODO: dont block for this
            boolean configureSuccess = true;
            CameraAccessException pendingException = null;
            Surface input = null;
            try {
                // configure streams and then block until IDLE
                // This is the key step: configure the input and output. Since inputConfig is null here,
                // only output streams are configured, i.e. the data streams flowing from the camera HAL up to the application.
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        operatingMode, sessionParams);
               
            } catch (CameraAccessException e) {
              
            }
           	....... some code omitted .........
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
           
			....... some code omitted .........
            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }


    public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams)
            throws CameraAccessException {

			....... some code omitted .........
            try {
            	/* Everything between beginConfigure() and endConfigure(operatingMode, null) is an
            	* IPC notification telling the service side that input/output streams are being
            	* configured. Once mRemoteDevice.endConfigure(operatingMode, null) completes,
            	* success is set to true; if the sequence is interrupted partway, success stays false.
            	* mRemoteDevice is the remote proxy of CameraDeviceClient, and
            	* CameraDeviceClient's beginConfigure is an empty implementation.
                * mRemoteDevice.beginConfigure();
                */
 			....... some code omitted .........
                // Add all new streams
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                    	// Create the output streams. We passed in two Surfaces, so two streams are created here:
                    	// one for preview and one for still capture
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }

                if (sessionParams != null) {
                    mRemoteDevice.endConfigure(operatingMode, sessionParams.getNativeCopy());
                } else {
                    mRemoteDevice.endConfigure(operatingMode, null);
                }

                success = true;
            } catch (IllegalArgumentException e) {
               	....... some code omitted .........
            } finally {
                if (success && outputs.size() > 0) {
                    mDeviceExecutor.execute(mCallOnIdle);
                } else {
                    mDeviceExecutor.execute(mCallOnUnconfigured);
                }
            }
        }

        return success;
    }
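Stripped of the elided details, configureStreamsChecked() is a begin/end transaction: only a complete beginConfigure → createStream* → endConfigure sequence sets success, and the finally block always posts exactly one device-state callback. A condensed plain-Java model of that shape (the interface and names are stand-ins, not the real Binder types):

```java
import java.util.List;

public class ConfigureSketch {
    // Stand-in for the mRemoteDevice Binder proxy.
    interface RemoteDevice {
        void beginConfigure();
        int createStream(String outConfig);   // returns a stream id
        void endConfigure(int operatingMode);
    }

    // Returns true only if the whole begin -> createStream* -> end sequence completed.
    static boolean configureStreamsChecked(RemoteDevice remote, List<String> outputs,
                                           int operatingMode, List<String> stateLog) {
        boolean success = false;
        try {
            remote.beginConfigure();
            for (String outConfig : outputs) {
                remote.createStream(outConfig);
            }
            remote.endConfigure(operatingMode);
            success = true;
        } catch (RuntimeException e) {
            // error handling elided in the original is swallowed here too
        } finally {
            // Mirrors mDeviceExecutor.execute(mCallOnIdle / mCallOnUnconfigured)
            stateLog.add(success && !outputs.isEmpty() ? "onIdle" : "onUnconfigured");
        }
        return success;
    }
}
```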

frameworks\av\services\camera\libcameraservice\api2\CameraDeviceClient.cpp


binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {
    ATRACE_CALL();
   ....... some code omitted .........

	/* outputConfiguration was converted from a Surface, and a Surface holds an IGraphicBufferProducer.
	* In the ImageReader constructor there is mSurface = nativeGetSurface();
	* nativeGetSurface() is ImageReader_getSurface(JNIEnv* env, jobject thiz) in
	* frameworks\base\media\jni\android_media_ImageReader.cpp, whose implementation is:
	*
	* IGraphicBufferProducer* gbp = ImageReader_getProducer(env, thiz);
	* return android_view_Surface_createFromIGraphicBufferProducer(env, gbp);
	*
	* android_view_Surface_createFromIGraphicBufferProducer is implemented in
	* frameworks\base\core\jni\android_view_Surface.cpp:
	*
	* jobject android_view_Surface_createFromIGraphicBufferProducer(JNIEnv* env,
    *    const sp<IGraphicBufferProducer>& bufferProducer) {
    *     sp<Surface> surface(new Surface(bufferProducer, true));
    *     return android_view_Surface_createFromSurface(env, surface);
	* }
	*
	* So a Surface holds the buffer-producer end of the queue.
	*/
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    size_t numBufferProducers = bufferProducers.size();
    
	....... some code omitted .........
    for (auto& bufferProducer : bufferProducers) {
        // Don't create multiple streams for the same target surface
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        ssize_t index = mStreamMap.indexOfKey(binder);
        if (index != NAME_NOT_FOUND) {
            String8 msg = String8::format("Camera %s: Surface already has a stream created for it "
                    "(ID %zd)", mCameraIdStr.string(), index);
            ALOGW("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
        }

		// A new Surface is created here again. This was puzzling at first - the application layer
		// already created a Surface, so why rebuild it? The answer is that the Java-side
		// public final class OutputConfiguration implements Parcelable and the C++-side
		// class OutputConfiguration : public android::Parcelable are implemented differently:
		// the C++ version does not carry the Java side's private ArrayList<Surface> mSurfaces,
		// only the IGraphicBufferProducers, so a native Surface has to be rebuilt here.
        sp<Surface> surface;
        res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer);

		....... some code omitted .........
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }

    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    // mDevice is the Camera3Device
    err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
            streamInfo.height, streamInfo.format, streamInfo.dataSpace,
            static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
            &streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
            isShared);

    if (err != OK) {
        res = STATUS_ERROR_FMT(CameraService::ERROR_INVALID_OPERATION,
                "Camera %s: Error creating output stream (%d x %d, fmt %x, dataSpace %x): %s (%d)",
                mCameraIdStr.string(), streamInfo.width, streamInfo.height, streamInfo.format,
                streamInfo.dataSpace, strerror(-err), err);
    } else {
        int i = 0;
        for (auto& binder : binders) {
            ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
                    __FUNCTION__, binder.get(), streamId, i);
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
        }

		// mConfiguredOutputs maps streamId -> outputConfiguration
		// mStreamInfoMap maps streamId -> streamInfo
        mConfiguredOutputs.add(streamId, outputConfiguration);
        mStreamInfoMap[streamId] = streamInfo;

        ALOGV("%s: Camera %s: Successfully created a new stream ID %d for output surface"
                    " (%d x %d) with format 0x%x.",
                  __FUNCTION__, mCameraIdStr.string(), streamId, streamInfo.width,
                  streamInfo.height, streamInfo.format);

        // Set transform flags to ensure preview to be rotated correctly.
        res = setStreamTransformLocked(streamId);

        *newStreamId = streamId;
    }

    return res;
}
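One detail worth calling out from the loop above: streams are keyed by the producer's binder (mStreamMap.indexOfKey(binder)), so asking for a second stream on the same Surface is rejected with ERROR_ALREADY_EXISTS. A plain-Java model of that dedup check, with a HashMap standing in for the native KeyedVector and any Object standing in for the binder:

```java
import java.util.HashMap;
import java.util.Map;

public class StreamMapSketch {
    // Stand-in for mStreamMap: keys are the producer binder identities.
    static final Map<Object, Integer> streamMap = new HashMap<>();
    static int nextStreamId = 0;

    // Returns the new stream id, or -1 if this producer already has a stream
    // (the real code returns STATUS_ERROR with ERROR_ALREADY_EXISTS).
    static int createStream(Object bufferProducerBinder) {
        if (streamMap.containsKey(bufferProducerBinder)) {
            return -1;
        }
        int id = nextStreamId++;
        streamMap.put(bufferProducerBinder, id);
        return id;
    }
}
```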


// A new Surface is created, but it wraps the same IGraphicBufferProducer as the application-side Surface
binder::Status CameraDeviceClient::createSurfaceFromGbp(
        OutputStreamInfo& streamInfo, bool isStreamInfoValid,
        sp<Surface>& surface, const sp<IGraphicBufferProducer>& gbp) {
	...... some code omitted ...........
    surface = new Surface(gbp, useAsync);
    ANativeWindow *anw = surface.get();
    ...... some code omitted ...........
    return binder::Status::ok();
}

frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp


status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
        const String8& physicalCameraId,        
        std::vector<int> *surfaceIds, int streamSetId, bool isShared, uint64_t consumerUsage) {
        
    	...... some code omitted ...........
    	
    	// Depending on the pixel format requested by the application, a different kind of output stream is created below
    if (format == HAL_PIXEL_FORMAT_BLOB) {
        ssize_t blobBufferSize;
        if (dataSpace != HAL_DATASPACE_DEPTH) {
            blobBufferSize = getJpegBufferSize(width, height);
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid jpeg buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        } else {
            blobBufferSize = getPointCloudBufferSize();
            if (blobBufferSize <= 0) {
                SET_ERR_L("Invalid point cloud buffer size %zd", blobBufferSize);
                return BAD_VALUE;
            }
        }
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, blobBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
        ssize_t rawOpaqueBufferSize = getRawOpaqueBufferSize(width, height);
        if (rawOpaqueBufferSize <= 0) {
            SET_ERR_L("Invalid RAW opaque buffer size %zd", rawOpaqueBufferSize);
            return BAD_VALUE;
        }
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (isShared) {
        newStream = new Camera3SharedOutputStream(mNextStreamId, consumers,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else if (consumers.size() == 0 && hasDeferredConsumer) {
        newStream = new Camera3OutputStream(mNextStreamId,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    } else {
    	// A typical YUV_420_888 request lands here. Note that consumers[0] is the Surface passed in
    	// from above; all of the camera's later producer/consumer buffer traffic is built on this Surface.
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, streamSetId);
    }

    size_t consumerCount = consumers.size();
    for (size_t i = 0; i < consumerCount; i++) {
        int id = newStream->getSurfaceId(consumers[i]);
        if (id < 0) {
            SET_ERR_L("Invalid surface id");
            return BAD_VALUE;
        }
        if (surfaceIds != nullptr) {
            surfaceIds->push_back(id);
        }
    }

    newStream->setStatusTracker(mStatusTracker);

    newStream->setBufferManager(mBufferManager);

    res = mOutputStreams.add(mNextStreamId, newStream);
    if (res < 0) {
        SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
        return res;
    }

    *id = mNextStreamId++;
    mNeedConfig = true;

    // Continue captures if active at start
    if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        // Reuse current operating mode and session parameters for new stream config
        res = configureStreamsLocked(mOperatingMode, mSessionParams);
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
    ALOGV("Camera %s: Created new stream", mId.string());
    return OK;
}
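The branch ladder above reduces to a format/shape dispatch. A compact plain-Java model of the decision (the two format constants match the AOSP pixel-format values; everything else is a stand-in):

```java
public class StreamKindSketch {
    static final int HAL_PIXEL_FORMAT_BLOB = 0x21;       // JPEG / depth point cloud
    static final int HAL_PIXEL_FORMAT_RAW_OPAQUE = 0x24;

    // Mirrors the if/else ladder in Camera3Device::createStream().
    static String pickStreamKind(int format, boolean isShared,
                                 int consumerCount, boolean hasDeferredConsumer) {
        if (format == HAL_PIXEL_FORMAT_BLOB) return "blob";
        if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) return "rawOpaque";
        if (isShared) return "shared";                        // Camera3SharedOutputStream
        if (consumerCount == 0 && hasDeferredConsumer) return "deferred";
        return "default";                                     // e.g. YUV_420_888 preview
    }
}
```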

frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp

Camera3OutputStream::Camera3OutputStream(int id,
        sp<Surface> consumer,
        uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera3_stream_rotation_t rotation,
        nsecs_t timestampOffset, const String8& physicalCameraId,
        int setId) :
        Camera3IOStreamBase(id, CAMERA3_STREAM_OUTPUT, width, height,
                            /*maxSize*/0, format, dataSpace, rotation,
                            physicalCameraId, setId),
        // Camera3OutputStream holds mConsumer, a Surface which in turn holds the IGraphicBufferProducer
        mConsumer(consumer),
        mTransform(0),
        mTraceFirstBuffer(true),
        mUseBufferManager(false),
        mTimestampOffset(timestampOffset),
        mConsumerUsage(0),
        mDropBuffers(false),
        mDequeueBufferLatency(kDequeueLatencyBinSize) {

    if (mConsumer == NULL) {
        ALOGE("%s: Consumer is NULL!", __FUNCTION__);
        mState = STATE_ERROR;
    }

    if (setId > CAMERA3_STREAM_SET_ID_INVALID) {
        mBufferReleasedListener = new BufferReleasedListener(this);
    }
}

frameworks\av\services\camera\libcameraservice\api2\CameraDeviceClient.cpp


binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams) {
     ...... some code omitted ...........
    status_t err = mDevice->configureStreams(sessionParams, operatingMode);
     ...... some code omitted ...........
    return res;
}

frameworks\av\services\camera\libcameraservice\device3\Camera3Device.cpp


status_t Camera3Device::configureStreams(const CameraMetadata& sessionParams, int operatingMode) {
    ATRACE_CALL();
    ALOGV("%s: E", __FUNCTION__);

    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    // In case the client doesn't include any session parameter, try a
    // speculative configuration using the values from the last cached
    // default request.
    if (sessionParams.isEmpty() &&
            ((mLastTemplateId > 0) && (mLastTemplateId < CAMERA3_TEMPLATE_COUNT)) &&
            (!mRequestTemplateCache[mLastTemplateId].isEmpty())) {
        ALOGV("%s: Speculative session param configuration with template id: %d", __func__,
                mLastTemplateId);
        return filterParamsAndConfigureLocked(mRequestTemplateCache[mLastTemplateId],
                operatingMode);
    }

    return filterParamsAndConfigureLocked(sessionParams, operatingMode);
}
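The speculative path above says: if the client supplied no session parameters but a default request for the last-used template is cached, configure with the cached values instead. In plain Java, with maps standing in for CameraMetadata:

```java
import java.util.Map;

public class SpeculativeConfigSketch {
    static final int CAMERA3_TEMPLATE_COUNT = 7; // stand-in value for the real constant

    // Mirrors the decision in Camera3Device::configureStreams().
    static Map<Integer, Object> chooseSessionParams(
            Map<Integer, Object> sessionParams, int lastTemplateId,
            Map<Integer, Map<Integer, Object>> templateCache) {
        Map<Integer, Object> cached = templateCache.get(lastTemplateId);
        if (sessionParams.isEmpty()
                && lastTemplateId > 0 && lastTemplateId < CAMERA3_TEMPLATE_COUNT
                && cached != null && !cached.isEmpty()) {
            return cached; // speculative: reuse the cached default request's values
        }
        return sessionParams;
    }
}
```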

status_t Camera3Device::filterParamsAndConfigureLocked(const CameraMetadata& sessionParams,
        int operatingMode) {
    //Filter out any incoming session parameters
    const CameraMetadata params(sessionParams);
    camera_metadata_entry_t availableSessionKeys = mDeviceInfo.find(
            ANDROID_REQUEST_AVAILABLE_SESSION_KEYS);
    CameraMetadata filteredParams(availableSessionKeys.count);
    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
            filteredParams.getAndLock());
    set_camera_metadata_vendor_id(meta, mVendorTagId);
    filteredParams.unlock(meta);
    if (availableSessionKeys.count > 0) {
        for (size_t i = 0; i < availableSessionKeys.count; i++) {
            camera_metadata_ro_entry entry = params.find(
                    availableSessionKeys.data.i32[i]);
            if (entry.count > 0) {
                filteredParams.update(entry);
            }
        }
    }

    return configureStreamsLocked(operatingMode, filteredParams);
}
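Filtering the incoming session parameters is a straightforward key-whitelist pass against ANDROID_REQUEST_AVAILABLE_SESSION_KEYS. A plain-Java model using maps in place of CameraMetadata:

```java
import java.util.HashMap;
import java.util.Map;

public class SessionParamFilterSketch {
    // Keep only entries whose key is listed in availableSessionKeys,
    // mirroring Camera3Device::filterParamsAndConfigureLocked().
    static Map<Integer, Object> filterParams(Map<Integer, Object> sessionParams,
                                             int[] availableSessionKeys) {
        Map<Integer, Object> filtered = new HashMap<>();
        for (int key : availableSessionKeys) {
            if (sessionParams.containsKey(key)) {
                filtered.put(key, sessionParams.get(key));
            }
        }
        return filtered;
    }
}
```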

status_t Camera3Device::configureStreamsLocked(int operatingMode,
        const CameraMetadata& sessionParams, bool notifyRequestThread) {
             ...... some code omitted ...........
			// mInterface was saved in Camera3Device::initialize() and holds the HalInterface
			// instance, so mInterface->configureStreams() ends up in
			// Camera3Device::HalInterface::configureStreams().

			// HalInterface::configureStreams() converts the stream config into the HIDL
			// representation and then calls a different entry point depending on the HAL version.
			// Taking HAL 3.3 as an example, it calls mHidlSession_3_3->configureStreams_3_3().
			// Note: mHidlSession_3_3 is assigned in the HalInterface constructor by casting the
			// ICameraDeviceSession that Camera3Device::initialize() obtained via manager->openSession().

			// The mHidlSession_3_3->configureStreams_3_3() call goes through
			// BpHwCameraDeviceSession::configureStreams_3_3()
			// (out/soong/.intermediates/hardware/interfaces/camera/device/3.3/android.hardware.camera.device@3.3_genc++/gen/android/hardware/camera/device/3.3/CameraDeviceSessionAll.cpp)
			// and finally lands in CameraDeviceSession::configureStreams_3_3() inside the
			// CameraProvider. We won't trace the HIDL plumbing in detail here; it is enough to
			// know that the call crosses over to the provider via HIDL.

			// In short, Camera3Device::configureStreamsLocked() reaches the CameraProvider
			// through mInterface->configureStreams(sessionBuffer, &config, bufferSizes),
			// and stream configuration then continues inside the CameraProvider.
            res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);
            ...... some code omitted ...........
        }
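The HAL-version dispatch described in the comments above can be pictured as: call the highest-version entry point the session object supports. A plain-Java sketch with hypothetical stand-in interfaces (the real types are the generated ICameraDeviceSession HIDL interfaces):

```java
public class HalDispatchSketch {
    interface Session { }
    // Stand-ins for the versioned HIDL session interfaces.
    interface Session33 extends Session { String configureStreams_3_3(); }
    interface Session32 extends Session { String configureStreams(); }

    // Mirrors HalInterface's idea: use the newest entry point the session supports.
    static String configure(Session s) {
        if (s instanceof Session33) return ((Session33) s).configureStreams_3_3();
        if (s instanceof Session32) return ((Session32) s).configureStreams();
        throw new IllegalArgumentException("unsupported HAL session version");
    }
}
```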

3. Sequence diagram

(Sequence diagram image not reproduced here.)
