Camera2 API Preview Implementation

Table of Contents

Key Concepts

Basic Framework

Code Implementation (Kotlin)

My Demo (hehe)


Key Concepts

        Implementing a camera preview is the first step toward building more complex features with Camera2, and the key question at this stage is how to get data from the camera. A camera preview involves four key components: CameraDevice, CaptureSession, CameraManager, and Surface.

        In short, a CameraDevice represents a camera, CameraManager is a system service that lets us interact with CameraDevice objects, a Surface is the object the camera service outputs into, and a CaptureSession establishes the output path from a CameraDevice to one or more Surfaces.

Ⅰ CameraManager

        CameraManager is a system service; just obtain it before use.

//private lateinit var cameraManager: CameraManager
cameraManager = getSystemService(Context.CAMERA_SERVICE) as CameraManager

Ⅱ CameraDevice

        Each camera is a CameraDevice, and a single CameraDevice can output multiple streams at once, i.e. it can send camera images to multiple Surfaces (which lets us preview, take photos, and record at the same time). After connecting to a camera via the openCamera function, we get a CameraDevice object.

private lateinit var cameraDevice: CameraDevice

private val temp = object : CameraDevice.StateCallback() {
    override fun onOpened(camera: CameraDevice) {
        cameraDevice = camera
    }
    // ...other required overrides (onDisconnected, onError) omitted here
}
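        To actually connect, pass the callback above to openCamera. A minimal sketch of my own (it assumes the CAMERA permission is already granted and that cameraId holds a valid ID):

// openCamera throws a SecurityException if the CAMERA permission has not been granted
@SuppressLint("MissingPermission")
fun connect() {
    // null Handler = deliver the callback on the calling thread's Looper
    cameraManager.openCamera(cameraId, temp, null)
}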

Ⅲ Surface

        A Surface is the object the CameraDevice outputs to. To go further and show a preview in the app, just use a TextureView (which supports animations and transforms) and convert its SurfaceTexture into a Surface.
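        For example (a sketch of my own; it assumes textureView is laid out and its SurfaceTexture is already available):

// Wrap the TextureView's SurfaceTexture in a Surface the camera can output to
val surfaceTexture = textureView.surfaceTexture!!  // non-null once onSurfaceTextureAvailable has fired
val previewSurface = Surface(surfaceTexture)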

Ⅳ CameraCaptureSession

        Creating a basic CameraCaptureSession requires at least one output buffer (a Surface) and a callback (CameraCaptureSession.StateCallback).

cameraDevice.createCaptureSession(list of Surfaces, state callback, handler)
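        Concretely, the call looks roughly like this (a sketch, reusing previewSurface from the previous step):

cameraDevice.createCaptureSession(
    listOf(previewSurface),  // the output Surfaces
    object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            // The session is ready; start sending capture requests here
        }
        override fun onConfigureFailed(session: CameraCaptureSession) {
        }
    },
    null  // Handler; null = calling thread's Looper
)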

Basic Framework

Some of my own rough understanding:

        The main logical thread is chaining the concepts above together: the CameraDevice sends data into different Surfaces through a CaptureSession, and the TextureView's SurfaceTexture receives it.

        Implementation-wise: since the SurfaceTexture sits at the receiving end, we must make sure it is available before connecting the camera. So we connect the camera inside the SurfaceTextureListener, and once connected, create a CameraCaptureSession that outputs images to the SurfaceTexture.

        In addition, the camera permission must be requested at runtime and checked before use. Otherwise, you can always install the app first, wait for it to crash, and then go enable the permission manually in Settings.
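        The usual pattern is roughly the following sketch (CAMERA_REQUEST_RESULT is just an arbitrary request code, defined in the full code below):

// Check the permission at runtime and request it if it hasn't been granted yet
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.CAMERA), CAMERA_REQUEST_RESULT)
}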


Code Implementation (Kotlin)

Preparation

1. Add the required permissions to the AndroidManifest file

<uses-feature android:name="android.hardware.camera.any" />
<uses-permission android:name="android.permission.CAMERA" />

2. Create a TextureView in the layout file and assign it an id (the code below uses the id textureView), for example:
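<TextureView
    android:id="@+id/textureView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

(This is just a minimal layout sketch; the size attributes are one reasonable choice.)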

3. Configure Gradle. I don't really understand this part myself; you can refer to my setup (see the end of this post).

MainActivity

**Add the imports**

import android.Manifest
import android.annotation.SuppressLint
import android.content.Context
import android.content.Intent
import android.content.pm.PackageManager
import android.graphics.SurfaceTexture
import android.hardware.camera2.*
import android.net.Uri
import android.os.*
import android.provider.Settings
import android.view.Surface
import android.view.TextureView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat


const val CAMERA_REQUEST_RESULT = 1

class MainActivity : AppCompatActivity() {

    private lateinit var textureView: TextureView
    private lateinit var cameraId: String
    private lateinit var cameraManager: CameraManager
    private lateinit var cameraDevice: CameraDevice
    private lateinit var captureRequestBuilder: CaptureRequest.Builder // used to build the repeating preview request
    private lateinit var cameraCaptureSession: CameraCaptureSession

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        supportActionBar?.hide()

        // Initialize variables
        textureView = findViewById(R.id.textureView)
        cameraManager = getSystemService(Context.CAMERA_SERVICE) as CameraManager
        textureView.surfaceTextureListener = surfaceTextureListener

        // Check the camera permission
        if (!wasCameraPermissionGiven()) {
            // Only CAMERA is declared in the manifest, so only CAMERA is requested here
            requestPermissions(arrayOf(Manifest.permission.CAMERA), CAMERA_REQUEST_RESULT)
        }
    }


    // Set up camera-related parameters
    private fun setupCamera() {
        val cameraIds: Array<String> = cameraManager.cameraIdList
        // Find the ID of the back-facing camera
        for (id in cameraIds) {
            val cameraCharacteristics = cameraManager.getCameraCharacteristics(id)

            if (cameraCharacteristics.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK) {
                cameraId = id
                break
            }
        }
    }


    // Check whether the camera permission has been granted
    private fun wasCameraPermissionGiven(): Boolean {
        return ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
                PackageManager.PERMISSION_GRANTED
    }


    // Handle the result of the camera permission request
    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            // The surface may already be available by now, in which case the listener won't fire again
            if (textureView.isAvailable) {
                surfaceTextureListener.onSurfaceTextureAvailable(textureView.surfaceTexture!!, textureView.width, textureView.height)
            }
        } else {
            Toast.makeText(
                this,
                "Camera permission is needed to run this application",
                Toast.LENGTH_LONG
            )
                .show()
            // If the rationale can no longer be shown (the user chose "don't ask again"),
            // send them to the app settings page to grant the permission manually
            if (!ActivityCompat.shouldShowRequestPermissionRationale(
                    this,
                    Manifest.permission.CAMERA
                )) {
                val intent = Intent()
                intent.action = Settings.ACTION_APPLICATION_DETAILS_SETTINGS
                intent.data = Uri.fromParts("package", this.packageName, null)
                startActivity(intent)
            }
        }
    }


    // Connect to the camera
    @SuppressLint("MissingPermission")
    private fun connectCamera() {
        cameraManager.openCamera(cameraId, cameraStateCallback, null)
    }


    // When the SurfaceTexture becomes available, set up and open the camera in onSurfaceTextureAvailable
    private val surfaceTextureListener = object : TextureView.SurfaceTextureListener {
        @SuppressLint("MissingPermission")
        override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
            if (wasCameraPermissionGiven()) {
                setupCamera()
                connectCamera()
            }
        }

        override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {
        }

        override fun onSurfaceTextureDestroyed(texture: SurfaceTexture): Boolean {
            return true
        }

        override fun onSurfaceTextureUpdated(texture: SurfaceTexture) {
        }
    }

    // Once the camera device is opened, build the CaptureSession
    private val cameraStateCallback = object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            cameraDevice = camera
            // textureView.surfaceTexture defaults to the same size as the textureView
            val surfaceTexture: SurfaceTexture? = textureView.surfaceTexture
            // Reuse a single Surface for both the request target and the session output;
            // wrapping the same SurfaceTexture in two separate Surface objects can break the session
            val previewSurface = Surface(surfaceTexture)
            // Create a PREVIEW request; it will be sent repeatedly in the CaptureSession's callback
            captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            // Output target
            captureRequestBuilder.addTarget(previewSurface)

            cameraDevice.createCaptureSession(listOf(previewSurface), captureStateCallback, null)
        }

        override fun onDisconnected(cameraDevice: CameraDevice) {
            cameraDevice.close()
        }

        override fun onError(cameraDevice: CameraDevice, error: Int) {
            cameraDevice.close()
        }
    }




    private val captureStateCallback = object : CameraCaptureSession.StateCallback() {
        override fun onConfigureFailed(session: CameraCaptureSession) {
            // Session configuration failed; nothing to recover in this simple demo
        }
        // Once configured, start repeatedly sending the preview request
        override fun onConfigured(session: CameraCaptureSession) {
            cameraCaptureSession = session
            cameraCaptureSession.setRepeatingRequest(captureRequestBuilder.build(), null, null)
        }
    }
}
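        One thing this demo never does is release the camera. If you want to clean up properly, something like the following inside MainActivity should work (my own sketch, not part of the original demo):

// Sketch: release camera resources when the Activity goes to the background
override fun onPause() {
    if (::cameraCaptureSession.isInitialized) cameraCaptureSession.close()
    if (::cameraDevice.isInitialized) cameraDevice.close()
    super.onPause()
}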

My Demo (hehe)

GitHub - 2470611712/Simple-Camera

