3D Model Transforms and Correcting Android Camera Image Orientation

Introduction

3D model transforms:

What is the MVP matrix:

The MVP matrix is the Model-View-Projection matrix. Applied to a vertex v in model space, the combined transform is v_clip = P · V · M · v (a small assembly sketch follows the list below).

  • Model matrix: transforms object (model) coordinates into world coordinates, mainly via translation, rotation, and scaling.
  • View matrix: transforms world coordinates into eye/camera coordinates.
  • Projection matrix: transforms eye coordinates into clip coordinates.
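As a minimal sketch (not from the original article) of how these three matrices are typically assembled on Android with android.opengl.Matrix; all parameter values here are arbitrary:

import android.opengl.Matrix;

float[] model = new float[16];
float[] view = new float[16];
float[] proj = new float[16];
float[] mvp = new float[16];

// Model matrix: translation, rotation, and scaling applied to the object.
Matrix.setIdentityM(model, 0);
Matrix.translateM(model, 0, 0.5f, 0f, 0f);
Matrix.rotateM(model, 0, 45f, 0f, 0f, 1f);   // 45 degrees about the Z axis
Matrix.scaleM(model, 0, 2f, 2f, 1f);

// View matrix: eye position, look-at point, and up vector.
Matrix.setLookAtM(view, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f);

// Projection matrix: field of view, aspect ratio, near and far planes.
Matrix.perspectiveM(proj, 0, 60f, 16f / 9f, 0.1f, 100f);

// MVP = P * V * M, matching v_clip = P · V · M · v.
float[] pv = new float[16];
Matrix.multiplyMM(pv, 0, proj, 0, view, 0);
Matrix.multiplyMM(mvp, 0, pv, 0, model, 0);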
     

Matrix transforms:

The MVP matrix is a 4 x 4 matrix, stored in column-major order.

  • Identity matrix:

    $I = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Translation matrix:

    $T = \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Scaling matrix:

    $S = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Reflection about the X axis:

    $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Reflection about the Y axis:

    $\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Reflection about the Z axis:

    $\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Reflection about the xOy plane:

    $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

In short: reflecting about an axis keeps that axis's value unchanged and negates the other two; reflecting about a plane keeps the values of the two axes spanning the plane unchanged and negates the remaining one.

  • Rotation about the X axis:

    $R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Rotation about the Y axis:

    $R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

  • Rotation about the Z axis:

    $R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

The coordinate system in OpenGL ES:

OpenGL ES renders in normalized device coordinates: the origin sits at the center of the viewport, X points right, Y points up, and both axes range over [-1, 1].

Android camera image orientation:

  1. Screen coordinates: in Android, the top-left corner of the screen is the origin (0,0) of the coordinate system. X extends to the right of the origin, and positive Y extends downward.
  2. Natural orientation: every device has a natural orientation, and it differs between phones and tablets. A phone's natural orientation is portrait; a tablet's is landscape.
  3. Image sensor orientation: camera image data comes from the camera hardware's image sensor (Image Sensor). Once the sensor is mounted in the phone, it has a fixed default capture orientation that never changes.
  4. Camera preview orientation: the orientation in which the sensor's captured image is displayed on screen. By default it matches the image sensor orientation. With the legacy Camera API it can be changed via setDisplayOrientation().
  5. Captured image orientation: the clockwise rotation that must be applied to the image the camera captures; this is independent of the preview orientation.

Getting the device rotation:

To find out whether the phone is in portrait or landscape, use:

activity.getWindowManager().getDefaultDisplay().getRotation();

Returns the rotation of the screen from its "natural" orientation. The returned value may be Surface#ROTATION_0 (no rotation), Surface#ROTATION_90, Surface#ROTATION_180, or Surface#ROTATION_270. For example, if a device has a naturally tall screen, and the user has turned it on its side to go into a landscape orientation, the value returned here may be either Surface#ROTATION_90 or Surface#ROTATION_270 depending on the direction it was turned. The angle is the rotation of the drawn graphics on the screen, which is the opposite direction of the physical rotation of the device. For example, if the device is rotated 90 degrees counter-clockwise, to compensate rendering will be rotated by 90 degrees clockwise and thus the returned value here will be Surface#ROTATION_90.

This returns the angle the screen has been rotated away from its natural orientation.

Its possible values are:

  • ROTATION_0
  • ROTATION_90
  • ROTATION_180
  • ROTATION_270

On a Huawei nova2s, if the app's screenOrientation is

  • portrait, this value is 0 degrees;
  • landscape, this value is 90 degrees;
  • sensor, and the phone is held landscape with its top pointing right, this value is 270 degrees.

From this we can tell that the phone's natural orientation is portrait.

Legacy Camera

Captured image orientation

If no orientation is set, a photo taken holding the phone in portrait comes out rotated onto its side, while a photo taken in landscape comes out upright.

The legacy Camera API has the following field:

/**
 * <p>The orientation of the camera image. The value is the angle that the
 * camera image needs to be rotated clockwise so it shows correctly on
 * the display in its natural orientation. It should be 0, 90, 180, or 270.</p>
 *
 * <p>For example, suppose a device has a naturally tall screen. The
 * back-facing camera sensor is mounted in landscape. You are looking at
 * the screen. If the top side of the camera sensor is aligned with the
 * right edge of the screen in natural orientation, the value should be
 * 90. If the top side of a front-facing camera sensor is aligned with
 * the right of the screen, the value should be 270.</p>
 *
 * @see #setDisplayOrientation(int)
 * @see Parameters#setRotation(int)
 * @see Parameters#setPreviewSize(int, int)
 * @see Parameters#setPictureSize(int, int)
 * @see Parameters#setJpegThumbnailSize(int, int)
 */
public int orientation;

This field identifies the rotation of captured images: the angle by which a captured picture must be rotated clockwise to reach the natural orientation. It can be 0, 90, 180, or 270.

For example: suppose you hold the phone upright and the back camera's sensor is mounted landscape inside the phone. Looking at the screen, if the top side of the sensor is aligned with the right edge of the screen in natural orientation (phone portrait, sensor landscape), orientation is 90. If the top side of a front-facing sensor is aligned with the right edge of the screen, the value is 270.

Put simply: images captured by the back camera need a 90-degree clockwise rotation to reach the screen's natural orientation, and images captured by the front camera need a 270-degree clockwise rotation. Note that this rotation does not deal with mirroring.

Logging cameraInfo.orientation in a portrait app gives 270 for the front camera and 90 for the back camera.

Generally, then, a phone's sensor is mounted with its top edge along the right edge of the screen in natural orientation.

Setting the camera preview orientation

So, for a portrait app, you need to call the following to rotate the preview into the correct orientation:

camera.setDisplayOrientation(int degrees);

 

/**
 * Set the clockwise rotation of preview display in degrees. This affects
 * the preview frames and the picture displayed after snapshot. This method
 * is useful for portrait mode applications. Note that preview display of
 * front-facing cameras is flipped horizontally before the rotation, that
 * is, the image is reflected along the central vertical axis of the camera
 * sensor. So the users can see themselves as looking into a mirror.
 *
 * <p><b>Note: </b>Before API level 24, the default value for orientation is 0. Starting in
 * API level 24, the default orientation will be such that applications in forced-landscape mode
 * will have correct preview orientation, which may be either a default of 0 or
 * 180. Applications that operate in portrait mode or allow for changing orientation must still
 * call this method after each orientation change to ensure correct preview display in all
 * cases.</p>
 */

This is the clockwise angle by which the preview image is rotated. It affects both the preview frames and the picture displayed after a snapshot. Note that for front-facing cameras the preview is flipped horizontally before this rotation, i.e. the image is reflected along the central vertical axis of the camera sensor, so users see themselves as if in a mirror.

 

An Android camera carries two streams:

  • preview stream: the preview data
  • record stream: used for taking pictures / recording video

We usually only work with the preview stream (attached as sketched below).
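As a minimal, hedged sketch (not from the original article; mSurfaceTexture and cameraId are assumed to exist in the surrounding class), attaching the preview stream with the legacy API might look like this:

try {
    Camera camera = Camera.open(cameraId);
    // Fix the preview orientation first (see the official snippet below).
    setCameraDisplayOrientation(activity, cameraId, camera);
    // Route the preview stream into a SurfaceTexture (a SurfaceHolder via
    // setPreviewDisplay() works as well).
    camera.setPreviewTexture(mSurfaceTexture);
    camera.startPreview();
} catch (IOException e) {
    e.printStackTrace();
}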
 

 

When the Activity's orientation is not locked, use the officially recommended code to set the preview orientation:

public static void setCameraDisplayOrientation(Activity activity, int cameraId, android.hardware.Camera camera) {
    android.hardware.Camera.CameraInfo info =
            new android.hardware.Camera.CameraInfo();
    android.hardware.Camera.getCameraInfo(cameraId, info);
    int rotation = activity.getWindowManager().getDefaultDisplay()
            .getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
    }

    int result;
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        result = (info.orientation + degrees) % 360;
        result = (360 - result) % 360;  // compensate the mirror
    } else {  // back-facing
        result = (info.orientation - degrees + 360) % 360;
    }
    camera.setDisplayOrientation(result);
}

 

With this code, on a Huawei nova2s, result is 90 for a portrait app with both the front and the back camera.
 

setPreviewSize

From the documentation:

Sets the dimensions for preview pictures. If the preview has already started, applications should stop the preview first before changing preview size. The sides of width and height are based on camera orientation. That is, the preview size is the size before it is rotated by display orientation. So applications need to consider the display orientation while setting preview size. For example, suppose the camera supports both 480x320 and 320x480 preview sizes. The application wants a 3:2 preview ratio. If the display orientation is set to 0 or 180, preview size should be set to 480x320. If the display orientation is set to 90 or 270, preview size should be set to 320x480. The display orientation should also be considered while setting picture size and thumbnail size.

The point to note is that the size must take the camera's orientation into account, and that orientation is the initial one, i.e. before the display-orientation rotation is applied.

If the initial orientation is 0 or 180, the size should be set with width greater than height.

If the initial orientation is 90 or 270, the size should be set with width less than height.

Looking at the source, this orientation does not appear to be the same thing as CameraInfo.orientation; it is 0 in its initial state, so when we set the preview size we need width greater than height. A hedged sketch of picking such a size follows.
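A sketch of one way to pick a supported preview size (the helper and its policy are illustrative, not from the article):

// Choose a supported preview size close to a target. Legacy-API sizes are
// reported sensor-relative (width >= height), so when the preview is rotated
// 90/270 degrees we swap the target dimensions before comparing.
private static Camera.Size choosePreviewSize(Camera.Parameters params,
                                             int targetW, int targetH,
                                             int displayOrientation) {
    if (displayOrientation == 90 || displayOrientation == 270) {
        int tmp = targetW;
        targetW = targetH;
        targetH = tmp;
    }
    Camera.Size best = null;
    int bestDiff = Integer.MAX_VALUE;
    for (Camera.Size size : params.getSupportedPreviewSizes()) {
        int diff = Math.abs(size.width - targetW) + Math.abs(size.height - targetH);
        if (diff < bestDiff) {
            bestDiff = diff;
            best = size;
        }
    }
    return best;
}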
 

Picture (capture) orientation

So what is cameraInfo.orientation actually for? It is used to set the rotation of captured pictures, via mParameter.setRotation:

/**
 * Sets the clockwise rotation angle in degrees relative to the
 * orientation of the camera. This affects the pictures returned from
 * JPEG {@link PictureCallback}. The camera driver may set orientation
 * in the EXIF header without rotating the picture. Or the driver may
 * rotate the picture and the EXIF thumbnail. If the Jpeg picture is
 * rotated, the orientation in the EXIF header will be missing or 1 (row
 * #0 is top and column #0 is left side).
 *
 * <p>
 * If applications want to rotate the picture to match the orientation
 * of what users see, apps should use
 * {@link android.view.OrientationEventListener} and
 * {@link android.hardware.Camera.CameraInfo}. The value from
 * OrientationEventListener is relative to the natural orientation of
 * the device. CameraInfo.orientation is the angle between camera
 * orientation and natural device orientation. The sum of the two is the
 * rotation angle for back-facing camera. The difference of the two is
 * the rotation angle for front-facing camera. Note that the JPEG
 * pictures of front-facing cameras are not mirrored as in preview
 * display.
 *
 * <p>
 * For example, suppose the natural orientation of the device is
 * portrait. The device is rotated 270 degrees clockwise, so the device
 * orientation is 270. Suppose a back-facing camera sensor is mounted in
 * landscape and the top side of the camera sensor is aligned with the
 * right edge of the display in natural orientation. So the camera
 * orientation is 90. The rotation should be set to 0 (270 + 90).
 *
 * <p>The reference code is as follows.
 */
public void setRotation(int rotation) {
    if (rotation == 0 || rotation == 90 || rotation == 180
            || rotation == 270) {
        set(KEY_ROTATION, Integer.toString(rotation));
    } else {
        throw new IllegalArgumentException(
                "Invalid rotation=" + rotation);
    }
}

It applies to the photos returned through PictureCallback: Camera.Parameters.setRotation determines the orientation of the final picture the camera produces, as sketched below.
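A small hedged sketch (the callback body is illustrative) of where that rotation shows up:

// The rotation set via Parameters.setRotation() affects the JPEG delivered
// here: depending on the driver, the bytes are pre-rotated or the rotation
// is recorded in the EXIF orientation tag instead.
camera.takePicture(null, null, new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // data holds the JPEG; persist it or decode it with BitmapFactory.
    }
});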
 

Official reference code:

public void onOrientationChanged(int orientation) {
    if (orientation == ORIENTATION_UNKNOWN) {
        return;
    }
    android.hardware.Camera.CameraInfo info = new android.hardware.Camera.CameraInfo();
    android.hardware.Camera.getCameraInfo(cameraId, info);

    orientation = (orientation + 45) / 90 * 90;
    int rotation = 0;

    if (info.facing == CameraInfo.CAMERA_FACING_FRONT) {
        rotation = (info.orientation - orientation + 360) % 360;
    } else {  // back-facing camera
        rotation = (info.orientation + orientation) % 360;
    }
    mParameters.setRotation(rotation);
}

Computing the rotation must take both the current device orientation and the camera's orientation into account.

OrientationEventListener and CameraInfo.orientation are used together. When the device's orientation changes, OrientationEventListener is notified, and in its onOrientationChanged callback we update the camera's capture rotation (in practice, the preview orientation is updated in the same callback).

onOrientationChanged reports values from 0 to 359, while setRotation only accepts 0, 90, 180, or 270, so the reported orientation has to be rounded to the nearest multiple of 90.

Alternatively, you can read the rotation via Display.getRotation() inside this callback; either way, the point is to have one callback in which both the capture rotation and the preview orientation are updated.

For the front camera, the angle to rotate is the difference between the camera's orientation and the device orientation; for the back camera, it is their sum. A hedged wiring sketch follows.
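A minimal sketch of wiring the listener up (the subclass, field, and helper names are illustrative, not from the article):

// Enable an OrientationEventListener and forward each change to the
// official reference code above.
private OrientationEventListener mOrientationListener;

private void startOrientationListener(Context context) {
    mOrientationListener = new OrientationEventListener(context) {
        @Override
        public void onOrientationChanged(int orientation) {
            if (orientation == ORIENTATION_UNKNOWN) {
                return;
            }
            updateCameraRotation(orientation); // hypothetical helper calling the code above
        }
    };
    if (mOrientationListener.canDetectOrientation()) {
        mOrientationListener.enable();
    }
}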

Camera2

Camera2 has the following key:

CameraCharacteristics.SENSOR_ORIENTATION

  1. Clockwise angle through which the output image needs to be rotated to be upright on the device screen in its native orientation.
  2. Also defines the direction of rolling shutter readout, which is from top to bottom in the sensor's coordinate system.
  3. Units: degrees of clockwise rotation; always a multiple of 90.
  4. Range of valid values: 0, 90, 180, 270.
  5. This key is available on all devices.

In other words: this is the clockwise angle through which the output image must be rotated to appear upright on the screen in the device's natural orientation. It also defines the rolling-shutter readout direction, i.e. top to bottom in the sensor's coordinate system.

In effect this key gives you the sensor orientation, and its value is fixed by the device manufacturer. A hedged sketch of reading it follows.
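A minimal sketch (context and logging are assumed; error handling trimmed) of reading the key for every camera:

CameraManager manager =
        (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    for (String cameraId : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
        Integer sensorOrientation = chars.get(CameraCharacteristics.SENSOR_ORIENTATION);
        Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
        Log.d("SensorOrientation", "id=" + cameraId + " facing=" + facing
                + " orientation=" + sensorOrientation);
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}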

On a Huawei nova2s:

For a portrait app:

  • front camera: 270
  • back camera: 90

For a landscape app:

  • front camera: 270
  • back camera: 90

As you can see, this key behaves exactly like CameraInfo.orientation in the legacy Camera API.

So with Camera2 things are less troublesome than with the legacy Camera: you don't need to worry about image rotation yourself, Camera2 takes care of it for you.

When setting the preview size, just match the size you want against the supported sizes and pick one. Keep in mind that the size you set must have width greater than height, and that the supported sizes are likewise reported with width greater than height. A hedged query sketch follows.
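A minimal sketch (assumes chars is the CameraCharacteristics of the camera in use; a real chooser should compare aspect ratios instead of taking the first entry):

// Query the output sizes supported for SurfaceTexture targets; all of them
// are sensor-relative, i.e. width >= height.
StreamConfigurationMap map =
        chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] sizes = map.getOutputSizes(SurfaceTexture.class);
Size previewSize = sizes[0]; // placeholder choice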

 

When the camera's Surface comes from a SurfaceTexture:

Let's first try simply loading an ordinary image as a texture.

With this shader and these vertex coordinates:

    private String mVertexShader =
            "precision mediump float;" +
            "attribute vec3 aPosition;" +
            "attribute vec2 aTexCoord;" +
            "uniform mat4 rotation;" +
            "varying vec2 vTextureCoord;" +
            "void main(){" +
            "  vec4 pos = vec4(aPosition, 1.0);" +
            "  gl_Position = pos;" +
            "  vTextureCoord = (aPosition.xy) * 0.5 + 0.5;" +
            "}";

    float[] vertexs = new float[]{
            -1f, -1f, 0,
             1f, -1f, 0,
             1f,  1f, 0,
            -1f,  1f, 0
    };

The image we get comes out upside down.

Why this inversion? Because the Y axes of the texture coordinate system and the screen coordinate system point in opposite directions, an image sampled with coordinates derived directly from the vertex positions is rendered vertically flipped. A common fix is sketched below.
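A minimal sketch (my variant of the shader above, not from the article): flip the t coordinate when deriving it from the vertex position, so the sampled image comes out upright:

    private String mFlippedVertexShader =
            "precision mediump float;" +
            "attribute vec3 aPosition;" +
            "varying vec2 vTextureCoord;" +
            "void main(){" +
            "  gl_Position = vec4(aPosition, 1.0);" +
            // Flip t: texture row 0 is the top, while screen Y grows upward.
            "  vTextureCoord = vec2(aPosition.x * 0.5 + 0.5, 1.0 - (aPosition.y * 0.5 + 0.5));" +
            "}";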
 

Take Camera2 as an example:

The vertex shader used is:

static ConstString sVshExternalOES = SHADER_STRING(
attribute vec2 vPosition;
varying vec2 texCoord;
uniform mat4 transform;
void main()
{
    gl_Position = vec4(vPosition, 0.0, 1.0);
    vec2 coord = (vPosition / 2.0 ) + 0.5;
    texCoord = coord;
});

The vertex coordinates used are:

float GlobalConfig::sVertexDataCommon[8] =
        {
                -1.0f, -1.0f,
                1.0f, -1.0f,
                1.0f, 1.0f,
                -1.0f, 1.0f
        };

 

Suppose we shoot with the front camera while holding the phone upright (portrait). Rendering the raw texture with the shader above yields an incorrectly oriented image.

From the discussion above we know the front camera captures its image in the sensor's coordinate system; working through the texture-coordinate mapping tells us how the raw image is laid out, and that layout is exactly the orientation of the picture the camera captured.

We know that with a SurfaceTexture we can obtain the texture transform matrix via mSurfaceTexture.getTransformMatrix(mStMatrix). Now let's apply that matrix in the vertex shader:

static ConstString sVshExternalOES = SHADER_STRING(
attribute vec2 vPosition;
varying vec2 texCoord;
uniform mat4 transform;
void main()
{
    vec4 po = vec4(vPosition, 0.0, 1.0) * transform;
    gl_Position = vec4(po.x, po.y, 0, 1);
    vec2 coord = (vPosition / 2.0) + 0.5;
    texCoord = coord;
});

The image now comes out correctly oriented.

So, as seen above, multiplying by the texture matrix obtained from the SurfaceTexture yields the image as it should be presented. Where, then, does this texture matrix come from?

Let's first look at the documentation for getTransformMatrix:

  Retrieve the 4x4 texture coordinate transform matrix associated with the texture image set by
  the most recent call to updateTexImage.

  This transform matrix maps 2D homogeneous texture coordinates of the form (s, t, 0, 1) with s
  and t in the inclusive range [0, 1] to the texture coordinate that should be used to sample
  that location from the texture. Sampling the texture outside of the range of this transform
  is undefined.

  The matrix is stored in column-major order so that it may be passed directly to OpenGL ES via
  the glLoadMatrixf or glUniformMatrix4fv functions.
 

How is this matrix produced? As the description says, it is set when updateTexImage is called. Let's look at the source:
 

//SurfaceTexture.cpp
static void SurfaceTexture_getTransformMatrix(JNIEnv* env, jobject thiz,
        jfloatArray jmtx)
{
    sp<GLConsumer> surfaceTexture(SurfaceTexture_getSurfaceTexture(env, thiz));
    float* mtx = env->GetFloatArrayElements(jmtx, NULL);
    surfaceTexture->getTransformMatrix(mtx);
    env->ReleaseFloatArrayElements(jmtx, mtx, 0);
}

As you can see, it delegates to GLConsumer:

void GLConsumer::getTransformMatrix(float mtx[16]) {
    Mutex::Autolock lock(mMutex);
    memcpy(mtx, mCurrentTransformMatrix, sizeof(mCurrentTransformMatrix));
}

So what we get back is a copy of mCurrentTransformMatrix. Where is mCurrentTransformMatrix computed?

In the updateTexImage function:

status_t GLConsumer::updateTexImage() {
    ATRACE_CALL();
    GLC_LOGV("updateTexImage");
    Mutex::Autolock lock(mMutex);
    if (mAbandoned) {
        GLC_LOGE("updateTexImage: GLConsumer is abandoned!");
        return NO_INIT;
    }

    // Make sure the EGL state is the same as in previous calls.
    status_t err = checkAndUpdateEglStateLocked();
    if (err != NO_ERROR) {
        return err;
    }

    BufferItem item;

    // Acquire the next buffer (stored in item).
    // In asynchronous mode the list is guaranteed to be one buffer
    // deep, while in synchronous mode we use the oldest buffer.
    err = acquireBufferLocked(&item, 0);
    if (err != NO_ERROR) {
        if (err == BufferQueue::NO_BUFFER_AVAILABLE) {
            // We always bind the texture even if we don't update its contents.
            GLC_LOGV("updateTexImage: no buffers were available");
            glBindTexture(mTexTarget, mTexName);
            err = NO_ERROR;
        } else {
            GLC_LOGE("updateTexImage: acquire failed: %s (%d)",
                strerror(-err), err);
        }
        return err;
    }

    // Update state and release the previous buffer.
    err = updateAndReleaseLocked(item);
    if (err != NO_ERROR) {
        // We always bind the texture.
        glBindTexture(mTexTarget, mTexName);
        return err;
    }

    // Bind the new buffer to the GL texture, and wait until it's ready.
    return bindTextureImageLocked();
}

In this code, the part we need to focus on is the call to updateAndReleaseLocked(item):

status_t updateAndReleaseLocked(const BufferItem& item,
        PendingRelease* pendingRelease = nullptr);



status_t GLConsumer::updateAndReleaseLocked(const BufferItem& item,
        PendingRelease* pendingRelease)
{
    status_t err = NO_ERROR;
    int slot = item.mSlot;
    ... ...

    // Confirm state.
    err = checkAndUpdateEglStateLocked();
    ... ...

    // Ensure we have a valid EglImageKHR for the slot, creating an EglImage
    // if nessessary, for the gralloc buffer currently in the slot in
    // ConsumerBase.
    // We may have to do this even when item.mGraphicBuffer == NULL (which
    // means the buffer was previously acquired).
    err = mEglSlots[slot].mEglImage->createIfNeeded(mEglDisplay, item.mCrop);
    ... ...

    // Do whatever sync ops we need to do before releasing the old slot.
    if (slot != mCurrentTexture) {
        err = syncForReleaseLocked(mEglDisplay);
        if (err != NO_ERROR) {
            // Release the buffer we just acquired.  It's not safe to
            // release the old buffer, so instead we just drop the new frame.
            // As we are still under lock since acquireBuffer, it is safe to
            // release by slot.
            releaseBufferLocked(slot, mSlots[slot].mGraphicBuffer,
                    mEglDisplay, EGL_NO_SYNC_KHR);
            return err;
        }
    }
    ... ...

    // Hang onto the pointer so that it isn't freed in the call to
    // releaseBufferLocked() if we're in shared buffer mode and both buffers are
    // the same.
    sp<EglImage> nextTextureImage = mEglSlots[slot].mEglImage;

    // release old buffer
    if (mCurrentTexture != BufferQueue::INVALID_BUFFER_SLOT) {
        if (pendingRelease == nullptr) {
            status_t status = releaseBufferLocked(
                    mCurrentTexture, mCurrentTextureImage->graphicBuffer(),
                    mEglDisplay, mEglSlots[mCurrentTexture].mEglFence);
            if (status < NO_ERROR) {
                GLC_LOGE("updateAndRelease: failed to release buffer: %s (%d)",
                        strerror(-status), status);
                err = status;
                // keep going, with error raised [?]
            }
        } else {
            pendingRelease->currentTexture = mCurrentTexture;
            pendingRelease->graphicBuffer =
                    mCurrentTextureImage->graphicBuffer();
            pendingRelease->display = mEglDisplay;
            pendingRelease->fence = mEglSlots[mCurrentTexture].mEglFence;
            pendingRelease->isPending = true;
        }
    }

    // Update the GLConsumer state.
    mCurrentTexture = slot;
    mCurrentTextureImage = nextTextureImage;
    mCurrentCrop = item.mCrop;
    mCurrentTransform = item.mTransform;
    mCurrentScalingMode = item.mScalingMode;
    mCurrentTimestamp = item.mTimestamp;
    mCurrentDataSpace = item.mDataSpace;
    mCurrentFence = item.mFence;
    mCurrentFenceTime = item.mFenceTime;
    mCurrentFrameNumber = item.mFrameNumber;

    computeCurrentTransformMatrixLocked();

    return err;
}

What this code makes clear is that item is the newly acquired buffer. After the old buffer is released, mCurrentTransform is assigned from it, and computeCurrentTransformMatrixLocked is then called to compute the texture matrix:

 

void GLConsumer::computeCurrentTransformMatrixLocked() {
    GLC_LOGV("computeCurrentTransformMatrixLocked");
    sp<GraphicBuffer> buf = (mCurrentTextureImage == nullptr) ?
            nullptr : mCurrentTextureImage->graphicBuffer();
    if (buf == nullptr) {
        GLC_LOGD("computeCurrentTransformMatrixLocked: "
                "mCurrentTextureImage is NULL");
    }
    computeTransformMatrix(mCurrentTransformMatrix, buf,
        isEglImageCroppable(mCurrentCrop) ? Rect::EMPTY_RECT : mCurrentCrop,
        mCurrentTransform, mFilteringEnabled);
}


void GLConsumer::computeTransformMatrix(float outTransform[16],
        const sp<GraphicBuffer>& buf, const Rect& cropRect, uint32_t transform,
        bool filtering) {
    float xform[16];
    // Initialize xform to the identity matrix.
    for (int i = 0; i < 16; i++) {
        xform[i] = mtxIdentity[i];
    }
    // Horizontal flip requested: multiply in the horizontal-flip matrix.
    if (transform & NATIVE_WINDOW_TRANSFORM_FLIP_H) {
        float result[16];
        mtxMul(result, xform, mtxFlipH);
        for (int i = 0; i < 16; i++) {
            xform[i] = result[i];
        }
    }
    // Vertical flip requested: multiply in the vertical-flip matrix.
    if (transform & NATIVE_WINDOW_TRANSFORM_FLIP_V) {
        float result[16];
        mtxMul(result, xform, mtxFlipV);
        for (int i = 0; i < 16; i++) {
            xform[i] = result[i];
        }
    }
    // 90-degree rotation requested: multiply in the 90-degree rotation matrix.
    if (transform & NATIVE_WINDOW_TRANSFORM_ROT_90) {
        float result[16];
        mtxMul(result, xform, mtxRot90);
        for (int i = 0; i < 16; i++) {
            xform[i] = result[i];
        }
    }

    float mtxBeforeFlipV[16];
    if (!cropRect.isEmpty()) {
        float tx = 0.0f, ty = 0.0f, sx = 1.0f, sy = 1.0f;
        float bufferWidth = buf->getWidth();
        float bufferHeight = buf->getHeight();
        float shrinkAmount = 0.0f;
        if (filtering) {
            // In order to prevent bilinear sampling beyond the edge of the
            // crop rectangle we may need to shrink it by 2 texels in each
            // dimension.  Normally this would just need to take 1/2 a texel
            // off each end, but because the chroma channels of YUV420 images
            // are subsampled we may need to shrink the crop region by a whole
            // texel on each side.
            switch (buf->getPixelFormat()) {
                case PIXEL_FORMAT_RGBA_8888:
                case PIXEL_FORMAT_RGBX_8888:
                case PIXEL_FORMAT_RGBA_FP16:
                case PIXEL_FORMAT_RGBA_1010102:
                case PIXEL_FORMAT_RGB_888:
                case PIXEL_FORMAT_RGB_565:
                case PIXEL_FORMAT_BGRA_8888:
                    // We know there's no subsampling of any channels, so we
                    // only need to shrink by a half a pixel.
                    shrinkAmount = 0.5;
                    break;
                default:
                    // If we don't recognize the format, we must assume the
                    // worst case (that we care about), which is YUV420.
                    shrinkAmount = 1.0;
                    break;
            }
        }
        // Only shrink the dimensions that are not the size of the buffer.
        if (cropRect.width() < bufferWidth) {
            tx = (float(cropRect.left) + shrinkAmount) / bufferWidth;
            sx = (float(cropRect.width()) - (2.0f * shrinkAmount)) /
                    bufferWidth;
        }
        if (cropRect.height() < bufferHeight) {
            ty = (float(bufferHeight - cropRect.bottom) + shrinkAmount) /
                    bufferHeight;
            sy = (float(cropRect.height()) - (2.0f * shrinkAmount)) /
                    bufferHeight;
        }
        float crop[16] = {
            sx, 0, 0, 0,
            0, sy, 0, 0,
            0, 0, 1, 0,
            tx, ty, 0, 1,
        };
        mtxMul(mtxBeforeFlipV, crop, xform);
    } else {
        for (int i = 0; i < 16; i++) {
            mtxBeforeFlipV[i] = xform[i];
        }
    }

    // SurfaceFlinger expects the top of its window textures to be at a Y
    // coordinate of 0, so GLConsumer must behave the same way.  We don't
    // want to expose this to applications, however, so we must add an
    // additional vertical flip to the transform after all the other transforms.
    mtxMul(outTransform, mtxFlipV, mtxBeforeFlipV);
}

GLConsumer defines these matrices:

// Transform matrices
static float mtxIdentity[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};

static float mtxFlipH[16] = {
    -1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    1, 0, 0, 1,
};

static float mtxFlipV[16] = {
    1, 0, 0, 0,
    0, -1, 0, 0,
    0, 0, 1, 0,
    0, 1, 0, 1,
};

static float mtxRot90[16] = {
    0, 1, 0, 0,
    -1, 0, 0, 0,
    0, 0, 1, 0,
    1, 0, 0, 1,
};

 

Now let's work out what the front camera's texture matrix looks like under Camera2.

First, a note on the mtxMul function: mtxMul(out, a, b) produces the product b × a, i.e. b is applied first:

  1. static void mtxMul(float out[16], const float a[16], const float b[16]);

Suppose result is the front camera's texture matrix. Then result = mtxBeforeFlipV × mtxFlipV.

Here mtxBeforeFlipV works out to mtxRot90 × mtxFlipH: first a horizontal flip, then a 90-degree rotation (the front camera's buffers carry the FLIP_H and ROT_90 transform flags). A worked product is sketched below.
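As a worked example (my own arithmetic, not from the article; it reads the arrays above row-major, with texture coordinates as row vectors (s, t, 0, 1) multiplying from the left, which matches the "b × a" reading of mtxMul):

$\mathrm{mtxBeforeFlipV} = \mathrm{mtxRot90} \times \mathrm{mtxFlipH} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad (s, t) \mapsto (t, s)$

$\mathrm{result} = \mathrm{mtxBeforeFlipV} \times \mathrm{mtxFlipV} = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}, \qquad (s, t) \mapsto (t, 1 - s)$

That is, under this convention the front camera's final matrix maps (s, t) to (t, 1 - s): the horizontal flip, the 90-degree rotation, and the final vertical flip compose into a single 90-degree rotation of the texture coordinates about the texture center.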
 

 

Now let's look at the back camera.

If we still use the shader from the very beginning (without the transform), the resulting image is wrong. Since the OpenGL texture-coordinate mapping performs one vertical flip, the raw image is flipped relative to what we see.

Looking at the back camera's matrix, the same relation holds: result = mtxBeforeFlipV × mtxFlipV.

Here mtxBeforeFlipV is simply mtxRot90. So for the back camera the image is only rotated 90 degrees clockwise; it needs no additional inversion.

 

 

So we can use this texture transform matrix to compensate the image's orientation, as sketched below.
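A minimal render-loop sketch (assumes a GLSurfaceView.Renderer with mSurfaceTexture and a linked program mProgram whose vertex shader declares the transform uniform shown earlier):

private final float[] mStMatrix = new float[16];

@Override
public void onDrawFrame(GL10 gl) {
    // Latch the newest camera frame and fetch the matrix that goes with it.
    mSurfaceTexture.updateTexImage();
    mSurfaceTexture.getTransformMatrix(mStMatrix);

    // Hand the column-major matrix straight to the shader.
    int loc = GLES20.glGetUniformLocation(mProgram, "transform");
    GLES20.glUniformMatrix4fv(loc, 1, false, mStMatrix, 0);

    // ... bind the external-OES texture and draw the quad from earlier ...
}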
 

Summary:

  1. If the Surface given to the Camera comes from a SurfaceTexture, the image we get from the SurfaceTexture is the raw image the camera captured.
  2. The Camera's preview orientation and its capture orientation are not the same thing.
  3. There are two ways to fix the preview orientation:
     • manually: the front camera needs a horizontal flip followed by a 90-degree clockwise rotation; the back camera only needs a 90-degree clockwise rotation;
     • use the transform matrix provided by the SurfaceTexture.

 
