LearnOpenGL: Camera

   ================================ Preface ===============================

         AndroidLearnOpenGL is this blogger's own collection of LearnOpenGL exercises:

        GitHub: https://github.com/wangyongyao1989/AndroidLearnOpenGL

        Articles in this series:

        1. LearnOpenGL: Getting Started

        2. LearnOpenGL: 3D Rendering

        3. LearnOpenGL: Camera

        4. LearnOpenGL: Lighting

        ============================== Demo ===============================

        [Demo GIF/screenshot of the camera effect]

        ===================================================================

         OpenGL itself has no concept of a camera (Camera), but we can simulate one by moving every object in the scene in the opposite direction, creating the impression that we are moving rather than the scene.
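
        For instance, a minimal sketch of the idea (assuming the usual GLM headers): translating the whole scene 3 units along negative z produces the same image as placing a camera 3 units behind the origin.

    glm::mat4 view = glm::mat4(1.0f);
    // moving the entire scene backwards looks identical to moving a camera forwards
    view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));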

     

I. Camera/View Space:

        When we talk about camera/view space (Camera/View Space), we are talking about all the vertex coordinates of the scene as seen from the camera's perspective, with the camera as the origin of the scene: the view matrix transforms all world coordinates into view coordinates that are relative to the camera's position and direction. To define a camera we need its position in world space, the direction it is looking at, a vector pointing to its right and a vector pointing upwards from it. Careful readers may have noticed that we have actually created a coordinate system with three perpendicular unit axes and the camera's position as the origin.

       

        1. Camera Position:

                Getting the camera position is easy. The camera position is simply a vector in world space that points at the camera's location.

    glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);

        2. Camera Direction:

        The camera's direction refers to which direction it is pointing. Subtracting two vectors gives their difference, and subtracting the camera position vector from the scene's origin vector yields the vector the camera looks along. Since the camera points towards the negative z direction while we want the direction vector (Direction Vector) to point along the camera's positive z-axis, we swap the order of the subtraction and obtain a vector pointing in the camera's positive z direction:

    glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f);
    glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);

        3. Right Axis:

        The next vector we need is a right vector (Right Vector) that represents the positive x-axis of the camera space. To get it we first use a little trick: define an up vector (Up Vector), then take the cross product of that up vector and the direction vector from step 2. The result of a cross product is perpendicular to both operands, so we get a vector pointing in the positive x direction (swapping the operands of the cross product would give a vector pointing in the negative x direction instead):

    glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f); 
    glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection));

        4. Up Axis:

        Now that we have both the x-axis and z-axis vectors, retrieving the vector that points to the camera's positive y-axis is relatively easy: we take the cross product of the direction vector and the right vector:

    glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);

        5. Look At:

        A great thing about matrices is that if you define a coordinate space using 3 perpendicular (or non-linear) axes, you can create a matrix from those 3 axes plus a translation vector, and multiplying any vector by this matrix transforms it into that coordinate space. This is exactly what the LookAt matrix does. Now that we have 3 perpendicular axes and a position defining the camera space, we can create our own LookAt matrix:

        LookAt = \begin{bmatrix} R_x & R_y & R_z & 0 \\ U_x & U_y & U_z & 0 \\ D_x & D_y & D_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -P_x \\ 0 & 1 & 0 & -P_y \\ 0 & 0 & 1 & -P_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

        Here R is the right vector, U is the up vector, D is the direction vector and P is the camera's position vector. Note that the position vector is negated, since we ultimately want to translate the world in the direction opposite to our own movement. Using this LookAt matrix as the view matrix efficiently transforms all world coordinates into the view space we just defined. The LookAt matrix does exactly what its name says: it creates a view matrix that looks at (Look at) a given target. The glm::lookAt function requires a position, a target and an up vector.

    glm::mat4 view;
    view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f), 
           glm::vec3(0.0f, 0.0f, 0.0f), 
           glm::vec3(0.0f, 1.0f, 0.0f));
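
        To make the matrix above concrete, here is a small sketch that assembles the LookAt matrix by hand from the cameraPos, cameraDirection, cameraRight and cameraUp vectors of steps 1-4; it should agree with glm::lookAt up to floating-point error:

    // GLM is column-major, so m[col][row]: the rotation rows are R, U and D
    glm::mat4 rotation = glm::mat4(1.0f);
    rotation[0][0] = cameraRight.x;     rotation[1][0] = cameraRight.y;     rotation[2][0] = cameraRight.z;
    rotation[0][1] = cameraUp.x;        rotation[1][1] = cameraUp.y;        rotation[2][1] = cameraUp.z;
    rotation[0][2] = cameraDirection.x; rotation[1][2] = cameraDirection.y; rotation[2][2] = cameraDirection.z;
    // translate by the negated camera position, then rotate
    glm::mat4 translation = glm::translate(glm::mat4(1.0f), -cameraPos);
    glm::mat4 manualView = rotation * translation; // same result as glm::lookAt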

          

II. Moving the Camera:

       1. Rotating the Camera Around the Scene:

       We create an x and a z coordinate that represent a point on a circle and use them as the camera position. By recomputing the x and z coordinates over time we traverse every point of the circle, so the camera rotates around the scene. We predefine the circle's radius radius and, in every render iteration, use the clock() function to recompute the camera position and recreate the view matrix, letting the camera travel around this circle.

    // create transformations
    glm::mat4 view = glm::mat4(1.0f);           // view matrix (View Matrix)
    glm::mat4 projection = glm::mat4(1.0f);     // projection matrix (Projection Matrix)
    float radius = 10.0f;
    double timeValue = clock() * 10 / CLOCKS_PER_SEC;
    float camX = static_cast<float>(sin(timeValue) * radius);
    float camZ = static_cast<float>(cos(timeValue) * radius);

    // build the view matrix; glm::lookAt needs a position, a target and an up vector
    view = glm::lookAt(glm::vec3(camX, 0.0f, camZ), glm::vec3(0.0f, 0.0f, 0.0f),
                       glm::vec3(0.0f, 1.0f, 0.0f));
    projection = glm::perspective(glm::radians(45.0f)
                , (float) screenW / (float) screenH, 0.1f, 100.0f);
    // pass them to the shaders
    setMat4("projection", projection);
    setMat4("view", view);
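
        One caveat: clock() measures CPU time on many platforms, so the orbit can advance unevenly. A sketch of a wall-clock alternative for the timeValue computation above, using std::chrono:

    #include <chrono>

    // seconds elapsed since the first frame, measured on a steady wall clock
    static const auto startTime = std::chrono::steady_clock::now();
    auto now = std::chrono::steady_clock::now();
    double timeValue = std::chrono::duration<double>(now - startTime).count();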

        2. Free Movement:

        Set the camera position to the previously defined cameraPos. The target direction is the current position plus the front vector we just defined. This ensures that however we move, the camera keeps looking in the target direction.

    view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);

        3. Movement Speed:

        Graphics applications and games usually keep track of a deltatime (Deltatime) variable that stores the time it took to render the last frame. We multiply all velocities by the deltaTime value. The result is that when deltaTime is large, meaning the last frame took longer to render, the velocity of the current frame is scaled up to balance out the time spent rendering. With this approach the camera moves at the same perceived speed whether the computer is fast or slow, so every user gets the same experience.

    float deltaTime = 0.0f; // time between the current frame and the last frame
    float lastFrame = 0.0f; // timestamp of the last frame

    float currentFrame = glfwGetTime();
    deltaTime = currentFrame - lastFrame;
    lastFrame = currentFrame;
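
        Applying deltaTime to the camera movement then looks like the sketch below (the *Pressed flags are hypothetical stand-ins for whatever input state the app tracks):

    float cameraSpeed = 2.5f * deltaTime; // 2.5 units per second, frame-rate independent
    if (forwardPressed)
        cameraPos += cameraSpeed * cameraFront;
    if (backwardPressed)
        cameraPos -= cameraSpeed * cameraFront;
    if (leftPressed)
        cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
    if (rightPressed)
        cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;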

         4. Euler Angles:

        Euler angles (Euler Angle) are 3 values that can represent any rotation in 3D space, introduced by Leonhard Euler in the 18th century. There are 3 Euler angles: pitch (Pitch), yaw (Yaw) and roll (Roll), illustrated in the figures below:

        [Figures: pitch, yaw and roll illustrated]

         The pitch is the angle that describes how much we look up or down, shown in the first figure. The second figure shows the yaw, which represents how much we look to the left or to the right. The roll represents how much we roll the camera, commonly used for spacecraft cameras. Each Euler angle is represented by a single value, and combining all three lets us compute any rotation vector in 3D space.

        For our camera system we only care about pitch and yaw, so we won't discuss roll here. Given a pitch and a yaw we can convert them into a 3D vector representing a new direction vector.

        [Figure: trigonometry of the pitch component]

         Standing on the xz plane and looking towards the y axis, we can compute the length/strength (Strength) of the y direction (how much we look up or down) from the first triangle. From the figure we can see that for a given pitch the y value equals sin θ:

    direction.y = sin(glm::radians(pitch)); // note that we first convert the angle to radians

         This only updates the y value, but careful inspection shows that the x and z components are affected as well. From the triangle we can see their values equal:

    direction.x = cos(glm::radians(pitch));
    direction.z = cos(glm::radians(pitch));

         Let's see if the yaw triangle gives us the components we need as well:

        [Figure: trigonometry of the yaw component]

         Just like with the pitch triangle, we can see the x component depends on cos(yaw) and the z component likewise depends on the sine of the yaw. Adding this to the previous values gives a final direction vector based on pitch and yaw:

    // Translator's note: direction represents the camera's front axis (Front),
    // which points opposite to the direction vector of the second camera in this article's first figure
    direction.x = cos(glm::radians(pitch)) * cos(glm::radians(yaw));
    direction.y = sin(glm::radians(pitch));
    direction.z = cos(glm::radians(pitch)) * sin(glm::radians(yaw));
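
         Normalizing the result and feeding it into lookAt ties the Euler angles back to the free-movement camera from section II.2 (a minimal sketch):

    cameraFront = glm::normalize(direction); // keep the front vector unit length
    view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);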

         

III. The Camera Class:

        The Camera3D class supplies default values in its constructors and computes the camera's front, right and up vectors on initialization. Combined with Look At, it exposes GetViewMatrix() for external callers.

        1. Camera3D.h:

#ifndef ANDROIDLEARNOPENGL_CAMERA3D_H
#define ANDROIDLEARNOPENGL_CAMERA3D_H

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

enum Camera_Movement {
    FORWARD,
    BACKWARD,
    LEFT,
    RIGHT
};

// Default camera values
const float YAW = -90.0f;
const float PITCH = 0.0f;
const float SPEED = 1.5f;
const float SENSITIVITY = 0.1f;
const float ZOOM = 45.0f;

using namespace glm;

class Camera3D {

private:
    // camera Attributes
    vec3 Position;
    vec3 Front;
    vec3 Up;
    vec3 Right;
    vec3 WorldUp;
    // euler Angles
    float Yaw;
    float Pitch;
    // camera options
    float MovementSpeed;
    float Sensitivity;

    // calculates the front vector from the Camera's (updated) Euler Angles
    void updateCameraVectors();

public:
    float Zoom;

    // constructor with vectors
    Camera3D(
            vec3 position = vec3(0.0f, 0.0f, 10.0f),
            vec3 up = vec3(0.0f, 1.0f, 0.0f),
            float yaw = YAW,
            float pitch = PITCH);

    // constructor with scalar values
    Camera3D(float posX, float posY, float posZ
            , float upX, float upY, float upZ
            , float yaw, float pitch);

    mat4 GetViewMatrix();

    void ProcessKeyboard(Camera_Movement direction, float deltaTime);

    void ProcessXYMovement(float xoffset, float yoffset, bool constrainPitch = true);

    void ProcessScroll(float yoffset);


};

#endif //ANDROIDLEARNOPENGL_CAMERA3D_H

        2. Camera3D.cpp:

#include "Camera3D.h"

Camera3D::Camera3D(vec3 position, vec3 up, float yaw, float pitch) :
        Front(vec3(0.0f, 0.0f, -1.0f))
        , MovementSpeed(SPEED)
        , Sensitivity(SENSITIVITY),
        Zoom(ZOOM) {
    Position = position;
    WorldUp = up;
    Yaw = yaw;
    Pitch = pitch;
    updateCameraVectors();
}

Camera3D::Camera3D(float posX, float posY, float posZ
                    , float upX, float upY, float upZ, float yaw
                    , float pitch) 
                    : Front(glm::vec3(0.0f, 0.0f, -1.0f))
                    , MovementSpeed(SPEED)
                    , Sensitivity(SENSITIVITY), Zoom(ZOOM) {
    Position = glm::vec3(posX, posY, posZ);
    WorldUp = glm::vec3(upX, upY, upZ);
    Yaw = yaw;
    Pitch = pitch;
    updateCameraVectors();
}

mat4 Camera3D::GetViewMatrix() {

    return lookAt(Position, Position + Front, Up);
}

void Camera3D::ProcessKeyboard(Camera_Movement direction, float deltaTime) {
    float velocity = MovementSpeed * deltaTime;
    if (direction == FORWARD)
        Position += Front * velocity;
    if (direction == BACKWARD)
        Position -= Front * velocity;
    if (direction == LEFT)
        Position -= Right * velocity;
    if (direction == RIGHT)
        Position += Right * velocity;
}

void Camera3D::ProcessXYMovement(float xoffset, float yoffset, bool constrainPitch) {
    xoffset *= Sensitivity;
    yoffset *= Sensitivity;

    Yaw += xoffset;
    Pitch += yoffset;

    // make sure that when pitch is out of bounds, screen doesn't get flipped
    if (constrainPitch) {
        if (Pitch > 89.0f)
            Pitch = 89.0f;
        if (Pitch < -89.0f)
            Pitch = -89.0f;
    }
    Zoom = 45.0f; // reset the zoom back to its 45-degree default while dragging
    // update Front, Right and Up Vectors using the updated Euler angles
    updateCameraVectors();
}

void Camera3D::ProcessScroll(float yoffset) {
    Zoom -= (float) yoffset;
    if (Zoom < 25.0f)
        Zoom = 25.0f;
    if (Zoom > 100.0f)
        Zoom = 100.0f;
}

void Camera3D::updateCameraVectors() {
    // calculate the new Front vector
    glm::vec3 front;
    front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    front.y = sin(glm::radians(Pitch));
    front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    Front = glm::normalize(front);
    // also re-calculate the Right and Up vector
    // normalize the vectors, because their length gets closer to 0 the more you look up
    // or down which results in slower movement.
    Right = glm::normalize(glm::cross(Front, WorldUp));
    Up = glm::normalize(glm::cross(Right, Front));
}

 IV. Implementation:

           On top of the code framework from the previous article, LearnOpenGL: 3D Rendering, we add the camera class to implement "passing the view's gesture events from the Java layer to achieve up/down/left/right dragging plus pinch-zoom".

        1. First, capture the swipe and pinch-zoom events in the Android-layer View:

package com.wangyongyao.androidlearnopengl.view;

import android.content.Context;
import android.graphics.Matrix;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.util.Log;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;


import com.wangyongyao.androidlearnopengl.JniCall;
import com.wangyongyao.androidlearnopengl.utils.OpenGLUtil;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GL3DCameraView extends GLSurfaceView implements GLSurfaceView.Renderer {

    private GestureDetector gestureDetector;
    private ScaleGestureDetector scaleGestureDetector;

    private static String TAG = GL3DCameraView.class.getSimpleName();
    private JniCall mJniCall;
    private Context mContext;
    private boolean isScaleGesture;

    private float downX;
    private float downY;


    public GL3DCameraView(Context context, JniCall jniCall) {
        super(context);
        mContext = context;
        mJniCall = jniCall;
        init();
    }

    public GL3DCameraView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        init();
    }

    private void init() {
        getHolder().addCallback(this);
        setEGLContextClientVersion(3);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        String fragPath = OpenGLUtil.getModelFilePath(mContext, "camera_3d_fragment.glsl");
        String vertexPath = OpenGLUtil.getModelFilePath(mContext, "camera_3d_vertex.glsl");
        String picSrc1 = OpenGLUtil.getModelFilePath(mContext, "yao.jpg");
        String picSrc2 = OpenGLUtil.getModelFilePath(mContext, "awesomeface.png");

        if (mJniCall != null) {
            mJniCall.setCameraGLSLPath(fragPath, vertexPath, picSrc1, picSrc2);
        }
        setRenderer(this);

        gestureDetector = new GestureDetector(getContext(), new GestureDetector.SimpleOnGestureListener());
        scaleGestureDetector = new ScaleGestureDetector(getContext()
                , new ScaleGestureDetector.OnScaleGestureListener() {
            @Override
            public boolean onScale(ScaleGestureDetector detector) {
                // handle the scale event
                float scaleFactor = detector.getScaleFactor();
//                Log.e(TAG, "onScale scaleFactor: " + scaleFactor
//                        + "==getFocusX:" + detector.getFocusX()
//                        + "===getFocusY" + detector.getFocusY());
                mJniCall.CameraOnScale(scaleFactor, detector.getFocusX()
                        , detector.getFocusY(), 2);
                return true;
            }

            @Override
            public boolean onScaleBegin(ScaleGestureDetector detector) {
                // scale gesture started
//                Log.e(TAG, "onScaleBegin: " + detector);
                mJniCall.CameraOnScale(detector.getScaleFactor(), detector.getFocusX()
                        , detector.getFocusY(), 1);
                return true;
            }

            @Override
            public void onScaleEnd(ScaleGestureDetector detector) {
                // scale gesture ended
//                Log.e(TAG, "onScaleEnd: " + detector);
                mJniCall.CameraOnScale(detector.getScaleFactor(), detector.getFocusX()
                        , detector.getFocusY(), 3);
                isScaleGesture = false;
            }
        });
    }


    public void onDrawFrame(GL10 gl) {
        if (mJniCall != null)
            mJniCall.CameraOpenGLRenderFrame();
    }

    public void onSurfaceChanged(GL10 gl, int width, int height) {
        if (mJniCall != null)
            mJniCall.initCamera3DOpenGl(width, height);
    }


    @Override
    public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {

    }


    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (isScaleGesture) {
            gestureDetector.onTouchEvent(event);
            scaleGestureDetector.onTouchEvent(event);
            return true;
        }
        switch (event.getAction()) {
            case MotionEvent.ACTION_POINTER_2_DOWN: {
                isScaleGesture = true;
            }
            break;
            case MotionEvent.ACTION_POINTER_2_UP: {
                isScaleGesture = false;
            }
            break;
            case MotionEvent.ACTION_DOWN: {
//                Log.e(TAG, "onTouchEvent: " + event.getAction());
                downX = event.getX();
                downY = event.getY();
                mJniCall.CameraMoveXY(0, 0, 1);
            }
            break;
            case MotionEvent.ACTION_MOVE: {
//                Log.e(TAG, "onTouchEvent: " + event.getAction());
                float dx = event.getX() - downX;
                float dy = event.getY() - downY;
//                Log.e(TAG, "ACTION_MOVE:dx= "
//                        + dx + "==dy:" + dy);
                mJniCall.CameraMoveXY(dx, dy, 2);
            }
            break;
            case MotionEvent.ACTION_UP: {
//                Log.e(TAG, "onTouchEvent: " + event.getAction());
                downX = 0;
                downY = 0;
                mJniCall.CameraMoveXY(0, 0, 3);
            }
            break;
        }


        return true;
    }


}

         2. Pass the swipe and pinch-zoom events on to OpenglesCamera3D.cpp:

void OpenglesCamera3D::setMoveXY(float dx, float dy, int actionMode) {
    LOGI("setMoveXY dx:%f,dy:%f,actionMode:%d", dx, dy, actionMode);
    float xoffset = dx - lastX;
    float yoffset = lastY - dy; // reversed since y-coordinates go from bottom to top
    lastX = dx;
    lastY = dy;
    mActionMode = actionMode;
    mCamera.ProcessXYMovement(xoffset, yoffset);
}

void OpenglesCamera3D::setOnScale(float scaleFactor, float focusX
            , float focusY, int actionMode) {
//    LOGI("setOnScale scaleFactor:%f,focusX:%f,focusY:%f,actionMode:%d",
//         scaleFactor, focusX, focusY, actionMode);
//    LOGI("setOnScale scaleFactor:%f", scaleFactor);
    float scale;
    if (actionMode == 1 || actionMode == 3) {
        scale = 45.0f; // gesture begin/end: fall back to the 45-degree baseline
    } else {
        if (scaleFactor > 1) {
            scale = (scaleFactor - 1) * 1000 + 45;
        } else {
            scale = 45 - (1 - scaleFactor) * 1000; // mirror the zoom-in branch around the 45 baseline
        }
    }
    LOGI("setOnScale scale:%f", scale);
    mCamera.ProcessScroll(scale);
}

        3. In renderFrame() of OpenglesCamera3D.cpp, convert the Camera3D camera data and wire it up to the view and projection matrices:

void OpenglesCamera3D::renderFrame() {
    glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
    // also clear the depth buffer now!
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); 
    // bind Texture
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture1);
    // use the shader program
    glUseProgram(gProgram);
    checkGlError("glUseProgram");
    // enable depth testing
    glEnable(GL_DEPTH_TEST);

    // create transformations
    // view matrix (View Matrix)
    glm::mat4 view = glm::mat4(1.0f);
    // projection matrix (Projection Matrix)
    glm::mat4 projection = glm::mat4(1.0f);

//    float radius = 10.0f;
//    timeValue = 10 / CLOCKS_PER_SEC;
//    float camX = static_cast<float>(sin(timeValue) * radius);
//    float camZ = static_cast<float>(cos(timeValue) * radius);
//    LOGI("setMoveXY camX:%f,camZ:%f", camX,camZ);
    // (old orbit version) build the view matrix; glm::lookAt needs a position, a target and an up vector
//    view = glm::lookAt(glm::vec3(camX, 0.0f, camZ), glm::vec3(0.0f, 0.0f, 0.0f),
//                       glm::vec3(0.0f, 1.0f, 0.0f));
//    projection = glm::perspective(glm::radians(45.0f), (float) screenW / (float) screenH, 0.1f,
//                                  100.0f);

    // the view matrix now comes straight from the Camera3D class
    view = mCamera.GetViewMatrix();
    projection = glm::perspective(glm::radians(mCamera.Zoom)
                                , (float) screenW / (float) screenH,
                                  0.1f,
                                  100.0f);
    // pass them to the shaders
    setMat4("projection", projection);
    setMat4("view", view);

    // render boxes
    glBindVertexArray(VAO);
    for (unsigned int i = 0; i < 10; i++) {
        // calculate the model matrix for each object and pass it to shader before drawing
        glm::mat4 model = glm::mat4(1.0f);  // model matrix (Model Matrix)
        // translate the model to its position in the scene
        model = glm::translate(model, cameraCubePositions[i]);
        float angle = 20.0f * i;
        if (i < 6) {
            double timeValue = clock() * 10 / CLOCKS_PER_SEC;
            angle = timeValue * 25.0f;
        }
        // apply the rotation matrix to the model
        model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
        setMat4("model", model);
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }

    checkGlError("glDrawArrays");
}
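
        For reference, the JNI glue between JniCall and OpenglesCamera3D might look roughly like the sketch below; the exported names follow the Java methods used above, while the camera3D pointer and the exact signatures are assumptions rather than the repo's actual code:

#include <jni.h>

// assumed global instance, created when the GL surface is set up
static OpenglesCamera3D *camera3D = nullptr;

extern "C"
JNIEXPORT void JNICALL
Java_com_wangyongyao_androidlearnopengl_JniCall_CameraMoveXY(
        JNIEnv *env, jobject thiz, jfloat dx, jfloat dy, jint actionMode) {
    if (camera3D) camera3D->setMoveXY(dx, dy, actionMode);
}

extern "C"
JNIEXPORT void JNICALL
Java_com_wangyongyao_androidlearnopengl_JniCall_CameraOnScale(
        JNIEnv *env, jobject thiz, jfloat scaleFactor, jfloat focusX,
        jfloat focusY, jint actionMode) {
    if (camera3D) camera3D->setOnScale(scaleFactor, focusX, focusY, actionMode);
}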
