This article analyzes the lifecycle design of the Cardboard VR SDK from two angles.
Contents:
I. Lifecycle calls on the application side
II. Lifecycle implementation on the SDK side
I. Lifecycle calls on the application side
(1) The main MainActivity holds a member mOverlayView of type CardboardOverlayView; this is the overlay view.
(2) CardboardOverlayView holds mLeftView and mRightView of type CardboardOverlayEyeView, used to display the left and right eyes.
(3) CardboardOverlayEyeView in turn contains an imageView of type ImageView and a textView of type TextView.
That is the structure of the overlay layer. It only masks out a field-of-view region, so it is secondary here.
(4) The image display view is a full-screen cardboardView of type CardboardView, which renders the actual scene.
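For orientation, the wiring in the demo's MainActivity.onCreate typically looks like the following (a minimal sketch; the layout name and view IDs are assumptions based on the Cardboard sample):
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Layout and view IDs are assumptions based on the Cardboard demo.
    setContentView(R.layout.common_ui);
    CardboardView cardboardView = (CardboardView) findViewById(R.id.cardboard_view);
    // MainActivity implements CardboardView.StereoRenderer, so "this" is the renderer.
    cardboardView.setRenderer(this);
    setCardboardView(cardboardView);
    mOverlayView = (CardboardOverlayView) findViewById(R.id.overlay);
}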
Note:
MainActivity implements the StereoRenderer interface.
MainActivity's this reference is passed into the mStereoRenderer member of the StereoRendererHelper object constructed inside CardboardView.
That StereoRendererHelper object is in turn passed into the mRenderer member of RendererHelper, a class that implements GLSurfaceView.Renderer.
So when RendererHelper is finally handed to the GLSurfaceView via setRenderer,
the MainActivity that implements StereoRenderer is effectively handed to the GLSurfaceView as well.
(5) As analyzed above, all MainActivity does is pass its this reference to cardboardView; cardboardView uses it to construct a
StereoRendererHelper, which ends up inside a RendererHelper object that is handed to the GLSurfaceView via setRenderer,
so that cardboardView can invoke, at the right moments, the StereoRenderer callbacks the user implemented in MainActivity.
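A condensed sketch of that wrapping chain (class and field names follow the text above; the method body is a simplified assumption, not the SDK's exact code):
// Inside CardboardView (sketch of the delegation chain):
public void setRenderer(StereoRenderer renderer) {
    // Wrap the user's StereoRenderer in a CardboardView.Renderer...
    Renderer stereoHelper = new StereoRendererHelper(renderer);  // becomes mStereoRenderer
    // ...then wrap that in a GLSurfaceView.Renderer and install it.
    RendererHelper glHelper = new RendererHelper(stereoHelper);  // becomes mRenderer
    super.setRenderer(glHelper); // CardboardView extends GLSurfaceView
}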
(6) Because RendererHelper implements GLSurfaceView.Renderer, whenever the GLSurfaceView reaches the corresponding point in its
lifecycle, the GLSurfaceView.Renderer lifecycle methods implemented in RendererHelper are triggered automatically.
These are:
void onSurfaceCreated(GL10 gl, EGLConfig config);
void onSurfaceChanged(GL10 gl, int width, int height);
void onDrawFrame(GL10 gl);
This is the first layer of calls, built on the existing lifecycle of Android's GLSurfaceView.
The second layer: for example, onSurfaceCreated calls
mRenderer.onSurfaceCreated(config);
onSurfaceChanged calls
mRenderer.onSurfaceChanged(width, height);
onDrawFrame calls
mRenderer.onDrawFrame(mHeadTransform, mLeftEye, mRightEye);
mRenderer.onFinishFrame(mMonocular.getViewport());
and shutdown calls
mRenderer.onRendererShutdown();
The CardboardView.Renderer interface can therefore be viewed as an extension of the GLSurfaceView lifecycle methods.
The mRenderer object being called here is clearly the CardboardView.Renderer implementation inside StereoRendererHelper.
Looking at the StereoRendererHelper implementation, it calls through to its mStereoRenderer member,
which is exactly the implementation in MainActivity.
So the second layer effectively adds the StereoRenderer lifecycle on top of CardboardView.Renderer.
For example, when the CardboardView.Renderer onDrawFrame lifecycle fires (StereoRendererHelper.onDrawFrame),
it calls StereoRenderer.onNewFrame once and StereoRenderer.onDrawEye twice to draw the left and right eyes.
In short, the SDK's entire lifecycle architecture consists of these two extensions, CardboardView.Renderer and StereoRenderer, layered on top of GLSurfaceView.Renderer.
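The two delegation layers, condensed into one place (a sketch assembled from the calls listed above, not a verbatim copy of the SDK source):
// Layer 1: RendererHelper implements GLSurfaceView.Renderer and forwards
// into the CardboardView.Renderer lifecycle.
class RendererHelper implements GLSurfaceView.Renderer {
    private final CardboardView.Renderer mRenderer; // the StereoRendererHelper
    public void onDrawFrame(GL10 gl) {
        // Head tracking and per-eye setup omitted; see Part II, item (14).
        mRenderer.onDrawFrame(mHeadTransform, mLeftEye, mRightEye);
        mRenderer.onFinishFrame(mMonocular.getViewport());
    }
}
// Layer 2: StereoRendererHelper implements CardboardView.Renderer and forwards
// into the user's StereoRenderer.
class StereoRendererHelper implements CardboardView.Renderer {
    private final CardboardView.StereoRenderer mStereoRenderer; // the MainActivity
    public void onDrawFrame(HeadTransform head, EyeParams leftEye, EyeParams rightEye) {
        // Viewport/scissor handling omitted; see Part II, item (9).
        mStereoRenderer.onNewFrame(head);
        mStereoRenderer.onDrawEye(leftEye.getTransform());
        mStereoRenderer.onDrawEye(rightEye.getTransform());
    }
}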
The overall sequence diagram: [figure not reproduced here]
II. Lifecycle implementation on the SDK side
(1) CardboardView.StereoRenderer.onSurfaceCreated
Allocates the buffers for the cube and floor geometry, loads and links the shaders, enables depth testing, and places the cube and floor in the world.
/**
* Creates the buffers we use to store information about the 3D world. OpenGL doesn't use Java
* arrays, but rather needs data in a format it can understand. Hence we use ByteBuffers.
* @param config The EGL configuration used when creating the surface.
*/
@Override
public void onSurfaceCreated(EGLConfig config) {
    Log.i(TAG, "onSurfaceCreated");
    // Set the clear color
    GLES20.glClearColor(0.1f, 0.1f, 0.1f, 0.5f); // Dark background so text shows up well

    // Create a ByteBuffer to hold the cube vertex coordinates
    ByteBuffer bbVertices = ByteBuffer.allocateDirect(DATA.CUBE_COORDS.length * 4);
    bbVertices.order(ByteOrder.nativeOrder());
    mCubeVertices = bbVertices.asFloatBuffer();
    mCubeVertices.put(DATA.CUBE_COORDS);
    mCubeVertices.position(0);

    // Create a ByteBuffer to hold the cube colors
    ByteBuffer bbColors = ByteBuffer.allocateDirect(DATA.CUBE_COLORS.length * 4);
    bbColors.order(ByteOrder.nativeOrder());
    mCubeColors = bbColors.asFloatBuffer();
    mCubeColors.put(DATA.CUBE_COLORS);
    mCubeColors.position(0);

    // Create a ByteBuffer to hold the cube's "found" colors (all six faces yellow)
    ByteBuffer bbFoundColors = ByteBuffer.allocateDirect(DATA.CUBE_FOUND_COLORS.length * 4);
    bbFoundColors.order(ByteOrder.nativeOrder());
    mCubeFoundColors = bbFoundColors.asFloatBuffer();
    mCubeFoundColors.put(DATA.CUBE_FOUND_COLORS);
    mCubeFoundColors.position(0);

    // Create a ByteBuffer to hold the cube normals
    ByteBuffer bbNormals = ByteBuffer.allocateDirect(DATA.CUBE_NORMALS.length * 4);
    bbNormals.order(ByteOrder.nativeOrder());
    mCubeNormals = bbNormals.asFloatBuffer();
    mCubeNormals.put(DATA.CUBE_NORMALS);
    mCubeNormals.position(0);

    // make a floor
    // Create a ByteBuffer to hold the floor vertex coordinates
    ByteBuffer bbFloorVertices = ByteBuffer.allocateDirect(DATA.FLOOR_COORDS.length * 4);
    bbFloorVertices.order(ByteOrder.nativeOrder());
    mFloorVertices = bbFloorVertices.asFloatBuffer();
    mFloorVertices.put(DATA.FLOOR_COORDS);
    mFloorVertices.position(0);

    // Create a ByteBuffer to hold the floor normals
    ByteBuffer bbFloorNormals = ByteBuffer.allocateDirect(DATA.FLOOR_NORMALS.length * 4);
    bbFloorNormals.order(ByteOrder.nativeOrder());
    mFloorNormals = bbFloorNormals.asFloatBuffer();
    mFloorNormals.put(DATA.FLOOR_NORMALS);
    mFloorNormals.position(0);

    // Create a ByteBuffer to hold the floor colors
    ByteBuffer bbFloorColors = ByteBuffer.allocateDirect(DATA.FLOOR_COLORS.length * 4);
    bbFloorColors.order(ByteOrder.nativeOrder());
    mFloorColors = bbFloorColors.asFloatBuffer();
    mFloorColors.put(DATA.FLOOR_COLORS);
    mFloorColors.position(0);

    // Load the vertex shader and fragment shader
    int vertexShader = loadGLShader(GLES20.GL_VERTEX_SHADER, R.raw.light_vertex);
    int gridShader = loadGLShader(GLES20.GL_FRAGMENT_SHADER, R.raw.grid_fragment);

    // Create the GL program, attach the shaders, and link it
    mGlProgram = GLES20.glCreateProgram();
    GLES20.glAttachShader(mGlProgram, vertexShader);
    GLES20.glAttachShader(mGlProgram, gridShader);
    GLES20.glLinkProgram(mGlProgram);

    // Enable depth testing
    GLES20.glEnable(GLES20.GL_DEPTH_TEST);

    // Set the cube's initial position in space
    // Object first appears directly in front of user
    Matrix.setIdentityM(mModelCube, 0);
    Matrix.translateM(mModelCube, 0, 0, 0, -mObjectDistance);

    // Set the floor's initial position in space
    Matrix.setIdentityM(mModelFloor, 0);
    Matrix.translateM(mModelFloor, 0, 0, -mFloorDepth, 0); // Floor appears below user

    checkGLError("onSurfaceCreated");
}
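The loadGLShader helper used above is not quoted in this section; in the Cardboard demo it reads a raw resource and compiles it into a shader object. A minimal sketch of such a helper (readRawTextFile is an assumed helper that returns the resource's text):
private int loadGLShader(int type, int resId) {
    String code = readRawTextFile(resId); // assumed helper: reads R.raw.* into a String
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, code);
    GLES20.glCompileShader(shader);

    // Check the compile status and clean up on failure.
    final int[] compileStatus = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
    if (compileStatus[0] == 0) {
        Log.e(TAG, "Error compiling shader: " + GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        shader = 0;
    }
    return shader;
}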
(2) CardboardView.StereoRenderer.onSurfaceChanged
Does nothing.
@Override
public void onSurfaceChanged(int width, int height) {
    Log.i(TAG, "onSurfaceChanged");
}
(3) CardboardView.StereoRenderer.onNewFrame
Looks up the shader variable handles, rotates the cube's model matrix, and builds the camera view matrix.
It also reads the current head pose into mHeadView. (Note that querying the uniform locations every frame is redundant; they could be cached once in onSurfaceCreated.)
/**
 * Prepares OpenGL ES before we draw a frame.
 * @param headTransform The head transformation in the new frame.
 */
@Override
public void onNewFrame(HeadTransform headTransform) {
    GLES20.glUseProgram(mGlProgram);

    mModelViewProjectionParam = GLES20.glGetUniformLocation(mGlProgram, "u_MVP");
    mLightPosParam = GLES20.glGetUniformLocation(mGlProgram, "u_LightPos");
    mModelViewParam = GLES20.glGetUniformLocation(mGlProgram, "u_MVMatrix");
    mModelParam = GLES20.glGetUniformLocation(mGlProgram, "u_Model");
    mIsFloorParam = GLES20.glGetUniformLocation(mGlProgram, "u_IsFloor");

    // Build the Model part of the ModelView matrix.
    Matrix.rotateM(mModelCube, 0, TIME_DELTA, 0.5f, 0.5f, 1.0f);

    // Build the camera matrix and apply it to the ModelView.
    Matrix.setLookAtM(mCamera, 0, 0.0f, 0.0f, CAMERA_Z, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);

    headTransform.getHeadView(mHeadView, 0);

    checkGLError("onReadyToDraw");
}
(4) CardboardView.StereoRenderer.onDrawEye
Draws the image seen by one eye; everything eye-specific is packed into the EyeTransform transform parameter.
Three vertex attribute arrays are enabled here: a_Position, a_Normal, and a_Color, all coming from the shader.
mLightPosParam was looked up in onNewFrame; the actual light position is only uploaded here, once the per-eye view is known.
The cube and the floor are then drawn, as before. (A sketch of drawCube follows the method.)
/**
 * Draws a frame for an eye. The transformation for that eye (from the camera) is passed in as
 * a parameter.
 * @param transform The transformations to apply to render this eye.
 */
@Override
public void onDrawEye(EyeTransform transform) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

    mPositionParam = GLES20.glGetAttribLocation(mGlProgram, "a_Position");
    mNormalParam = GLES20.glGetAttribLocation(mGlProgram, "a_Normal");
    mColorParam = GLES20.glGetAttribLocation(mGlProgram, "a_Color");

    GLES20.glEnableVertexAttribArray(mPositionParam);
    GLES20.glEnableVertexAttribArray(mNormalParam);
    GLES20.glEnableVertexAttribArray(mColorParam);
    checkGLError("mColorParam");

    // Apply the eye transformation to the camera.
    Matrix.multiplyMM(mView, 0, transform.getEyeView(), 0, mCamera, 0);

    // Set the position of the light
    Matrix.multiplyMV(mLightPosInEyeSpace, 0, mView, 0, mLightPosInWorldSpace, 0);
    GLES20.glUniform3f(mLightPosParam, mLightPosInEyeSpace[0], mLightPosInEyeSpace[1],
            mLightPosInEyeSpace[2]);

    // Build the ModelView and ModelViewProjection matrices
    // for calculating cube position and light.
    Matrix.multiplyMM(mModelView, 0, mView, 0, mModelCube, 0);
    Matrix.multiplyMM(mModelViewProjection, 0, transform.getPerspective(), 0, mModelView, 0);
    drawCube();

    // Set mModelView for the floor, so we draw floor in the correct location
    Matrix.multiplyMM(mModelView, 0, mView, 0, mModelFloor, 0);
    Matrix.multiplyMM(mModelViewProjection, 0, transform.getPerspective(), 0, mModelView, 0);
    drawFloor(transform.getPerspective());
}
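drawCube and drawFloor are not quoted in this section. For orientation, here is a sketch of what drawCube looks like in the demo (the exact constants and the isLookingAtObject helper are assumptions based on the sample; the uniform and attribute handles match those looked up above):
public void drawCube() {
    // Tell the shader this is not the floor.
    GLES20.glUniform1f(mIsFloorParam, 0f);

    // Upload the model, model-view, and MVP matrices.
    GLES20.glUniformMatrix4fv(mModelParam, 1, false, mModelCube, 0);
    GLES20.glUniformMatrix4fv(mModelViewParam, 1, false, mModelView, 0);
    GLES20.glUniformMatrix4fv(mModelViewProjectionParam, 1, false, mModelViewProjection, 0);

    // Point the attributes at the buffers built in onSurfaceCreated.
    GLES20.glVertexAttribPointer(mPositionParam, 3, GLES20.GL_FLOAT, false, 0, mCubeVertices);
    GLES20.glVertexAttribPointer(mNormalParam, 3, GLES20.GL_FLOAT, false, 0, mCubeNormals);
    // Switch to the "found" colors when the user is looking at the cube (assumed helper).
    GLES20.glVertexAttribPointer(mColorParam, 4, GLES20.GL_FLOAT, false, 0,
            isLookingAtObject() ? mCubeFoundColors : mCubeColors);

    // 36 vertices: 6 faces x 2 triangles x 3 vertices.
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 36);
    checkGLError("Drawing cube");
}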
(5) CardboardView.StereoRenderer.onFinishFrame
Does nothing.
@Override
public void onFinishFrame(Viewport viewport) {
}
(6) CardboardView.StereoRenderer.onRendererShutdown
Also does nothing, apart from a log line.
@Override
public void onRendererShutdown() {
    Log.i(TAG, "onRendererShutdown");
}
So the StereoRenderer implementation in MainActivity does not do very much: some OpenGL setup plus the drawing of the actual content.
This matches the intent of StereoRenderer as the interface handed to application developers: onSurfaceCreated creates the data structures OpenGL will use, onNewFrame does per-frame preprocessing,
and onDrawEye draws the actual content using the per-eye view matrices passed in.
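To make that division of labor concrete, here is a minimal skeleton of a StereoRenderer implementation (a sketch, not code from the demo; it only shows where each kind of work belongs):
public class MyRenderer implements CardboardView.StereoRenderer {
    @Override
    public void onSurfaceCreated(EGLConfig config) {
        // One-time setup: allocate buffers, compile shaders, link the GL program.
    }

    @Override
    public void onSurfaceChanged(int width, int height) {
        // React to per-eye surface size changes if needed.
    }

    @Override
    public void onNewFrame(HeadTransform headTransform) {
        // Per-frame preprocessing: animate models, read the head pose.
    }

    @Override
    public void onDrawEye(EyeTransform transform) {
        // Called twice per frame in VR mode; draw the scene from this eye's view.
    }

    @Override
    public void onFinishFrame(Viewport viewport) {
        // Per-frame cleanup, if any.
    }

    @Override
    public void onRendererShutdown() {
        // Release GL resources.
    }
}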
That completes the analysis of StereoRenderer, the interface left to application developers.
Now for the remaining CardboardView.Renderer interface, which is implemented in CardboardView.StereoRendererHelper.
(7) CardboardView.Renderer.onSurfaceCreated
Simply calls through to the user's StereoRenderer.onSurfaceCreated implementation; no extra work, just a wrapper.
public void onSurfaceCreated(EGLConfig config) {
    mStereoRenderer.onSurfaceCreated(config);
}
(8) CardboardView.Renderer.onSurfaceChanged
This checks the VR mode: in VR mode the width passed to the user is halved, since each eye only gets half of the screen. Because the demo's StereoRenderer.onSurfaceChanged does nothing,
this has no visible effect in this demo.
public void onSurfaceChanged(int width, int height) {
    if (mVRMode) {
        mStereoRenderer.onSurfaceChanged(width / 2, height);
    } else {
        mStereoRenderer.onSurfaceChanged(width, height);
    }
}
(9) CardboardView.Renderer.onDrawFrame
This calls the user's onNewFrame, then for each eye sets the viewport and scissor rectangle and calls the user's onDrawEye.
public void onDrawFrame(HeadTransform head, EyeParams leftEye, EyeParams rightEye) {
    mStereoRenderer.onNewFrame(head);
    // 3089 in the decompiled source is GLES20.GL_SCISSOR_TEST
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);

    // Restrict drawing to the left eye's half of the screen, then draw it
    leftEye.getViewport().setGLViewport();
    leftEye.getViewport().setGLScissor();
    mStereoRenderer.onDrawEye(leftEye.getTransform());

    // In monocular mode there is no right eye
    if (rightEye == null) {
        return;
    }

    rightEye.getViewport().setGLViewport();
    rightEye.getViewport().setGLScissor();
    mStereoRenderer.onDrawEye(rightEye.getTransform());
}
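setGLViewport and setGLScissor are small helpers on Viewport; presumably they just forward the stored rectangle to GL, along these lines (a sketch, assuming Viewport exposes the x, y, width, and height fields that item (14) below reads directly):
// Sketch of the Viewport helpers:
public void setGLViewport() {
    GLES20.glViewport(x, y, width, height);
}

public void setGLScissor() {
    GLES20.glScissor(x, y, width, height);
}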
(10) CardboardView.Renderer.onFinishFrame
Does some cleanup at the end of the frame: restores the full-screen viewport and scissor rectangle, then calls the user's onFinishFrame.
public void onFinishFrame(Viewport viewport) {
    viewport.setGLViewport();
    viewport.setGLScissor();
    mStereoRenderer.onFinishFrame(viewport);
}
(11) CardboardView.Renderer.onRendererShutdown
Directly calls the user's implementation.
public void onRendererShutdown() {
    mStereoRenderer.onRendererShutdown();
}
That covers the whole interface implementation, and the layering of responsibilities is very clean. So far, though, this has only covered how the CardboardView.Renderer interface drives CardboardView.StereoRenderer. To analyze the topmost layer, we need to look at how
RendererHelper's GLSurfaceView.Renderer implementation inside CardboardView drives the CardboardView.Renderer interface.
This is the third part.
(12) GLSurfaceView.Renderer.onSurfaceCreated
Checks whether the view is shutting down, then calls straight through to the CardboardView.Renderer implementation.
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    if (mShuttingDown) {
        return;
    }
    mRenderer.onSurfaceCreated(config);
}
(13) GLSurfaceView.Renderer.onSurfaceChanged
Checks whether the view is shutting down, validates the surface size against the expected screen size, then calls through to the CardboardView.Renderer implementation.
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    if (mShuttingDown) {
        return;
    }
    ScreenParams screen = mHmd.getScreen();
    if ((width != screen.getWidth()) || (height != screen.getHeight())) {
        if (!mInvalidSurfaceSize) {
            // 16384 in the decompiled source is GLES20.GL_COLOR_BUFFER_BIT
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            Log.w("CardboardView", "Surface size " + width + "x" + height
                    + " does not match the expected screen size "
                    + screen.getWidth() + "x" + screen.getHeight()
                    + ". Rendering is disabled.");
        }
        mInvalidSurfaceSize = true;
    } else {
        mInvalidSurfaceSize = false;
    }
    mRenderer.onSurfaceChanged(width, height);
}
(14) GLSurfaceView.Renderer.onDrawFrame
First checks whether the view is shutting down or the surface size is invalid, then fetches the latest head pose, builds the per-eye transforms and projections, and finally drives the CardboardView.Renderer draw calls, with optional distortion correction.
@Override
public void onDrawFrame(GL10 gl) {
    // Bail out if the view is shutting down or the surface size is invalid
    if ((mShuttingDown) || (mInvalidSurfaceSize)) {
        return;
    }

    // Fetch the screen parameters
    ScreenParams screen = mHmd.getScreen();
    // Fetch the Cardboard device parameters
    CardboardDeviceParams cdp = mHmd.getCardboard();

    // Read the latest head pose from the head tracker
    mHeadTracker.getLastHeadView(mHeadTransform.getHeadView(), 0);
    float halfInterpupillaryDistance = cdp.getInterpupillaryDistance() * 0.5F;

    if (mVRMode) {
        // Offset the head view by half the interpupillary distance for each eye
        Matrix.setIdentityM(mLeftEyeTranslate, 0);
        Matrix.setIdentityM(mRightEyeTranslate, 0);
        Matrix.translateM(mLeftEyeTranslate, 0, halfInterpupillaryDistance, 0.0F, 0.0F);
        Matrix.translateM(mRightEyeTranslate, 0, -halfInterpupillaryDistance, 0.0F, 0.0F);
        Matrix.multiplyMM(mLeftEye.getTransform().getEyeView(), 0,
                mLeftEyeTranslate, 0, mHeadTransform.getHeadView(), 0);
        Matrix.multiplyMM(mRightEye.getTransform().getEyeView(), 0,
                mRightEyeTranslate, 0, mHeadTransform.getHeadView(), 0);
    } else {
        // Monocular mode: the eye view is simply the head view
        System.arraycopy(mHeadTransform.getHeadView(), 0,
                mMonocular.getTransform().getEyeView(), 0,
                mHeadTransform.getHeadView().length);
    }

    if (mProjectionChanged) {
        // Recompute viewports and projection matrices
        mMonocular.getViewport().setViewport(0, 0, screen.getWidth(), screen.getHeight());
        if (!mVRMode) {
            float aspectRatio = screen.getWidth() / screen.getHeight();
            Matrix.perspectiveM(mMonocular.getTransform().getPerspective(), 0,
                    cdp.getFovY(), aspectRatio, mZNear, mZFar);
        } else if (mDistortionCorrectionEnabled) {
            // Let the distortion renderer derive the per-eye fields of view
            updateFieldOfView(mLeftEye.getFov(), mRightEye.getFov());
            mDistortionRenderer.onProjectionChanged(mHmd, mLeftEye, mRightEye, mZNear, mZFar);
        } else {
            // No distortion correction: derive the fields of view from the
            // physical geometry of the screen and the lenses
            float distEyeToScreen = cdp.getVisibleViewportSize() / 2.0F
                    / (float) Math.tan(Math.toRadians(cdp.getFovY()) / 2.0D);
            float left = screen.getWidthMeters() / 2.0F - halfInterpupillaryDistance;
            float right = halfInterpupillaryDistance;
            float bottom = cdp.getVerticalDistanceToLensCenter() - screen.getBorderSizeMeters();
            float top = screen.getBorderSizeMeters() + screen.getHeightMeters()
                    - cdp.getVerticalDistanceToLensCenter();

            FieldOfView leftEyeFov = mLeftEye.getFov();
            leftEyeFov.setLeft((float) Math.toDegrees(Math.atan2(left, distEyeToScreen)));
            leftEyeFov.setRight((float) Math.toDegrees(Math.atan2(right, distEyeToScreen)));
            leftEyeFov.setBottom((float) Math.toDegrees(Math.atan2(bottom, distEyeToScreen)));
            leftEyeFov.setTop((float) Math.toDegrees(Math.atan2(top, distEyeToScreen)));

            // The right eye's field of view mirrors the left eye's
            FieldOfView rightEyeFov = mRightEye.getFov();
            rightEyeFov.setLeft(leftEyeFov.getRight());
            rightEyeFov.setRight(leftEyeFov.getLeft());
            rightEyeFov.setBottom(leftEyeFov.getBottom());
            rightEyeFov.setTop(leftEyeFov.getTop());

            leftEyeFov.toPerspectiveMatrix(mZNear, mZFar,
                    mLeftEye.getTransform().getPerspective(), 0);
            rightEyeFov.toPerspectiveMatrix(mZNear, mZFar,
                    mRightEye.getTransform().getPerspective(), 0);

            // Split the screen into left and right halves
            mLeftEye.getViewport().setViewport(0, 0, screen.getWidth() / 2, screen.getHeight());
            mRightEye.getViewport().setViewport(screen.getWidth() / 2, 0,
                    screen.getWidth() / 2, screen.getHeight());
        }
        mProjectionChanged = false;
    }

    if (mVRMode) {
        if (mDistortionCorrectionEnabled) {
            // Render into the distortion renderer's offscreen target
            mDistortionRenderer.beforeDrawFrame();
            if (mDistortionCorrectionScale == 1.0F) {
                mRenderer.onDrawFrame(mHeadTransform, mLeftEye, mRightEye);
            } else {
                // Temporarily scale the eye viewports, draw, then restore them
                int leftX = mLeftEye.getViewport().x;
                int leftY = mLeftEye.getViewport().y;
                int leftWidth = mLeftEye.getViewport().width;
                int leftHeight = mLeftEye.getViewport().height;
                int rightX = mRightEye.getViewport().x;
                int rightY = mRightEye.getViewport().y;
                int rightWidth = mRightEye.getViewport().width;
                int rightHeight = mRightEye.getViewport().height;

                mLeftEye.getViewport().setViewport(
                        (int) (leftX * mDistortionCorrectionScale),
                        (int) (leftY * mDistortionCorrectionScale),
                        (int) (leftWidth * mDistortionCorrectionScale),
                        (int) (leftHeight * mDistortionCorrectionScale));
                mRightEye.getViewport().setViewport(
                        (int) (rightX * mDistortionCorrectionScale),
                        (int) (rightY * mDistortionCorrectionScale),
                        (int) (rightWidth * mDistortionCorrectionScale),
                        (int) (rightHeight * mDistortionCorrectionScale));

                mRenderer.onDrawFrame(mHeadTransform, mLeftEye, mRightEye);

                mLeftEye.getViewport().setViewport(leftX, leftY, leftWidth, leftHeight);
                mRightEye.getViewport().setViewport(rightX, rightY, rightWidth, rightHeight);
            }
            // Warp the offscreen texture onto the screen
            mDistortionRenderer.afterDrawFrame();
        } else {
            mRenderer.onDrawFrame(mHeadTransform, mLeftEye, mRightEye);
        }
    } else {
        mRenderer.onDrawFrame(mHeadTransform, mMonocular, null);
    }
    mRenderer.onFinishFrame(mMonocular.getViewport());
}
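The beforeDrawFrame/afterDrawFrame pair is the heart of the distortion correction. The SDK's DistortionRenderer is not quoted here, but conceptually it redirects rendering into an offscreen texture and then draws that texture through a lens-warp mesh. A conceptual sketch only, not the SDK's actual code (the framebuffer field and the mesh-drawing helper are assumptions):
// Conceptual sketch of the distortion pass:
public void beforeDrawFrame() {
    // Redirect the scene rendering into an offscreen framebuffer texture.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebufferId); // assumed field
}

public void afterDrawFrame() {
    // Back to the real screen, then draw the offscreen texture through a
    // pre-computed lens-distortion mesh, one half per eye.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    drawDistortionMesh(); // assumed helper: renders the warp mesh textured with the scene
}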