Camera’s Depth Texture

https://docs.unity3d.com/Manual/SL-CameraDepthTexture.html

Camera's Depth Texture (Unity 5.5 documentation)

A Camera can generate a depth, depth+normals, or motion vector texture. This is a minimalistic G-buffer texture that can be used for post-processing effects or to implement custom lighting models (e.g. light pre-pass). It is also possible to build similar textures yourself, using the Shader Replacement feature.

The Camera's depth texture mode can be enabled using the Camera.depthTextureMode variable from a script.

There are three possible depth texture modes:

  • DepthTextureMode.Depth: a depth texture.
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
  • DepthTextureMode.MotionVectors: per-pixel screen space motion of each screen texel for the current frame. Packed into a RG16 texture.

These are flags, so it is possible to specify any combination of the above textures.

Note: this means the three modes above can be combined with the bitwise OR operator (|).
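As a minimal sketch, combining the flags from a script might look like this (the component name is illustrative):

```csharp
using UnityEngine;

// Hypothetical helper component: requests both a depth texture and a
// motion vectors texture from the camera it is attached to.
[RequireComponent(typeof(Camera))]
public class DepthTextureSetup : MonoBehaviour
{
    void OnEnable()
    {
        Camera cam = GetComponent<Camera>();
        // The modes are flags, so several can be requested at once
        // with the bitwise OR operator:
        cam.depthTextureMode = DepthTextureMode.Depth |
                               DepthTextureMode.MotionVectors;
    }
}
```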


DepthTextureMode.Depth texture

This builds a screen-sized depth texture.

Depth texture is rendered using the same shader passes as used for shadow caster rendering (ShadowCaster pass type). So by extension, if a shader does not support shadow casting (i.e. there’s no shadow caster pass in the shader or any of the fallbacks), then objects using that shader will not show up in the depth texture.


To make sure objects using your shader still show up in the depth texture, either:

  • Make your shader fall back to some other shader that has a shadow casting pass, or
  • If you're using surface shaders, add the addshadow directive to make them generate a shadow pass too.
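A minimal surface shader sketch using the addshadow directive (the shader name and property are illustrative):

```shaderlab
Shader "Custom/DepthFriendlySurface"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // addshadow tells the surface shader compiler to also generate
        // a ShadowCaster pass, so the object is rendered into the
        // camera's depth texture.
        #pragma surface surf Lambert addshadow

        fixed4 _Color;

        struct Input
        {
            float3 worldPos;
        };

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = _Color.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```

Alternatively, falling back to a built-in shader such as "Diffuse" (as in the FallBack line above) also provides a shadow caster pass.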

Note that only "opaque" objects (those that have their materials and shaders set up to use render queue <= 2500) are rendered into the depth texture.

Note: only opaque objects (queue <= 2500) are rendered into the depth texture. For reference, the Geometry queue is 2000, AlphaTest is 2450, and Transparent is 3000, so the 2500 cutoff includes AlphaTest objects but excludes transparent ones.


DepthTextureMode.DepthNormals texture

This builds a screen-sized 32 bit (8 bit/channel) texture, where view space normals are encoded into the R&G channels, and depth is encoded into the B&A channels. Normals are encoded using stereographic projection, and depth is a 16 bit value packed into two 8 bit channels.

The UnityCG.cginc include file has a helper function, DecodeDepthNormal, to decode the depth and normal from the encoded pixel value. The returned depth is in the 0..1 range.

For examples on how to use the depth and normals texture, please refer to the EdgeDetection image effect in the Shader Replacement example project or Screen Space Ambient Occlusion Image Effect.


Note: normals are stored using stereographic projection (projecting points on a sphere onto a plane). This works because view space normals are unit vectors, so two encoded components are enough to reconstruct the full normal. The 16 bit depth is packed into two 8 bit channels.


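A minimal fragment shader sketch decoding this texture with DecodeDepthNormal (assumes the camera has DepthTextureMode.DepthNormals enabled; the v2f layout is illustrative):

```hlsl
#include "UnityCG.cginc"

sampler2D _CameraDepthNormalsTexture;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

fixed4 frag (v2f i) : SV_Target
{
    float  depth;   // linear 0..1 depth
    float3 normal;  // view space normal
    // DecodeDepthNormal unpacks the 16-bit depth (B&A channels) and
    // the stereographically encoded normal (R&G channels).
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv),
                      depth, normal);
    // Visualize the view space normal as a color.
    return fixed4(normal * 0.5 + 0.5, 1);
}
```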

DepthTextureMode.MotionVectors texture

This builds a screen-sized RG16 (16-bit float/channel) texture, where screen space pixel motion is encoded into the R&G channels. The pixel motion is encoded in screen UV space.

When sampling from this texture, the motion of the encoded pixel is returned in a range of –1..1. This is the UV offset from the last frame to the current frame.

Note: I'm not yet sure when this is used, so I'm skipping it for now.

Tips & Tricks

The Camera inspector indicates when a camera is rendering a depth or a depth+normals texture.


The way that depth textures are requested from the Camera (Camera.depthTextureMode) might mean that after you disable an effect that needed them, the Camera might still continue rendering them. If there are multiple effects present on a Camera, where each of them needs the depth texture, there’s no good way to automatically disable depth texture rendering if you disable the individual effects.

Note: once you set Camera.depthTextureMode, the camera keeps rendering the requested textures even after you disable the effect that needed them. When multiple effects on one camera each need the depth texture, there is no good way to automatically disable depth texture rendering when an individual effect is disabled.


When implementing complex Shaders or Image Effects, keep Rendering Differences Between Platforms in mind. In particular, using depth texture in an Image Effect often needs special handling on Direct3D + Anti-Aliasing.


In some cases, the depth texture might come directly from the native Z buffer. If you see artifacts in your depth texture, make sure that the shaders that use it do not write into the Z buffer (use ZWrite Off).

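A minimal sketch of an image effect pass that reads the depth texture with ZWrite Off (the shader name is illustrative):

```shaderlab
Shader "Hidden/DepthSafeEffect"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            // Effects that read the depth texture should not write
            // depth themselves; ZWrite Off avoids artifacts when the
            // depth texture comes directly from the native Z buffer.
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag (v2f_img i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```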

Shader variables

Depth textures are available for sampling in shaders as global shader properties. By declaring a sampler called _CameraDepthTexture you will be able to sample the main depth texture for the camera.

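A minimal fragment shader sketch sampling _CameraDepthTexture (assumes the camera has DepthTextureMode.Depth enabled; the v2f layout is illustrative):

```hlsl
#include "UnityCG.cginc"

sampler2D_float _CameraDepthTexture;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

fixed4 frag (v2f i) : SV_Target
{
    // SAMPLE_DEPTH_TEXTURE handles platform differences when reading
    // raw depth; Linear01Depth converts it to linear 0..1 depth.
    float rawDepth    = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    float linearDepth = Linear01Depth(rawDepth);
    // Visualize linear depth as grayscale.
    return fixed4(linearDepth.xxx, 1);
}
```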

_CameraDepthTexture always refers to the camera’s primary depth texture. By contrast, you can use _LastCameraDepthTexture to refer to the last depth texture rendered by any camera. This could be useful for example if you render a half-resolution depth texture in script using a secondary camera and want to make it available to a post-process shader.


The motion vectors texture (when enabled) is available in shaders as a global shader property. By declaring a sampler called _CameraMotionVectorsTexture you can sample the texture for the currently rendering Camera.
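A minimal fragment shader sketch sampling the motion vectors texture (assumes the camera has DepthTextureMode.MotionVectors enabled; the v2f layout is illustrative):

```hlsl
#include "UnityCG.cginc"

sampler2D_half _CameraMotionVectorsTexture;

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};

fixed4 frag (v2f i) : SV_Target
{
    // The R&G channels hold the screen UV offset from the last frame
    // to the current frame, in the -1..1 range.
    float2 motion = tex2D(_CameraMotionVectorsTexture, i.uv).rg;
    // Visualize: remap to 0..1 for display.
    return fixed4(motion * 0.5 + 0.5, 0, 1);
}
```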

Under the hood

Depth textures can come directly from the actual depth buffer, or be rendered in a separate pass, depending on the rendering path used and the hardware. Typically when using Deferred Shading or Legacy Deferred Lighting rendering paths, the depth textures come “for free” since they are a product of the G-buffer rendering anyway.

Note: in the Deferred Shading and Legacy Deferred Lighting paths, the depth texture is produced as part of G-buffer rendering itself, so it is available without any extra pass.

When the DepthNormals texture is rendered in a separate pass, this is done through Shader Replacement. Hence it is important to have correct “RenderType” tag in your shaders.

Note: shader replacement normally keys off the "RenderType" tag, though custom tags can also be used; that is why the tag must be set correctly in your shaders.

When enabled, the MotionVectors texture always comes from an extra render pass. Unity will render moving GameObjects into this buffer, and construct their motion from the last frame to the current frame.

