In WebGL the screen is backed by a default framebuffer: the render pipeline's output ultimately goes to this default framebuffer, and the screen displays an image from its data. WebGL also supports user-defined framebuffers (framebuffer objects, FBOs — not to be confused with renderbuffers, which are one kind of framebuffer attachment). If you create and bind your own framebuffer, the pipeline's output is redirected into it instead (off-screen rendering); that output can then be saved as an image or used as a texture on other objects. In three.js, WebGLRenderTarget is the wrapper around such a "custom framebuffer", and the classic application of this technique is screen post-processing.
1. How do you create a custom framebuffer in WebGL?
unsigned int fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
First we create a framebuffer object and bind it as the active framebuffer with glBindFramebuffer. Once it is bound to the GL_FRAMEBUFFER target, all read and write operations on the framebuffer affect the currently bound one. You can also bind a framebuffer to the read or write target separately with GL_READ_FRAMEBUFFER or GL_DRAW_FRAMEBUFFER: a framebuffer bound to GL_READ_FRAMEBUFFER is used by all read operations such as glReadPixels, while one bound to GL_DRAW_FRAMEBUFFER is the destination of rendering, clearing and other write operations. Most of the time you won't need to make this distinction and will simply use GL_FRAMEBUFFER, which binds to both.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
Finally, unbind the framebuffer, and delete it once it is no longer needed.
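The snippet above uses desktop OpenGL's C API; in WebGL itself the same lifecycle is driven from JavaScript. A minimal sketch, assuming `gl` is a WebGLRenderingContext obtained from a canvas (note that the separate read/draw targets only exist as gl.READ_FRAMEBUFFER / gl.DRAW_FRAMEBUFFER in WebGL2):

```javascript
// Create, bind, unbind and delete a framebuffer object with the WebGL API.
// `gl` is assumed to be a WebGLRenderingContext from canvas.getContext('webgl').
function framebufferLifecycle(gl) {
  const fbo = gl.createFramebuffer();        // like glGenFramebuffers
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);   // all reads/writes now target fbo
  // ... attach a texture / renderbuffer and render here ...
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);  // back to the default framebuffer
  gl.deleteFramebuffer(fbo);                 // free it when no longer needed
  return fbo;
}
```

Binding `null` restores the default framebuffer, which is WebGL's equivalent of `glBindFramebuffer(GL_FRAMEBUFFER, 0)`.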
A complete framebuffer has to satisfy a few requirements: at least one buffer must be attached (color, depth or stencil); there must be at least one color attachment; every attachment must itself be complete (have memory reserved); and all buffers must have the same number of samples.
Creating a texture for a framebuffer is roughly the same as creating an ordinary texture:
unsigned int texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 800, 600, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
The main differences are that we set the dimensions to the screen size (though this is not required) and pass NULL as the texture's data parameter: we only allocate memory for the texture without filling it. It will be filled once we render into the framebuffer. Note also that we don't care about wrapping modes or mipmaps here; in most cases we won't need them.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
With the texture created, the last step is to attach it to the framebuffer. Besides color attachments, we can also attach depth and stencil buffer textures to a framebuffer object.
Once the framebuffer's attachments are set up, just render the scene as usual: the data that used to go to the default framebuffer now goes straight into the custom one.
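Putting all of the steps above together in WebGL's JavaScript API — allocate an empty texture, attach it as the color attachment, verify completeness — might look like the following sketch (`gl`, `width` and `height` are assumed to be supplied by the caller):

```javascript
// Create a framebuffer with an empty RGBA texture as its color attachment.
// Returns { fbo, texture }; throws if the framebuffer is incomplete.
function createRenderTarget(gl, width, height) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Allocate storage only (data = null); it is filled when we render into it.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  // A framebuffer must be "complete" before it can be rendered into.
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error('framebuffer is not complete');
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, texture };
}
```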
2. The three.js render target: WebGLRenderTarget
The section above showed how a custom framebuffer is implemented at the WebGL level (if you have never dug into the WebGL API, feeling lost here is perfectly normal). So how does the three.js source wrap it in WebGLRenderTarget?
function WebGLRenderTarget( width, height, options ) {
this.width = width;
this.height = height;
// scissor region and scissor test toggle (scissor, not stencil)
this.scissor = new Vector4( 0, 0, width, height );
this.scissorTest = false;
// viewport
this.viewport = new Vector4( 0, 0, width, height );
options = options || {};
// create the texture that backs the color attachment
this.texture = new Texture( undefined, undefined, options.wrapS, options.wrapT, options.magFilter, options.minFilter, options.format, options.type, options.anisotropy, options.encoding );
// set the texture's dimensions
this.texture.image = {};
this.texture.image.width = width;
this.texture.image.height = height;
// mipmap and filtering settings
this.texture.generateMipmaps = options.generateMipmaps !== undefined ? options.generateMipmaps : false;
this.texture.minFilter = options.minFilter !== undefined ? options.minFilter : LinearFilter;
// whether to enable depth/stencil buffers and an optional depth texture attachment
this.depthBuffer = options.depthBuffer !== undefined ? options.depthBuffer : true;
this.stencilBuffer = options.stencilBuffer !== undefined ? options.stencilBuffer : true;
this.depthTexture = options.depthTexture !== undefined ? options.depthTexture : null;
}
The three.js renderer (WebGLRenderer) exposes several methods specifically for setting up and working with render targets, such as setRenderTarget(), getRenderTarget() and readRenderTargetPixels().
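For instance, readRenderTargetPixels() is what lets you pull a render target's contents back to the CPU (e.g. to save it as an image, as mentioned at the start). A small sketch, assuming `renderer` is a THREE.WebGLRenderer and `target` a THREE.WebGLRenderTarget that has already been rendered into:

```javascript
// Read back the color pixels of a render target as a Uint8Array (RGBA).
function readTargetPixels(renderer, target) {
  const buffer = new Uint8Array(target.width * target.height * 4);
  renderer.readRenderTargetPixels(target, 0, 0, target.width, target.height, buffer);
  return buffer; // could now be drawn into a 2D canvas and saved as an image
}
```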
3. Example: setting up a custom framebuffer and reading its depth texture data
<script id="post-vert" type="x-shader/x-vertex">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script id="post-frag" type="x-shader/x-fragment">
#include <packing>
varying vec2 vUv;
uniform sampler2D tDiffuse;
uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;
float readDepth( sampler2D depthSampler, vec2 coord ) {
float fragCoordZ = texture2D( depthSampler, coord ).x;
float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}
void main() {
//vec3 diffuse = texture2D( tDiffuse, vUv ).rgb;
float depth = readDepth( tDepth, vUv );
gl_FragColor.rgb = 1.0 - vec3( depth );
gl_FragColor.a = 1.0;
}
</script>
</head>
<body>
<canvas></canvas>
<div id="info">
<a href="http://threejs.org" target="_blank" rel="noopener">threejs</a> webgl - depth texture<br/>
Stores render target depth in a texture attachment.<br/>
Created by <a href="http://twitter.com/mattdesl" target="_blank" rel="noopener">@mattdesl</a>.
<div id="error" style="display: none;">
Your browser does not support <strong>WEBGL_depth_texture</strong>.<br/><br/>
This demo will not work.
</div>
</div>
<script type="module">
import * as THREE from '../build/three.module.js';
import Stats from './jsm/libs/stats.module.js';
import { OrbitControls } from './jsm/controls/OrbitControls.js';
var camera, scene, renderer, controls, stats;
var target;
var postScene, postCamera;
var supportsExtension = true;
init();
animate();
function init() {
renderer = new THREE.WebGLRenderer( { canvas: document.querySelector( 'canvas' ) } );
if ( ! renderer.extensions.get( 'WEBGL_depth_texture' ) ) {
supportsExtension = false;
document.querySelector( '#error' ).style.display = 'block';
return;
}
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
//
stats = new Stats();
document.body.appendChild( stats.dom );
camera = new THREE.PerspectiveCamera( 70, window.innerWidth / window.innerHeight, 0.01, 50 );
camera.position.z = 4;
controls = new OrbitControls( camera, renderer.domElement );
controls.enableDamping = true;
controls.dampingFactor = 0.05;
// Create a render target with a depth texture attachment
target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.texture.format = THREE.RGBFormat;
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.texture.generateMipmaps = false;
target.stencilBuffer = false;
target.depthBuffer = true;
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.type = THREE.UnsignedShortType;
// Our scene
scene = new THREE.Scene();
setupScene();
// Setup post-processing step
setupPost();
onWindowResize();
window.addEventListener( 'resize', onWindowResize, false );
}
function setupPost() {
// Setup post processing stage
postCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );
var postMaterial = new THREE.ShaderMaterial( {
vertexShader: document.querySelector( '#post-vert' ).textContent.trim(),
fragmentShader: document.querySelector( '#post-frag' ).textContent.trim(),
uniforms: {
cameraNear: { value: camera.near },
cameraFar: { value: camera.far },
tDiffuse: { value: target.texture },
tDepth: { value: target.depthTexture }
}
} );
var postPlane = new THREE.PlaneBufferGeometry( 2, 2 );
var postQuad = new THREE.Mesh( postPlane, postMaterial );
postScene = new THREE.Scene();
postScene.add( postQuad );
}
function setupScene() {
//var diffuse = new TextureLoader().load( 'textures/brick_diffuse.jpg' );
//diffuse.wrapS = diffuse.wrapT = RepeatWrapping;
// Setup some geometries
var geometry = new THREE.TorusKnotBufferGeometry( 1, 0.3, 128, 64 );
var material = new THREE.MeshBasicMaterial( { color: 'blue' } );
var count = 50;
var scale = 5;
for ( var i = 0; i < count; i ++ ) {
var r = Math.random() * 2.0 * Math.PI;
var z = ( Math.random() * 2.0 ) - 1.0;
var zScale = Math.sqrt( 1.0 - z * z ) * scale;
var mesh = new THREE.Mesh( geometry, material );
mesh.position.set(
Math.cos( r ) * zScale,
Math.sin( r ) * zScale,
z * scale
);
mesh.rotation.set( Math.random(), Math.random(), Math.random() );
scene.add( mesh );
}
}
function onWindowResize() {
var aspect = window.innerWidth / window.innerHeight;
camera.aspect = aspect;
camera.updateProjectionMatrix();
var dpr = renderer.getPixelRatio();
target.setSize( window.innerWidth * dpr, window.innerHeight * dpr );
renderer.setSize( window.innerWidth, window.innerHeight );
}
function animate() {
if ( ! supportsExtension ) return;
requestAnimationFrame( animate );
// render scene into target
renderer.setRenderTarget( target );
renderer.render( scene, camera );
// render post FX
renderer.setRenderTarget( null );
renderer.render( postScene, postCamera );
controls.update(); // required because damping is enabled
stats.update();
}
</script>
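The readDepth() function in the fragment shader above combines two helpers from three.js's <packing> shader chunk. Their math can be reproduced and sanity-checked in plain JavaScript (the formulas mirror the GLSL versions; fragCoordZ is the [0, 1] value sampled from the depth texture):

```javascript
// JavaScript versions of the GLSL helpers used by readDepth() above
// (formulas from three.js's <packing> shader chunk).
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}
function viewZToOrthographicDepth(viewZ, near, far) {
  return (viewZ + near) / (near - far);
}
// Linearize a perspective depth-buffer sample into a [0, 1] depth value.
function readDepth(fragCoordZ, near, far) {
  const viewZ = perspectiveDepthToViewZ(fragCoordZ, near, far);
  return viewZToOrthographicDepth(viewZ, near, far);
}
```

With the example's camera (near = 0.01, far = 50), a depth sample of 0.0 maps to 0.0 (the near plane) and 1.0 maps to 1.0 (the far plane); the shader then displays `1.0 - depth` so nearer geometry appears brighter.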
4. A brief look at how PostProcess is implemented
The principle behind screen post-processing: three.js ships many RenderPasses. A pass intercepts the data that would otherwise go to the default framebuffer and redirects it into a custom framebuffer, processes that data to apply whatever effect is wanted, and finally renders the result back into the default framebuffer so it reaches the screen.
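Conceptually, the chain of passes ping-pongs between two render targets: each pass reads the previous pass's output and writes to the other target, and the last pass writes to the default framebuffer (target null). The sketch below is a simplified, hypothetical composer to illustrate the idea — not three.js's actual EffectComposer code:

```javascript
// Conceptual sketch of a post-processing pass chain (ping-pong buffering).
// Each pass is a function (renderer, readTarget, writeTarget) => void.
function runPassChain(renderer, passes, targetA, targetB) {
  let read = targetA, write = targetB;
  passes.forEach((pass, i) => {
    const isLast = i === passes.length - 1;
    // the last pass writes to the default framebuffer (null target)
    pass(renderer, read, isLast ? null : write);
    // swap read/write targets so the next pass consumes this pass's output
    [read, write] = [write, read];
  });
}
```

three.js's real EffectComposer follows the same read/write-target swap, with each ShaderPass sampling the read buffer as a uniform texture.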