source code:
https://github.com/daw42/glslcookbook
in this chapter, we will look at several examples of animation within shaders, focusing mostly on particle systems.
the first example, animating with vertex displacement, demonstrates animation by transforming the vertex positions of an object based on a time-dependent function.
in the creating a particle fountain recipe, we create a simple particle system under constant acceleration.
in the creating a particle system using transform feedback recipe, there is an example illustrating how to use opengl’s transform feedback functionality within a particle system.
the creating a particle system using instanced particles recipe shows you how to animate many complex objects using instanced rendering.
the last two recipes demonstrate some particle systems for simulating complex real phenomena such as smoke and fire.
animating a surface with vertex displacement
a straightforward way to leverage shaders for animation is to simply transform the vertices within the vertex shader based on some time-dependent function.
the opengl application supplies static geometry, and the vertex shader modifies the geometry using the current time (supplied as a uniform variable).
this moves the computation of the vertex position from the CPU to the GPU, and leverages whatever parallelism the graphics driver makes available.
in this example, we will create a waving surface by transforming the vertices of a tessellated quad based on a sine wave.
we will send down the pipeline a set of triangles that make up a flat surface in the x-z plane.
in the vertex shader we will transform the y-coordinate of each vertex based on a time-dependent sine function, and compute the normal vector of the transformed vertex.
the following image shows the desired result.
(you will have to imagine that the waves are travelling across the surface from left to right).
alternatively, we could use a noise texture to animate the vertices (that make up the surface) based on a random function (see chapter 8, using noise in shaders, for details on noise textures).
before we jump to the code, let us take a look at the mathematics that we will need.
we will transform the y-coordinate of the surface as a function of the current time and the modeling x-coordinate.
to do so, we will use the basic plane wave equation.
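in the form the vertex shader in this recipe evaluates, using the wavenumber K = 2π/λ, the wave is:

\[
y(x,t) \;=\; A \sin\!\left(\frac{2\pi}{\lambda}\,(x - v\,t)\right) \;=\; A \sin\!\big(K\,(x - v\,t)\big)
\]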
where A is the wave’s amplitude (the height of the peaks), lambda (λ) is the wavelength (the distance between successive peaks), and v is the wave’s velocity.
the previous image shows an example of the wave when t=0 and the wavelength is equal to one.
we will configure these coefficients through uniform variables.
in order to render the surface with proper shading, we also need the normal vector at the transformed location.
we can compute the normal vector using the (partial) derivative of the previous function.
the result is the following equation:
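written with the wavenumber K = 2π/λ to match the shader code below, the derivative and the resulting (unnormalized) normal are:

\[
\frac{\partial y}{\partial x} \;=\; A\,K \cos\!\big(K\,(x - v\,t)\big), \qquad
\mathbf{n} \;=\; \left(-A\,K \cos\!\big(K\,(x - v\,t)\big),\; 1,\; 0\right)
\]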
to see why the derivative gives us the normal, note that the partial derivative is the slope of the surface's tangent in the x direction; rotating the tangent direction (1, ∂y/∂x, 0) by 90 degrees in the x-y plane gives (-∂y/∂x, 1, 0), which is perpendicular to the surface and therefore serves as the normal.
of course, the previous vector should be normalized before using it in our shading model. the following vertex shader puts all of this together.
#version 400

layout (location = 0) in vec3 VertexPosition;

out vec4 Position;
out vec3 Normal;

uniform float Time;     // The animation time

// Wave parameters
uniform float K;        // Wavenumber (2*pi / wavelength)
uniform float Velocity; // Wave's velocity
uniform float Amp;      // Wave's amplitude

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 MVP;

void main()
{
    vec4 pos = vec4(VertexPosition, 1.0);

    // Displace the y coordinate based on the wave
    float u = K * (pos.x - Velocity * Time);
    pos.y = Amp * sin( u );

    // Compute the normal vector from the wave's derivative
    vec3 n = vec3(0.0);
    n.xy = normalize( vec2(-K * Amp * cos( u ), 1.0) );

    // Send position and normal (in camera coords) to the fragment shader
    Position = ModelViewMatrix * pos;
    Normal = NormalMatrix * n;

    // The position in clip coordinates
    gl_Position = MVP * pos;
}
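on the application side, the only per-frame work for this recipe is updating the Time uniform. the following is a minimal sketch of that step, assuming a GLFW-based application and a program handle named programHandle (both of these are assumptions, not part of the recipe's actual code):

#include <GL/glew.h>     // Or your preferred OpenGL function loader
#include <GLFW/glfw3.h>

extern GLuint programHandle;   // Assumed: the compiled and linked shader program

void updateTimeUniform()
{
    // glfwGetTime() returns the elapsed time in seconds since initialization
    float t = static_cast<float>( glfwGetTime() );

    glUseProgram( programHandle );
    GLint loc = glGetUniformLocation( programHandle, "Time" );
    if( loc >= 0 )
        glUniform1f( loc, t );
}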
creating a particle system using transform feedback
transform feedback provides a way to capture the output of the vertex (or geometry) shader to a buffer for use in subsequent passes.
originally introduced into opengl with version 3.0, this feature is particularly well suited for particle systems because, among other things, it enables us to do discrete simulations.
we can update a particle’s position within the vertex shader and render that updated position in a subsequent pass (or the same pass).
then the updated positions can be used in the same way as input to the next frame of animation.
in this example, we will implement the same particle system from the previous recipe (creating a particle fountain), this time making use of transform feedback.
instead of using an equation that describes the particle’s motion for all time, we will update the particle positions incrementally, solving the equations of motion based on the forces involved at the time each frame is rendered.
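for example, with a constant acceleration a (gravity) and a time step h equal to the frame time, a single euler step updates each particle as follows:

\[
\mathbf{p}_{n+1} \;=\; \mathbf{p}_n + h\,\mathbf{v}_n, \qquad
\mathbf{v}_{n+1} \;=\; \mathbf{v}_n + h\,\mathbf{a}
\]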
to make our simulation work, we will use a technique sometimes called “ping-ponging”.
we maintain two sets of vertex buffers and swap their uses each frame.
for example, we use buffer A to provide the positions and velocities as input to the vertex shader.
the vertex shader updates the positions and velocities using the euler method and sends the results to buffer B using transform feedback.
then in a second pass, we render the particles using buffer B.
in the next frame of animation, we repeat the same process, swapping the two buffers.
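a minimal sketch of that frame structure follows; the names vao, drawBuf, and nParticles are placeholders for whatever handles the application maintains, and the transform feedback calls themselves are shown later in this recipe:

#include <GL/glew.h>   // Or your preferred OpenGL function loader

extern GLuint vao[2];      // Vertex arrays wired to buffer sets A (0) and B (1)
extern GLsizei nParticles; // Number of particles
int drawBuf = 1;           // Index of the buffer set written this frame

void renderFrame()
{
    // Update pass: read attributes from the other set and capture the
    // updated values into set drawBuf via transform feedback
    glBindVertexArray( vao[1 - drawBuf] );
    // ... glBeginTransformFeedback / glDrawArrays / glEndTransformFeedback ...

    // Render pass: draw the particles from the freshly updated buffers
    glBindVertexArray( vao[drawBuf] );
    glDrawArrays( GL_POINTS, 0, nParticles );

    // Swap the roles of the two buffer sets for the next frame
    drawBuf = 1 - drawBuf;
}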
in general, transform feedback allows us to define a set of shader output variables that are to be written to a designated buffer (or set of buffers).
there are several steps involved that will be demonstrated, but the basic idea is as follows.
just before the shader program is linked, we define the relationship between buffers and shader output variables using the function glTransformFeedbackVaryings.
during rendering, we initiate a transform feedback pass. we bind the appropriate buffers to the transform feedback binding points.
(if desired, we can disable rasterization so that the particles are not rendered).
we enable transform feedback using the function glBeginTransformFeedback and then draw the point primitives.
the output from the vertex shader will be stored in the appropriate buffers.
then we disable transform feedback by calling glEndTransformFeedback.
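the following is a minimal sketch of those steps, assuming the vertex shader declares output variables named Position, Velocity, and StartTime, and using placeholder handle names (progHandle, posBufB, velBufB, startTimeBufB, updateVao) rather than the recipe's actual variables:

#include <GL/glew.h>   // Or your preferred OpenGL function loader

extern GLuint progHandle;                      // The update-pass shader program
extern GLuint posBufB, velBufB, startTimeBufB; // This frame's destination ("B") buffers
extern GLuint updateVao;                       // Vertex array wired to the source ("A") buffers
extern GLsizei nParticles;

void setUpFeedbackVaryings()
{
    // Name the vertex shader outputs to capture, one buffer per variable.
    // This must be done before the program is linked.
    const char * outputNames[] = { "Position", "Velocity", "StartTime" };
    glTransformFeedbackVaryings( progHandle, 3, outputNames, GL_SEPARATE_ATTRIBS );
    glLinkProgram( progHandle );
}

void updatePass()
{
    // We only need the captured output, so skip rasterization for this pass
    glEnable( GL_RASTERIZER_DISCARD );

    // Bind the destination buffers to the indexed transform feedback binding points
    glBindBufferBase( GL_TRANSFORM_FEEDBACK_BUFFER, 0, posBufB );
    glBindBufferBase( GL_TRANSFORM_FEEDBACK_BUFFER, 1, velBufB );
    glBindBufferBase( GL_TRANSFORM_FEEDBACK_BUFFER, 2, startTimeBufB );

    // Draw the particles as points; the vertex shader outputs are captured
    glBeginTransformFeedback( GL_POINTS );
    glBindVertexArray( updateVao );
    glDrawArrays( GL_POINTS, 0, nParticles );
    glEndTransformFeedback();

    glDisable( GL_RASTERIZER_DISCARD );
}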
getting ready
create and allocate three pairs of buffers.
the first pair will be for the particle positions, the second for the particle velocities, and the third for the “start time” of each particle (the time when the particle comes to life).
for clarity, we will refer to the first buffer in each pair as the A buffer, and the second as the B buffer.
also, we will need a single buffer to contain the initial velocity for each particle.
create two vertex arrays.
the first vertex array should link the A position buffer with the first vertex attribute (attribute index 0), the A velocity buffer with vertex attribute one, the A start time buffer with vertex attribute two, and the initial velocity buffer with vertex attribute three.
the second vertex array should be set up in the same way using the B buffers and the same initial velocity buffer.
in the following code, the handles to the two vertex arrays will be accessed via the