Animation Part 3
11.8 Compression Techniques
4 bytes * 10 channels * 30 samples/second = 1200 bytes per second per joint, which is quite large!
Channel omission
- scale channel: often reducible from 3 floats to 1 (uniform scale), or even 0 when the joint never scales
- quaternions are always normalized, so w in (x, y, z, w) can be omitted and reconstructed at runtime (see the sketch after this list)
- for channels whose pose does not change over the clip, the data can be collapsed to a single sample plus a bit indicating "not changing"
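A minimal sketch of the w-reconstruction trick, assuming the encoder forces w >= 0 by negating the quaternion when needed (q and -q represent the same rotation); the struct and function names are illustrative:

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };

// Rebuild the omitted w component of a unit quaternion from (x, y, z).
// Since x^2 + y^2 + z^2 + w^2 = 1, |w| is recoverable; the sign is assumed
// to have been made non-negative at encode time.
Quat ReconstructW(float x, float y, float z)
{
    float t = 1.0f - (x * x + y * y + z * z);
    float w = (t > 0.0f) ? std::sqrt(t) : 0.0f; // guard against rounding error
    return Quat{ x, y, z, w };
}
```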
quantization
- converting a 32-bit IEEE float into an n-bit integer representation
- for example, a 32-bit float is more precision than needed for quaternion components, which lie in [-1, 1]
- often 16 bits are enough
- encoding & decoding in quantization (sketched below)
- encoding: map a float to an n-bit integer representation
- decoding: the reverse, map the n-bit integer back to a float
- decoding only recovers an approximation, because quantization is lossy
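A minimal sketch of 16-bit quantization over a known range (e.g., [-1, 1] for quaternion components); the function names are illustrative, not a particular engine's API:

```cpp
#include <algorithm>
#include <cstdint>

// Encode: map a float in [min, max] onto the 65536 representable steps of a uint16_t.
uint16_t QuantizeFloat(float value, float min, float max)
{
    float clamped = std::clamp(value, min, max);
    float unit    = (clamped - min) / (max - min);        // normalize to [0, 1]
    return static_cast<uint16_t>(unit * 65535.0f + 0.5f); // round to nearest step
}

// Decode: map the integer back to [min, max]; this only approximates the original.
float DequantizeFloat(uint16_t q, float min, float max)
{
    float unit = static_cast<float>(q) / 65535.0f;
    return min + unit * (max - min);
}
```

For example, a quaternion component passed through QuantizeFloat with the range [-1, 1] is stored in 2 bytes instead of 4.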
Sampling frequency and key omission
- three problems make animation data large:
- up to 10 channels per joint (channel omission helps)
- a large number of joints (hard to reduce for high-resolution characters)
- a high sample rate
- reduce the sample rate overall (when 15 samples/second looks fine, don't sample at 30)
- omit keys that change linearly over time and use LERP to restore them at runtime (see the sketch below)
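A minimal sketch of the key-omission test, assuming a per-channel error tolerance (names are illustrative):

```cpp
#include <cmath>

// A key can be dropped if linearly interpolating its surviving neighbors
// reproduces it to within the given tolerance. Assumes prevTime < nextTime.
bool KeyIsRedundant(float prevKey, float prevTime,
                    float key,     float keyTime,
                    float nextKey, float nextTime,
                    float tolerance)
{
    float t      = (keyTime - prevTime) / (nextTime - prevTime); // normalized position
    float lerped = prevKey + t * (nextKey - prevKey);            // LERP of neighbors
    return std::fabs(lerped - key) <= tolerance;
}
```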
Curve-based compression
- use non-uniform B-splines to approximate each channel curve
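For intuition, here is a minimal sketch that evaluates one segment of a *uniform* cubic B-spline for a single channel; the actual technique uses non-uniform knot vectors so key density can vary along the clip, which this sketch omits:

```cpp
// Evaluate a uniform cubic B-spline segment from four control points, t in [0, 1].
float EvalUniformCubicBSpline(float p0, float p1, float p2, float p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    // Uniform cubic B-spline basis functions (they sum to 1 for any t).
    float b0 = (-t3 + 3.0f * t2 - 3.0f * t + 1.0f) / 6.0f;
    float b1 = ( 3.0f * t3 - 6.0f * t2 + 4.0f)     / 6.0f;
    float b2 = (-3.0f * t3 + 3.0f * t2 + 3.0f * t + 1.0f) / 6.0f;
    float b3 = t3 / 6.0f;
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3;
}
```

Each channel then stores only its control points (and knots) instead of densely sampled keys.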
Selective loading and streaming
- load only the clips the current level or character actually needs
- keep core clips (e.g., a main character's basic move set) resident, and stream the rest in and out on demand
11.9 Animation System Architecture
- an animation system typically consists of three distinct layers (see the interface sketch after this list):
- Animation Pipeline
- generates a single blended local skeleton pose per character and post-processes it
- Action State Machine (ASM)
- handles upper-level animation states and the transitions between them
- Animation Controller
- high-level control of the character's animation mode
- such as drive mode, run-and-gun mode, …
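A minimal sketch of the three layers as C++ interfaces, assuming each layer drives the one below it; every name here is illustrative rather than a real engine API:

```cpp
struct SkeletonPose;  // local/global joint transforms (details omitted)

// Layer 1: animation pipeline -- decompresses clips, blends poses,
// and produces the final poses and matrix palette.
class AnimationPipeline {
public:
    virtual ~AnimationPipeline() = default;
    virtual void Update(float dt) = 0;  // evaluate clips and blends
    virtual const SkeletonPose& GetLocalPose() const = 0;
};

// Layer 2: action state machine -- owns states ("idle", "run", ...) and
// feeds blend specifications to the pipeline when transitioning.
class ActionStateMachine {
public:
    virtual ~ActionStateMachine() = default;
    virtual void RequestState(const char* stateName) = 0;
    virtual void Update(float dt, AnimationPipeline& pipeline) = 0;
};

// Layer 3: animation controller -- high-level modes (drive, run-and-gun, ...)
// that decide which ASM states to request.
class AnimationController {
public:
    virtual ~AnimationController() = default;
    virtual void Update(float dt, ActionStateMachine& stateMachine) = 0;
};
```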
11.10 The Animation Pipeline
- input: animation clips and blend specifications
- output: local and global poses, matrix palette
- stages:
- Clip decompression and pose extraction
- Pose Blending
- Global pose generation
- Post-Processing
- Recalculation of global poses (for joints changed by post-processing)
- Matrix palette generation
- review: local pose and global pose of a joint
- local joint pose: the joint transformation in parent joint space
- global joint pose: the joint transformation in model space
- $P_{j \to M} = \prod_{i=j}^{0} P_{i \to p(i)}$, where $p(0) \equiv M$
- $M$ is model space, and $P_{i \to p(i)}$ is the transform from joint $i$'s local space to that of its parent $p(i)$
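A minimal sketch of global pose generation (plus the matrix palette stage), under two assumptions: joints are stored parent-before-child with parentIndex == -1 for the root, and the row-vector convention of the formula above (with column vectors the multiplication order flips). All types and names are illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Matrix44 { float m[4][4]; };

// Plain 4x4 matrix product, r = a * b.
Matrix44 Multiply(const Matrix44& a, const Matrix44& b)
{
    Matrix44 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row][col] += a.m[row][k] * b.m[k][col];
    return r;
}

// globalPoses[j] = P_{j->M} = P_{j->p(j)} * P_{p(j)->M}, accumulated in one
// forward pass because every parent precedes its children.
void ComputeGlobalPoses(const std::vector<Matrix44>& localPoses,   // P_{i->p(i)}
                        const std::vector<int>&      parentIndex,  // -1 for the root
                        std::vector<Matrix44>&       globalPoses)  // P_{j->M}
{
    assert(localPoses.size() == parentIndex.size());
    globalPoses.resize(localPoses.size());
    for (std::size_t j = 0; j < localPoses.size(); ++j)
    {
        if (parentIndex[j] < 0)
            globalPoses[j] = localPoses[j];  // root: parent space is model space
        else
            globalPoses[j] = Multiply(localPoses[j], globalPoses[parentIndex[j]]);
    }
}

// Matrix palette stage: each skinning matrix is the joint's inverse bind pose
// times its current global pose (same row-vector convention as above).
void GenerateMatrixPalette(const std::vector<Matrix44>& globalPoses,
                           const std::vector<Matrix44>& inverseBindPoses,
                           std::vector<Matrix44>&       palette)
{
    palette.resize(globalPoses.size());
    for (std::size_t j = 0; j < globalPoses.size(); ++j)
        palette[j] = Multiply(inverseBindPoses[j], globalPoses[j]);
}
```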
Data structure
- shared resources
- Skeleton: joint hierarchy and its bind pose
- Skinned meshes
- Animation Clips
- a set of animation clips applies to only one skeleton
- it's better to put different meshes on the same skeleton to create new characters, so they can all share one set of animation clips
- an animation retargeting system makes this restriction less strict
- per-instance data
- each character instance has its own state data (see the struct sketch after this list)
- including:
- Clip state
- Local clock: determines which frame of the clip is extracted
- Playback rate: speed of the clip
- Blend specification
- how to blend multiple clips
- Partial-skeleton joint weights (per-joint weights for partial blends)
- Local Pose
- Global Pose
- Matrix palette
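A minimal sketch of the shared-resource vs. per-instance split; all type and member names are illustrative, not a specific engine's layout:

```cpp
#include <vector>

// Shared resources: loaded once, referenced by every character instance.
struct Skeleton;       // joint hierarchy + bind pose
struct SkinnedMesh;
struct AnimationClip;  // authored against one particular Skeleton

struct JointPose { /* rotation (quaternion), translation, scale -- omitted */ };
struct Matrix44  { float m[16]; };

// Per-instance data: each character carries its own copy of this state.
struct ClipState
{
    const AnimationClip* clip = nullptr;
    float localClock   = 0.0f;  // determines which frame is extracted
    float playbackRate = 1.0f;  // playback speed of the clip
};

struct BlendSpecification
{
    std::vector<ClipState> clips;        // the clips being blended
    std::vector<float>     blendWeights; // one weight per clip
    std::vector<float>     jointWeights; // per-joint weights for partial-skeleton blends
};

struct AnimationInstance
{
    const Skeleton*        skeleton = nullptr; // points at shared data
    BlendSpecification     blendSpec;
    std::vector<JointPose> localPose;          // blended local pose, one entry per joint
    std::vector<Matrix44>  globalPose;         // model-space pose, one entry per joint
    std::vector<Matrix44>  matrixPalette;      // skinning matrices handed to the renderer
};
```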
Flat-weighted Average Blend vs. Blend Tree
pass
Cross Fading Architectures
- cross-fades with a flat weighted average (see the sketch after this list)
- ramp clip A's weight down while ramping clip B's weight up
- complex cross-fades between blends, e.g., from a blend of A, B, C to a blend of D, E
- keep the relative weights within A, B, C (or D, E) constant
- ramp the overall weight of the A, B, C group down and the D, E group up as a whole
- Cross Fades with Expression Trees
- simply introduce a new LERP node at the root, blending the outgoing tree with the incoming one
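A minimal sketch of the flat-weighted cross-fade ramp, assuming a linear ramp over a nonzero fade duration (illustrative names):

```cpp
#include <algorithm>

struct CrossFadeWeights { float weightA; float weightB; };

// As time advances from 0 to fadeDuration, clip A's weight ramps 1 -> 0 and
// clip B's ramps 0 -> 1; the two always sum to 1. For group-to-group fades
// (e.g., {A,B,C} -> {D,E}), multiply each clip's fixed in-group weight by
// its group's ramped weight.
CrossFadeWeights EvaluateCrossFade(float timeSinceFadeStart, float fadeDuration)
{
    float beta = std::clamp(timeSinceFadeStart / fadeDuration, 0.0f, 1.0f);
    return CrossFadeWeights{ 1.0f - beta, beta };
}
```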
Animation Pipeline Optimization
- optimization differs across hardware platforms
- data moves between main RAM <—> the local caches/memories of the processing units
- hUMA: heterogeneous uniform memory access (a single memory address space shared by CPU and GPU)
- however, most optimization techniques apply, more or less, across multiple platforms