Introduction to ACL
ACL (Animation Compression Library) is a community-developed animation compression plugin that bundles multiple compression algorithms. In UE 5.3 it was adopted as an official engine plugin and made the default animation compression scheme. Compared with the individual compression algorithms UE previously shipped (linear key removal, duplicate frame removal, and so on), ACL adds newer techniques (variable bit rates, segmented compression, and so on) and can apply several algorithms in a single pass. It supports both lossless and lossy compression and accepts parameters such as a maximum error. ACL is a first-class library for game animation compression: it dramatically reduces the memory animations occupy at runtime while preserving visual correctness, and it supports multiple engines and languages. Large titles such as Apex Legends Mobile, PUBG Mobile, and Valorant use it for animation compression.
Compression Algorithms
ConstantTrack
http://nfrechette.github.io/2016/11/03/anim_compression_constant_tracks/
In any clip, some bones never move. There are generally two cases: the bone is either in its base pose or holds some other constant value. A single bit can therefore mark whether a track is constant; in the second case the first frame is additionally stored. This compresses away every key in the track.
This scheme achieves a much higher compression ratio when the data is in local space, because most bones never move relative to their parent.
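A minimal sketch of the detection step on a scalar track, assuming a hypothetical is_constant_track helper and a plain tolerance (ACL's real test is driven by the error metric and shell distance, not a raw tolerance):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: a track counts as constant when every sample stays
// within a tolerance of the first sample; only that first sample is kept.
bool is_constant_track(const std::vector<float>& samples, float tolerance)
{
    for (float sample : samples)
        if (std::fabs(sample - samples.front()) > tolerance)
            return false;
    return true; // an empty or single-sample track is trivially constant
}
```

A true constant track then costs one bit plus, in the non-base-pose case, a single stored sample.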
Loop
http://nfrechette.github.io/2022/04/03/anim_compression_looping/
Looping clips are handled with either Clamp or Wrap; Wrap is the current default. Wrap removes the last frame (the one identical to the first) and relies on interpolation between the second-to-last frame and the first frame to represent it. This targets the mainstream convention of authoring looping clips with the last frame equal to the first.
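Wrap sampling can be sketched like this for a scalar track (hypothetical helper): the stored track has the duplicated last frame removed, and sampling past the final stored frame interpolates back toward frame 0.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: 'samples' holds num_samples - 1 values (the
// duplicated last frame stripped); t is a frame-space time in [0, size).
float sample_wrapped(const std::vector<float>& samples, float t)
{
    const std::size_t n = samples.size();
    const std::size_t i0 = static_cast<std::size_t>(t) % n;
    const std::size_t i1 = (i0 + 1) % n; // wraps around to the first frame
    const float alpha = t - std::floor(t);
    return samples[i0] + (samples[i1] - samples[i0]) * alpha;
}
```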
LinearKeyReduction
http://nfrechette.github.io/2016/12/07/anim_compression_key_reduction/
Removing linear keys is conceptually simple: pick a frame, check whether it can be represented by interpolating its neighbours, evaluate whether the interpolation error is acceptable, and remove the frame if it is. Evaluating that error is the complicated part: it is not a plain subtraction against the original frame, because removing a frame also affects the accuracy of the surrounding frames.
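A single-pass sketch of the idea on a scalar track (hypothetical helper). Note that it keeps using the next raw key as the right-hand interpolation neighbour; a faithful implementation has to re-evaluate the error against the keys that actually remain after each removal:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Drop a key when interpolating between the last kept key and the next raw
// key reproduces it within max_error.
std::vector<float> remove_linear_keys(const std::vector<float>& keys, float max_error)
{
    if (keys.size() < 3)
        return keys;
    std::vector<float> kept{keys.front()};
    std::size_t last_kept = 0;
    for (std::size_t i = 1; i + 1 < keys.size(); ++i)
    {
        const float t = float(i - last_kept) / float(i + 1 - last_kept);
        const float interpolated = keys[last_kept] + (keys[i + 1] - keys[last_kept]) * t;
        if (std::fabs(interpolated - keys[i]) > max_error)
        {
            kept.push_back(keys[i]); // not reproducible, keep it
            last_kept = i;
        }
    }
    kept.push_back(keys.back());
    return kept;
}
```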
UniformSegment
http://nfrechette.github.io/2016/11/10/anim_compression_uniform_segmenting/
Each track is split into segments. The underlying idea is that the samples within a segment are closer to each other, which gives better precision and compression ratios; almost all of the compression operates on segments rather than whole tracks. Segment sizing can also exploit the cache to speed up decompression: decompressing a frame only ever touches nearby frames, and those happen to sit on the same page.
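The segment sizing can be sketched as follows (hypothetical helper; ACL's real splitter applies further rebalancing heuristics on top of this):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Split num_samples into ceil(num_samples / ideal_size) segments of nearly
// equal size, spreading the remainder so the last segment is not tiny.
std::vector<uint32_t> split_into_segments(uint32_t num_samples, uint32_t ideal_size)
{
    const uint32_t num_segments = (num_samples + ideal_size - 1) / ideal_size;
    std::vector<uint32_t> sizes(num_segments, num_samples / num_segments);
    for (uint32_t i = 0; i < num_samples % num_segments; ++i)
        sizes[i]++;
    return sizes;
}
```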
RangeReduction
http://nfrechette.github.io/2016/11/09/anim_compression_range_reduction/
The values in a real animation track sit within a limited range, not the full range representable by a float, so a simple transform can map every value into [0, 1]:
normalized value = (input value - range minimum) / range extent
This significantly speeds up decompression and improves compression precision.
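The mapping and its inverse for one scalar channel, as a sketch:

```cpp
#include <cassert>

// Per-channel range: minimum and extent (maximum - minimum).
struct channel_range { float minimum; float extent; };

// Map a raw value into [0, 1] using the track's range.
float normalize(float value, const channel_range& range)
{
    return range.extent != 0.0f ? (value - range.minimum) / range.extent : 0.0f;
}

// Invert the mapping at decompression time.
float denormalize(float normalized, const channel_range& range)
{
    return normalized * range.extent + range.minimum;
}
```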
SimpleQuantization
http://nfrechette.github.io/2016/11/15/anim_compression_quantization/
In short, a float is mapped to an N-bit integer. After RangeReduction the values are already in [0, 1]; to map to a 16-bit integer, multiply by 65535 (the largest 16-bit value) and round.
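The 16-bit round trip, as a sketch:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// A normalized value in [0, 1] becomes a 16-bit integer by scaling with
// the largest representable value, 65535.
uint16_t quantize16(float normalized)
{
    return static_cast<uint16_t>(std::lround(normalized * 65535.0f));
}

float dequantize16(uint16_t packed)
{
    return static_cast<float>(packed) / 65535.0f;
}
```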
AdvancedQuantization
http://nfrechette.github.io/2017/03/12/anim_compression_advanced_quantization/
As above, values are mapped to N-bit integers, except that N can vary, because different tracks need different precision. In local space, bones high in the hierarchy (the root and so on) usually need more precision: every descendant's transform is multiplied by its ancestors', so error in an upper bone affects the visual result far more than error in a lower one. Similarly, a scale track may need less precision than a translation track, and different segments of the same track may need different precision. The algorithm that picks N is too involved to cover here.
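The same round trip with N as a parameter, as a sketch; choosing N per track and per segment is the part ACL's optimizer handles:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Quantize a normalized value in [0, 1] to num_bits bits (1..31).
uint32_t quantize_n(float normalized, uint32_t num_bits)
{
    const uint32_t max_value = (1u << num_bits) - 1u;
    return static_cast<uint32_t>(std::lround(normalized * static_cast<float>(max_value)));
}

float dequantize_n(uint32_t packed, uint32_t num_bits)
{
    const uint32_t max_value = (1u << num_bits) - 1u;
    return static_cast<float>(packed) / static_cast<float>(max_value);
}
```

More bits buy a smaller round-trip error, which is why high-impact tracks get larger N.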
Compression Code Walkthrough
The compression lives mainly in acl::compress_track_list. Before compressing, your animation data must be converted into ACL's track format, for example acl::track_array_qvvf; see BuildACLTransformTrackArray in the UE plugin for reference. Note that output_index values must be contiguous, unique, and start at 0, and parent_index should be set correctly, since it affects computations such as the compressed bit counts.
InitializeContext
Creates a single segment and initializes the bone streams.
The initialization is basic and self-explanatory:
out_clip_context.segments = allocate_type_array<segment_context>(allocator, 1);
out_clip_context.ranges = nullptr;
out_clip_context.metadata = allocate_type_array<transform_metadata>(allocator, num_transforms);
out_clip_context.leaf_transform_chains = nullptr;
out_clip_context.sorted_transforms_parent_first = allocate_type_array<uint32_t>(allocator, num_transforms);
out_clip_context.clip_shell_metadata = nullptr;
out_clip_context.contributing_error = nullptr;
out_clip_context.num_segments = 1;
out_clip_context.num_bones = num_transforms;
out_clip_context.num_samples_allocated = num_samples;
out_clip_context.num_samples = num_samples;
out_clip_context.sample_rate = sample_rate;
out_clip_context.duration = track_list.get_finite_duration();
out_clip_context.looping_policy = looping_policy;
out_clip_context.additive_format = additive_format;
out_clip_context.are_rotations_normalized = false;
out_clip_context.are_translations_normalized = false;
out_clip_context.are_scales_normalized = false;
out_clip_context.has_additive_base = additive_format != additive_clip_format8::none;
out_clip_context.num_leaf_transforms = 0;
out_clip_context.allocator = &allocator;
The bone track list is converted into bone streams; every later pass mutates the bone streams, while the track list stays read-only throughout the compression pipeline:
for (uint32_t transform_index = 0; transform_index < num_transforms; ++transform_index)
{
const track_qvvf& track = track_list[transform_index];
const track_desc_transformf& desc = track.get_description();
transform_streams& bone_stream = bone_streams[transform_index];
bone_stream.segment = &segment;
bone_stream.bone_index = transform_index;
bone_stream.parent_bone_index = desc.parent_index;
bone_stream.output_index = desc.output_index;
bone_stream.default_value = desc.default_value;
bone_stream.rotations = rotation_track_stream(allocator, num_samples, sizeof(rtm::quatf), sample_rate, rotation_format8::quatf_full);
bone_stream.translations = translation_track_stream(allocator, num_samples, sizeof(rtm::vector4f), sample_rate, vector_format8::vector3f_full);
bone_stream.scales = scale_track_stream(allocator, num_samples, sizeof(rtm::vector4f), sample_rate, vector_format8::vector3f_full);
for (uint32_t sample_index = 0; sample_index < num_samples; ++sample_index)
{
const rtm::qvvf& transform = track[sample_index];
// If we request raw data and we are already normalized, retain the original value
// otherwise we normalize for safety
rtm::quatf rotation;
if (settings.rotation_format != rotation_format8::quatf_full || !rtm::quat_is_normalized(transform.rotation))
rotation = rtm::quat_normalize(transform.rotation);
else
rotation = transform.rotation;
are_samples_valid &= rtm::quat_is_finite(rotation);
are_samples_valid &= rtm::vector_is_finite3(transform.translation);
are_samples_valid &= rtm::vector_is_finite3(transform.scale);
bone_stream.rotations.set_raw_sample(sample_index, rotation);
bone_stream.translations.set_raw_sample(sample_index, transform.translation);
bone_stream.scales.set_raw_sample(sample_index, transform.scale);
}
transform_metadata& metadata = out_clip_context.metadata[transform_index];
metadata.transform_chain = nullptr;
metadata.parent_index = desc.parent_index;
metadata.precision = desc.precision;
metadata.shell_distance = desc.shell_distance;
out_clip_context.sorted_transforms_parent_first[transform_index] = transform_index;
}
Initializing the hierarchy information
The first step computes which tracks are leaves, marking them in a bitset:
// Initialize our hierarchy information
if (num_transforms != 0)
{
// Calculate which bones are leaf bones that have no children
bitset_description bone_bitset_desc = bitset_description::make_from_num_bits(num_transforms);
const size_t bitset_size = bone_bitset_desc.get_size();
uint32_t* is_leaf_bitset = allocate_type_array<uint32_t>(allocator, bitset_size);
bitset_reset(is_leaf_bitset, bone_bitset_desc, false);
// By default everything is marked as a leaf
// We'll then iterate on every transform and mark their parent as non-leaf
bitset_set_range(is_leaf_bitset, bone_bitset_desc, 0, num_transforms, true);
// Move and validate the input data
for (uint32_t transform_index = 0; transform_index < num_transforms; ++transform_index)
{
const transform_metadata& metadata = out_clip_context.metadata[transform_index];
const bool is_root = metadata.parent_index == k_invalid_track_index;
// If we have a parent, mark it as not being a leaf bone (it has at least one child)
if (!is_root)
bitset_set(is_leaf_bitset, bone_bitset_desc, metadata.parent_index, false);
}
const uint32_t num_leaf_transforms = bitset_count_set_bits(is_leaf_bitset, bone_bitset_desc);
out_clip_context.num_leaf_transforms = num_leaf_transforms;
A transform_chain is built from each leaf, represented as one bitset per leaf (with the bits along the chain set). To look up the chain for a non-leaf bone, the chains are scanned in order and the first one containing that bone is used:
uint32_t* leaf_transform_chains = allocate_type_array<uint32_t>(allocator, num_leaf_transforms * bitset_size);
out_clip_context.leaf_transform_chains = leaf_transform_chains;
uint32_t leaf_index = 0;
for (uint32_t transform_index = 0; transform_index < num_transforms; ++transform_index)
{
if (!bitset_test(is_leaf_bitset, bone_bitset_desc, transform_index))
continue; // Skip non-leaf bones
uint32_t* bone_chain = leaf_transform_chains + (leaf_index * bitset_size);
bitset_reset(bone_chain, bone_bitset_desc, false);
uint32_t chain_bone_index = transform_index;
while (chain_bone_index != k_invalid_track_index)
{
bitset_set(bone_chain, bone_bitset_desc, chain_bone_index, true);
transform_metadata& metadata = out_clip_context.metadata[chain_bone_index];
// We assign a bone chain the first time we find a bone that isn't part of one already
if (metadata.transform_chain == nullptr)
metadata.transform_chain = bone_chain;
chain_bone_index = metadata.parent_index;
}
leaf_index++;
}
The tracks are then sorted parent-first, then by sibling order (a pre-order traversal). The original order is not overwritten; the result is written to sorted_transforms_parent_first. The later passes that compute optimal bit rates and shell distances need this pre-order traversal:
sort_transform_indices_parent_first(
transform_clip_context_adapter_t(out_clip_context),
out_clip_context.sorted_transforms_parent_first,
num_transforms);
}
OptimizeLooping
The looping handled here does not mean a clip that decomposes into several identical sub-sequences of which only one is kept. Such clips rarely exist, and even if they did they would have been trimmed at import. This step is the Wrap looping described earlier: many looping clips keep the last frame identical to the first (UE does this), so that last frame can simply be dropped and reconstructed by interpolation during decompression.
const bool is_wrapping = is_clip_looping(
transform_clip_context_adapter_t(context),
transform_clip_context_adapter_t(additive_base_clip_context),
*settings.error_metric);
if (is_wrapping)
{
// Our last sample matches the first, we can wrap
context.num_samples--;
context.looping_policy = sample_looping_policy::wrap;
segment.num_samples--;
const uint32_t num_transforms = segment.num_bones;
for (uint32_t transform_index = 0; transform_index < num_transforms; ++transform_index)
{
segment.bone_streams[transform_index].rotations.strip_last_sample();
segment.bone_streams[transform_index].translations.strip_last_sample();
if (context.has_scale)
segment.bone_streams[transform_index].scales.strip_last_sample();
}
}
ExtractBoneRange
This computes each bone stream's value range and stores it in the context's transform_range array. The ranges are later used for bit-rate compression and for deciding whether a track can be treated as constant within a given threshold:
inline void extract_bone_ranges_impl(const segment_context& segment, transform_range* bone_ranges)
{
const bool has_scale = segment_context_has_scale(segment);
for (uint32_t bone_index = 0; bone_index < segment.num_bones; ++bone_index)
{
const transform_streams& bone_stream = segment.bone_streams[bone_index];
transform_range& bone_range = bone_ranges[bone_index];
bone_range.rotation = calculate_track_range(bone_stream.rotations, true);
bone_range.translation = calculate_track_range(bone_stream.translations, false);
if (has_scale)
bone_range.scale = calculate_track_range(bone_stream.scales, false);
else
bone_range.scale = track_stream_range();
}
}
inline void extract_clip_bone_ranges(iallocator& allocator, clip_context& context)
{
context.ranges = allocate_type_array<transform_range>(allocator, context.num_bones);
ACL_ASSERT(context.num_segments == 1, "context must contain a single segment!");
const segment_context& segment = context.segments[0];
acl_impl::extract_bone_ranges_impl(segment, context.ranges);
}
CompactConstantStream
Compacting constant tracks uses the bone ranges and shell distances computed above to decide whether a track can be treated as constant. For a constant track it further checks whether the constant equals the track's default value. A constant track's bone stream is shrunk to just its first sample; a default track is flagged so it can be stripped entirely later:
if (are_rotations_constant(settings, context, additive_base_clip_context, transform_index))
{
rotation_track_stream constant_stream(allocator, 1, bone_stream.rotations.get_sample_size(), bone_stream.rotations.get_sample_rate(), bone_stream.rotations.get_rotation_format());
const rtm::vector4f default_bind_rotation = rtm::quat_to_vector(desc.default_value.rotation);
rtm::vector4f rotation = num_samples != 0 ? bone_stream.rotations.get_raw_sample<rtm::vector4f>(0) : default_bind_rotation;
bone_stream.is_rotation_constant = true;
if (are_rotations_default(settings, context, additive_base_clip_context, desc, transform_index))
{
bone_stream.is_rotation_default = true;
rotation = default_bind_rotation;
}
constant_stream.set_raw_sample(0, rotation);
bone_stream.rotations = std::move(constant_stream);
bone_range.rotation = track_stream_range::from_min_extent(rotation, rtm::vector_zero());
}
NormalizeClipStreams
This corresponds to RangeReduction, but here it is a prerequisite for QuantizeStreams: if bit-rate compression cannot be applied, this step is pointless, so only non-raw tracks are normalized. The result is a track whose values lie in [0, 1]:
inline void normalize_clip_streams(clip_context& context, range_reduction_flags8 range_reduction)
{
ACL_ASSERT(context.num_segments == 1, "context must contain a single segment!");
segment_context& segment = context.segments[0];
const bool has_scale = segment_context_has_scale(segment);
if (are_any_enum_flags_set(range_reduction, range_reduction_flags8::rotations))
{
normalize_rotation_streams(segment.bone_streams, context.ranges, segment.num_bones);
context.are_rotations_normalized = true;
}
if (are_any_enum_flags_set(range_reduction, range_reduction_flags8::translations))
{
normalize_translation_streams(segment.bone_streams, context.ranges, segment.num_bones);
context.are_translations_normalized = true;
}
if (has_scale && are_any_enum_flags_set(range_reduction, range_reduction_flags8::scales))
{
normalize_scale_streams(segment.bone_streams, context.ranges, segment.num_bones);
context.are_scales_normalized = true;
}
}
SegmentStream
Next the whole clip is split into segments. The segment count is computed from the samples-per-segment setting, followed by a fair amount of rebalancing so the earlier segments do not all fill up identically while the last one ends up nearly empty.
// Effectively num_samples / ideal_num_samples, rounded up
const uint32_t num_estimated_segments = (num_samples + settings.ideal_num_samples - 1) / settings.ideal_num_samples;
Then the streams are copied from the clip into each segment according to the per-segment sizes:
for (uint32_t segment_index = 0; segment_index < num_segments; ++segment_index)
{
const uint32_t num_samples_in_segment = num_samples_per_segment[segment_index];
segment_context& segment = clip.segments[segment_index];
segment.clip = &clip;
segment.bone_streams = allocate_type_array<transform_streams>(allocator, clip.num_bones);
segment.ranges = nullptr;
segment.contributing_error = nullptr;
segment.num_bones = clip.num_bones;
segment.num_samples_allocated = num_samples_in_segment;
segment.num_samples = num_samples_in_segment;
segment.clip_sample_offset = clip_sample_index;
segment.segment_index = segment_index;
segment.are_rotations_normalized = false;
segment.are_translations_normalized = false;
segment.are_scales_normalized = false;
segment.animated_rotation_bit_size = 0;
segment.animated_translation_bit_size = 0;
segment.animated_scale_bit_size = 0;
segment.animated_pose_bit_size = 0;
segment.animated_data_size = 0;
segment.range_data_size = 0;
segment.total_header_size = 0;
for (uint32_t bone_index = 0; bone_index < clip.num_bones; ++bone_index)
{
const transform_streams& clip_bone_stream = clip_segment->bone_streams[bone_index];
transform_streams& segment_bone_stream = segment.bone_streams[bone_index];
segment_bone_stream.segment = &segment;
segment_bone_stream.bone_index = bone_index;
segment_bone_stream.parent_bone_index = clip_bone_stream.parent_bone_index;
segment_bone_stream.output_index = clip_bone_stream.output_index;
segment_bone_stream.default_value = clip_bone_stream.default_value;
if (clip_bone_stream.is_rotation_constant)
{
segment_bone_stream.rotations = clip_bone_stream.rotations.duplicate();
}
else
{
const uint32_t sample_size = clip_bone_stream.rotations.get_sample_size();
rotation_track_stream rotations(allocator, num_samples_in_segment, sample_size, clip_bone_stream.rotations.get_sample_rate(), clip_bone_stream.rotations.get_rotation_format(), clip_bone_stream.rotations.get_bit_rate());
std::memcpy(rotations.get_raw_sample_ptr(0), clip_bone_stream.rotations.get_raw_sample_ptr(clip_sample_index), size_t(num_samples_in_segment) * sample_size);
segment_bone_stream.rotations = std::move(rotations);
}
}
}
Recompute ranges and normalize
If the clip was actually split into multiple segments, each segment's ranges are recomputed and its streams normalized again:
// If we have a single segment, skip segment range reduction since it won't help
if (range_reduction != range_reduction_flags8::none && lossy_clip_context.num_segments > 1)
{
// Extract and fixup our segment wide ranges per bone
extract_segment_bone_ranges(allocator, lossy_clip_context);
// Normalize our samples into the segment wide ranges per bone
normalize_segment_streams(lossy_clip_context, range_reduction);
}
QuantizeStreams
There are two cases. If bit-rate compression is not used, NormalizeClipStream was already skipped, no per-track bit count is computed, and the fixed_quantize path runs. If bit-rate compression is used (the format is variable under the default settings), the streams are already normalized, so a target bit count is computed for each track first and the variable_quantize path runs.
Do not let "variable" and "fixed" mislead you into mapping one onto AdvancedQuantization and the other onto SimpleQuantization. In fact, fixed means no bit-rate compression at all, while variable corresponds to AdvancedQuantization.
inline void quantize_streams(iallocator& allocator, clip_context& clip, const compression_settings& settings, const clip_context& raw_clip_context, const clip_context& additive_base_clip_context, const output_stats& out_stats)
{
const bool is_rotation_variable = is_rotation_format_variable(settings.rotation_format);
const bool is_translation_variable = is_vector_format_variable(settings.translation_format);
const bool is_scale_variable = is_vector_format_variable(settings.scale_format);
const bool is_any_variable = is_rotation_variable || is_translation_variable || is_scale_variable;
quantization_context context(allocator, clip, raw_clip_context, additive_base_clip_context, settings);
for (segment_context& segment : clip.segment_iterator())
{
context.set_segment(segment);
// If we use a variable bit rate, run our optimization algorithm to find the optimal bit rates
if (is_any_variable)
find_optimal_bit_rates(context);
// If we need the contributing error of each frame, find it now before we quantize
if (settings.metadata.include_contributing_error)
find_contributing_error(context);
// Quantize our streams now that we found the optimal bit rates
quantize_all_streams(context);
}
}
inline void quantize_all_streams(quantization_context& context)
{
const bool is_rotation_variable = is_rotation_format_variable(context.rotation_format);
const bool is_translation_variable = is_vector_format_variable(context.translation_format);
const bool is_scale_variable = is_vector_format_variable(context.scale_format);
for (uint32_t bone_index = 0; bone_index < context.num_bones; ++bone_index)
{
const transform_bit_rates& bone_bit_rate = context.bit_rate_per_bone[bone_index];
if (is_rotation_variable)
quantize_variable_rotation_stream(context, bone_index, bone_bit_rate.rotation);
else
quantize_fixed_rotation_stream(context, bone_index, context.rotation_format);
if (is_translation_variable)
quantize_variable_translation_stream(context, bone_index, bone_bit_rate.translation);
else
quantize_fixed_translation_stream(context, bone_index, context.translation_format);
if (context.has_scale)
{
if (is_scale_variable)
quantize_variable_scale_stream(context, bone_index, bone_bit_rate.scale);
else
quantize_fixed_scale_stream(context, bone_index, context.scale_format);
}
}
}
FixedQuantize
For rotations, if the format is drop_w (the default), the w component is dropped here; beyond that it is a plain memory copy (minus the alignment padding). Translations are likewise a plain memory copy:
inline void quantize_fixed_rotation_stream(iallocator& allocator, const rotation_track_stream& raw_stream, rotation_format8 rotation_format, rotation_track_stream& out_quantized_stream)
{
ACL_ASSERT(raw_stream.get_sample_size() == sizeof(rtm::vector4f), "Unexpected rotation sample size. %u != %zu", raw_stream.get_sample_size(), sizeof(rtm::vector4f));
const uint32_t num_samples = raw_stream.get_num_samples();
const uint32_t rotation_sample_size = get_packed_rotation_size(rotation_format);
const float sample_rate = raw_stream.get_sample_rate();
rotation_track_stream quantized_stream(allocator, num_samples, rotation_sample_size, sample_rate, rotation_format);
for (uint32_t sample_index = 0; sample_index < num_samples; ++sample_index)
{
const rtm::quatf rotation = raw_stream.get_raw_sample<rtm::quatf>(sample_index);
uint8_t* quantized_ptr = quantized_stream.get_raw_sample_ptr(sample_index);
switch (rotation_format)
{
case rotation_format8::quatf_full:
pack_vector4_128(rtm::quat_to_vector(rotation), quantized_ptr);
break;
case rotation_format8::quatf_drop_w_full:
pack_vector3_96(rtm::quat_to_vector(rotation), quantized_ptr);
break;
case rotation_format8::quatf_drop_w_variable:
default:
ACL_ASSERT(false, "Invalid or unsupported rotation format: " ACL_ASSERT_STRING_FORMAT_SPECIFIER, get_rotation_format_name(rotation_format));
break;
}
}
out_quantized_stream = std::move(quantized_stream);
}
VariableQuantize
pack_vector3_uXX_unsafe maps a value in [0, 1] to an N-bit integer, exactly as described in the SimpleQuantization and AdvancedQuantization sections above, with some special handling for constant tracks:
inline void quantize_variable_translation_stream(quantization_context& context, const transform_streams& raw_track, const transform_streams& lossy_track, uint8_t bit_rate, translation_track_stream& out_quantized_stream)
{
const translation_track_stream& raw_translations = raw_track.translations;
const translation_track_stream& lossy_translations = lossy_track.translations;
const uint32_t num_samples = is_constant_bit_rate(bit_rate) ? 1 : lossy_translations.get_num_samples();
const uint32_t sample_size = sizeof(uint64_t) * 2;
const float sample_rate = lossy_translations.get_sample_rate();
translation_track_stream quantized_stream(context.allocator, num_samples, sample_size, sample_rate, vector_format8::vector3f_variable, bit_rate);
if (is_constant_bit_rate(bit_rate))
{
#if defined(ACL_IMPL_ENABLE_WEIGHTED_AVERAGE_CONSTANT_SUB_TRACKS)
const track_stream_range& bone_range = context.segment->ranges[lossy_track.bone_index].translation;
const rtm::vector4f normalized_translation = clip_range.get_weighted_average();
#else
const rtm::vector4f normalized_translation = lossy_track.constant_translation;
#endif
uint8_t* quantized_ptr = quantized_stream.get_raw_sample_ptr(0);
pack_vector3_u48_unsafe(normalized_translation, quantized_ptr);
}
else
{
const uint32_t num_bits_at_bit_rate = get_num_bits_at_bit_rate(bit_rate);
for (uint32_t sample_index = 0; sample_index < num_samples; ++sample_index)
{
uint8_t* quantized_ptr = quantized_stream.get_raw_sample_ptr(sample_index);
if (is_raw_bit_rate(bit_rate))
{
const rtm::vector4f translation = raw_translations.get_raw_sample<rtm::vector4f>(context.segment_sample_start_index + sample_index);
pack_vector3_96(translation, quantized_ptr);
}
else
{
const rtm::vector4f translation = lossy_translations.get_raw_sample<rtm::vector4f>(sample_index);
pack_vector3_uXX_unsafe(translation, num_bits_at_bit_rate, quantized_ptr);
}
}
}
}
StripKeyFrames
This step is similar to the LinearKeyReduction algorithm, but it also covers the removal of trivial frames and similar cases. It discards as many frames as possible within the configured error threshold and strip proportion; it is not analyzed in detail here.
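A rough sketch of the selection criterion on a scalar track (hypothetical helper). It ignores the fact that stripping one frame changes its neighbours' interpolation error, which the real pass does account for:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Collect the indices of frames that are reproducible from their immediate
// neighbours within error_threshold, stopping at the strip quota.
std::vector<std::size_t> pick_frames_to_strip(const std::vector<float>& frames,
                                              float error_threshold,
                                              float max_strip_proportion)
{
    std::vector<std::size_t> stripped;
    const std::size_t max_stripped =
        static_cast<std::size_t>(max_strip_proportion * static_cast<float>(frames.size()));
    // The first and last frames are always kept.
    for (std::size_t i = 1; i + 1 < frames.size() && stripped.size() < max_stripped; ++i)
    {
        const float interpolated = (frames[i - 1] + frames[i + 1]) * 0.5f;
        if (std::fabs(interpolated - frames[i]) <= error_threshold)
            stripped.push_back(i);
    }
    return stripped;
}
```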
Tips
Source data space
Testing with the same animations, component-space data only reached a compression ratio of about 50%, while switching to local space raised it to 80-90% even without counting the removed position tracks, and to a stable 90%+ once those tracks are counted. Value ranges in local space are far smaller than in component space, which strongly affects the effectiveness of RangeReduction, quantization, and frame stripping. Judging from the code, ACL also assumes local-space input for its variable bit-rate computation and related steps.
Version compatibility
In testing, data compressed with an older version of the ACL library can be decompressed with a newer version. UE's default setting only supports the latest data version (latest); it can be changed manually to any to decompress older data. The any setting loses the decompression-speed advantage of targeting a specific version, but since decompression time is tiny to begin with, this is entirely acceptable.
/** The decompression settings used by ACL */
struct UEDefaultDecompressionSettings : public acl::default_transform_decompression_settings
{
// Support any version (changed from the default of latest only)
static constexpr acl::compressed_tracks_version16 version_supported() { return acl::compressed_tracks_version16::any; }
#if UE_BUILD_SHIPPING
// Shipping builds do not need safety checks, by then the game has been tested enough
// Only data corruption could cause a safety check to fail
// We keep this disabled regardless because it is generally better to output a T-pose than to have a
// potential crash. Corruption can happen and it would be unfortunate if a demo or playtest failed
// as a result of a crash that we can otherwise recover from.
//static constexpr bool skip_initialize_safety_checks() { return true; }
#endif
};
struct UEDebugDecompressionSettings : public acl::debug_transform_decompression_settings
{
// Support any version (changed from the default of latest only)
static constexpr acl::compressed_tracks_version16 version_supported() { return acl::compressed_tracks_version16::any; }
};
struct UEDefaultDatabaseSettings final : public acl::default_database_settings
{
// Support any version (changed from the default of latest only)
static constexpr acl::compressed_tracks_version16 version_supported() { return acl::compressed_tracks_version16::any; }
};