GAMES202 Assignment 5: Real-Time Ray Tracing Denoising (joint bilateral filtering, multi-frame reprojection and accumulation, À-Trous wavelet acceleration of single-frame denoising)

Tasks

        1. Implement single-frame denoising

        2. Implement multi-frame reprojection

        3. Implement multi-frame accumulation

        Bonus: accelerate single-frame denoising with an À-Trous wavelet

Implementation

        Single-frame denoising

        This part is straightforward: it follows directly from the given joint bilateral filter kernel formula.
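For reference, the per-neighbor weight that the code below evaluates can be written out as follows (a reconstruction from the code itself; the first term uses world-space positions, and σ_p, σ_c, σ_n, σ_d correspond to m_sigmaCoord, m_sigmaColor, m_sigmaNormal, m_sigmaPlane):

```latex
J(i,j) = \exp\!\Big(
  -\frac{\lVert p(i)-p(j)\rVert^2}{2\sigma_p^2}
  -\frac{\lVert C(i)-C(j)\rVert^2}{2\sigma_c^2}
  -\frac{D_{\mathrm{normal}}(i,j)^2}{2\sigma_n^2}
  -\frac{D_{\mathrm{plane}}(i,j)^2}{2\sigma_d^2}
\Big)
```

where the normal term is the angle between normals, D_normal(i,j) = arccos(n(i)·n(j)), and the plane term projects the offset onto the center normal, D_plane(i,j) = n(i)·(p(j)−p(i))/‖p(j)−p(i)‖. The filtered color is the J-weighted average of the neighborhood colors.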

        

Buffer2D<Float3> Denoiser::Filter(const FrameInfo &frameInfo) {
    int height = frameInfo.m_beauty.m_height;
    int width = frameInfo.m_beauty.m_width;
    Buffer2D<Float3> filteredImage = CreateBuffer2D<Float3>(width, height);
    int kernelRadius = 16;
#pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // TODO: Joint bilateral filter
            int x_min = std::max(0, x - kernelRadius);
            int x_max = std::min(width - 1, x + kernelRadius);
            int y_min = std::max(0, y - kernelRadius);
            int y_max = std::min(height - 1, y + kernelRadius);

            auto center_color = frameInfo.m_beauty(x, y);
            auto center_normal = frameInfo.m_normal(x, y);
            auto center_position = frameInfo.m_position(x, y);

            Float3 finalColor(0.0f);
            float weight = 0.0f;

            for (int i = x_min; i <= x_max; i++) {
                for (int j = y_min; j <= y_max; j++) {
                    auto position = frameInfo.m_position(i, j);
                    auto normal = frameInfo.m_normal(i, j);
                    auto color = frameInfo.m_beauty(i, j);

                    // Position term
                    float distance2 = SqrDistance(position, center_position);
                    float P = distance2 / (2 * m_sigmaCoord * m_sigmaCoord);

                    // Color term
                    float C = SqrDistance(color, center_color) /
                              (2 * m_sigmaColor * m_sigmaColor);

                    // Normal term: angle between the two normals
                    float N = SafeAcos(Dot(center_normal, normal));
                    N *= N;
                    N /= 2.0f * m_sigmaNormal * m_sigmaNormal;

                    // Plane term: skip the center pixel so we never
                    // normalize a zero-length vector
                    float D = 0.0f;
                    if (distance2 > 0.0f) {
                        auto direction = Normalize(position - center_position);
                        float plane = Dot(direction, center_normal);
                        D = plane * plane / (2 * m_sigmaPlane * m_sigmaPlane);
                    }

                    float w = std::exp(-P - C - N - D);
                    finalColor += color * w;
                    weight += w;
                }
            }

            // weight >= 1 because the center pixel contributes exp(0)
            filteredImage(x, y) = finalColor / weight;
        }
    }
    return filteredImage;
}

        Frame reprojection
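With M denoting an object's model (object-to-world) matrix, P V the world-to-screen transform, and subscripts i and i−1 the current and previous frame, the back-projection evaluated below is:

```latex
x_{i-1}^{\mathrm{screen}} \;=\; P_{i-1} V_{i-1}\, M_{i-1}\, M_i^{-1}\, x_i^{\mathrm{world}}
```

In the code, P_{i-1} V_{i-1} is `preWorldToScreen`, M_{i-1} is `m_preFrameInfo.m_matrix[id]`, and M_i^{-1} is `Inverse(frameInfo.m_matrix[id])`.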

        

        Using the formula above, we can find where a world-space position landed in the previous frame's screen coordinates. Once we have that position, first check whether it falls outside the screen. If it is on screen, check whether it belongs to the same object; if not, the previous frame's information cannot be reused, otherwise ghosting artifacts appear.

        

void Denoiser::Reprojection(const FrameInfo &frameInfo) {
    int height = m_accColor.m_height;
    int width = m_accColor.m_width;
    Matrix4x4 preWorldToScreen =
        m_preFrameInfo.m_matrix[m_preFrameInfo.m_matrix.size() - 1];
    Matrix4x4 preWorldToCamera =
        m_preFrameInfo.m_matrix[m_preFrameInfo.m_matrix.size() - 2];
#pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // TODO: Reproject
            auto id = frameInfo.m_id(x, y);
            auto world_pos = frameInfo.m_position(x, y);
            m_valid(x, y) = false;
            m_misc(x, y) = Float3(0.f);
            // Background pixels carry no object id
            if (id == -1)
                continue;

            // World -> object (current frame) -> world (previous frame)
            // -> screen (previous frame)
            auto world_to_local = Inverse(frameInfo.m_matrix[id]);
            auto local_to_pre_world = m_preFrameInfo.m_matrix[id];

            auto local_pos = world_to_local(world_pos, Float3::EType::Point);
            auto pre_world_pos = local_to_pre_world(local_pos, Float3::EType::Point);
            auto pre_screen_coord = preWorldToScreen(pre_world_pos, Float3::EType::Point);

            // Reject samples that fall outside the previous frame
            if (pre_screen_coord.x < 0 || pre_screen_coord.x >= width ||
                pre_screen_coord.y < 0 || pre_screen_coord.y >= height)
                continue;

            // Only reuse history if it was the same object, otherwise
            // ghosting appears
            auto pre_id = m_preFrameInfo.m_id(int(pre_screen_coord.x),
                                              int(pre_screen_coord.y));
            if (pre_id == id) {
                m_valid(x, y) = true;
                m_misc(x, y) = m_accColor(int(pre_screen_coord.x),
                                          int(pre_screen_coord.y));
            }
        }
    }
    std::swap(m_misc, m_accColor);
}

        Frame accumulation

        First check whether the pixel existed in the previous frame. If it did, blend the history with the current frame by linear interpolation with weight α. If it did not, the pixel has no usable reference in the previous frame, so set α = 1 and keep only the current frame.

        For the clamp step, first compute the mean µ and standard deviation σ of C_i over a 7×7 neighborhood, then clamp the previous frame's color into the range (µ − kσ, µ + kσ).
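Putting the two steps together (with C̄ the accumulated color carried over by reprojection, C the current filtered frame, N = 49 samples in the 7×7 window, k = m_colorBoxK and α = m_alpha):

```latex
\mu = \frac{1}{N}\sum_{j} C(j), \qquad
\sigma^2 = \frac{1}{N}\sum_{j}\big(C(j)-\mu\big)^2
```

```latex
\bar{C} \leftarrow \mathrm{clamp}\big(\bar{C},\; \mu - k\sigma,\; \mu + k\sigma\big), \qquad
C_{\mathrm{out}} = \alpha\, C + (1-\alpha)\,\bar{C}
```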

void Denoiser::TemporalAccumulation(const Buffer2D<Float3> &curFilteredColor) {
    int height = m_accColor.m_height;
    int width = m_accColor.m_width;
    int kernelRadius = 3; // 7x7 neighborhood
#pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // TODO: Temporal clamp
            Float3 color = m_accColor(x, y);
            // Pixels without valid history fall back to the current frame
            float alpha = 1.0f;
            if (m_valid(x, y)) {
                alpha = m_alpha;
                int x_min = std::max(0, x - kernelRadius);
                int x_max = std::min(width - 1, x + kernelRadius);
                int y_min = std::max(0, y - kernelRadius);
                int y_max = std::min(height - 1, y + kernelRadius);

                auto mu = Float3(0.0);
                auto sqSum = Float3(0.0);
                for (int i = x_min; i <= x_max; i++) {
                    for (int j = y_min; j <= y_max; j++) {
                        mu += curFilteredColor(i, j);
                        sqSum += Sqr(curFilteredColor(i, j));
                    }
                }

                // Count the samples actually inside the (clipped) window
                int count = (x_max - x_min + 1) * (y_max - y_min + 1);
                mu = mu / float(count);
                // Var[X] = E[X^2] - E[X]^2
                auto sigma = SafeSqrt(sqSum / float(count) - Sqr(mu));

                // Clamp the history color into (mu - k*sigma, mu + k*sigma)
                color = Clamp(color, mu - sigma * m_colorBoxK,
                              mu + sigma * m_colorBoxK);
            }

            // TODO: Exponential moving average
            m_misc(x, y) = Lerp(color, curFilteredColor(x, y), alpha);
        }
    }
    std::swap(m_misc, m_accColor);
}

        À-Trous wavelet acceleration of single-frame denoising

        The course only gives a one-dimensional explanation. Since I have not studied signals and systems, my simple intuition is this: the farther away a point is, the smaller its contribution, so in distant regions we can pick a single point to stand in for the contribution of its whole neighborhood.

Buffer2D<Float3> Denoiser::AFilter(const FrameInfo &frameInfo) {
    int height = frameInfo.m_beauty.m_height;
    int width = frameInfo.m_beauty.m_width;
    Buffer2D<Float3> filteredImage = CreateBuffer2D<Float3>(width, height);
#pragma omp parallel for
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // TODO: Joint bilateral filter, à-trous sampling
            auto center_color = frameInfo.m_beauty(x, y);
            auto center_normal = frameInfo.m_normal(x, y);
            auto center_position = frameInfo.m_position(x, y);

            Float3 finalColor(0.0f);
            float weight = 0.0f;

            // All passes are accumulated into one weighted sum; the sample
            // stride doubles every pass, so a small 7x7 kernel covers an
            // exponentially growing footprint.
            int passes = 6;
            for (int pass = 0; pass < passes; pass++) {
                int stride = 1 << pass; // 2^pass, no std::pow needed
                for (int filterX = -3; filterX <= 3; filterX++) {
                    for (int filterY = -3; filterY <= 3; filterY++) {
                        int m = x + stride * filterX;
                        int n = y + stride * filterY;
                        // Skip samples outside the image
                        if (m < 0 || m >= width || n < 0 || n >= height)
                            continue;

                        auto position = frameInfo.m_position(m, n);
                        auto normal = frameInfo.m_normal(m, n);
                        auto color = frameInfo.m_beauty(m, n);

                        // Position term
                        float distance2 = SqrDistance(position, center_position);
                        float P = distance2 / (2 * m_sigmaCoord * m_sigmaCoord);

                        // Color term
                        float C = SqrDistance(color, center_color) /
                                  (2 * m_sigmaColor * m_sigmaColor);

                        // Normal term: angle between the two normals
                        float N = SafeAcos(Dot(center_normal, normal));
                        N *= N;
                        N /= 2.0f * m_sigmaNormal * m_sigmaNormal;

                        // Plane term: skip the center pixel so we never
                        // normalize a zero-length vector
                        float D = 0.0f;
                        if (distance2 > 0.0f) {
                            auto direction = Normalize(position - center_position);
                            float plane = Dot(direction, center_normal);
                            D = plane * plane / (2 * m_sigmaPlane * m_sigmaPlane);
                        }

                        float w = std::exp(-P - C - N - D);
                        finalColor += color * w;
                        weight += w;
                    }
                }
            }

            filteredImage(x, y) = finalColor / weight;
        }
    }
    return filteredImage;
}

Results

        Original noisy input

        Result after accelerated denoising, combined with frame reprojection and accumulation

        Original noisy input

        Result after accelerated denoising, combined with frame reprojection and accumulation

        The second image changes little because the filter kernel is small.

        The sample images provided with the assignment clearly used a much larger filter kernel.
