WebGPU (6): Color Rendering


This section shows how to paint a solid color in the window, which requires learning three new concepts:

  • Swap Chains
  • Texture Views
  • Render Passes

Swap Chain

To understand the Swap Chain, we need to know how a window actually gets drawn.
First, the render pipeline does not draw directly onto the texture that is currently displayed; otherwise we would see pixels changing while the frame is still incomplete. Instead, rendering happens in an off-screen texture, and only once drawing is finished is that texture presented to the surface.
Second, drawing takes a different amount of time than the interval imposed by the application's frame rate, so the GPU may have to wait until the next frame is needed. There may be multiple off-screen textures waiting in a queue to be presented, which amortizes fluctuations in rendering time.
Finally, these off-screen textures are reused as much as possible: as soon as a new texture is presented, the previous one can become the render target of the next frame. This whole texture-swapping mechanism is implemented by the Swap Chain object.
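The recycling mechanism described above can be modeled in a few lines of C++. This is a toy model for intuition only (the names `FakeSwapChain`, `acquire` and `present` are made up; none of this is the real WebGPU object):

```cpp
#include <queue>

// Toy model: a pool of N off-screen textures, identified by index.
// acquire() hands out the next free texture to draw into; present()
// returns it to the pool once it has been shown, so it can be reused.
struct FakeSwapChain {
    std::queue<int> freeTextures;

    explicit FakeSwapChain(int textureCount) {
        for (int i = 0; i < textureCount; ++i) freeTextures.push(i);
    }

    // Returns -1 when every texture is still waiting to be presented.
    int acquire() {
        if (freeTextures.empty()) return -1;
        int id = freeTextures.front();
        freeTextures.pop();
        return id;
    }

    // After presentation, the texture becomes the target of a later frame.
    void present(int id) { freeTextures.push(id); }
};
```

With a pool of two textures, a third acquire fails until one of the first two has been presented, which is exactly the "wait until the next frame is needed" situation described above.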

Because the GPU runs on its own timeline and our commands execute asynchronously, implementing a swap chain by hand would be quite involved, so we simply use the API!

Creating the Swap Chain

As before, we first specify the swap chain's parameters through a descriptor.

WGPUSwapChainDescriptor swapChainDesc = {};
swapChainDesc.nextInChain = nullptr;
swapChainDesc.width = 640;
swapChainDesc.height = 480;

Warning: As you might guess, when the window is resized we will be responsible for creating a new swap chain. For now, do not try to resize the window; you can add glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE); before creating the window to tell GLFW to disable resizing.

For the Swap Chain to allocate textures, we also need to specify their format. A format is a combination of a number of channels (a subset of red, green, blue, alpha), a size per channel (8, 16 or 32 bits), a channel type (floating point, integer, signed or unsigned), a compression scheme, a normalization mode, and so on.
All the available combinations are listed in the WGPUTextureFormat enum, but since our swap chain targets an existing surface, we can simply use whatever format the surface uses:
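To build some intuition for what "channels × bits per channel" means for memory, here is a small helper computing the uncompressed size of a texture. This is a sketch, not part of the WebGPU API; `bytesPerPixel` and `textureByteSize` are made-up names, and real drivers may add row padding/alignment on top of this:

```cpp
#include <cstdint>

// Bytes per pixel from channel count and bits per channel,
// e.g. BGRA8Unorm has 4 channels of 8 bits each, i.e. 4 bytes per pixel.
constexpr uint32_t bytesPerPixel(uint32_t channels, uint32_t bitsPerChannel) {
    return channels * bitsPerChannel / 8;
}

// Total size of one uncompressed texture, ignoring row padding/alignment.
constexpr uint64_t textureByteSize(uint32_t w, uint32_t h,
                                   uint32_t channels, uint32_t bits) {
    return uint64_t(w) * h * bytesPerPixel(channels, bits);
}
```

For our 640x480 BGRA8Unorm swap chain texture this gives 640 * 480 * 4 bytes, roughly 1.2 MB per off-screen texture.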

WGPUTextureFormat swapChainFormat = wgpuSurfaceGetPreferredFormat(surface, adapter);
swapChainDesc.format = swapChainFormat;

Dawn: wgpuSurfaceGetPreferredFormat is not implemented yet in the Dawn implementation of WebGPU. In practice, the only texture format it supports is WGPUTextureFormat_BGRA8Unorm.

Textures are allocated for a specific usage, which determines how the GPU organizes their memory. In our case, we use the Swap Chain textures as the target of a Render Pass, so they need the RenderAttachment usage flag.

swapChainDesc.usage = WGPUTextureUsage_RenderAttachment;

Finally, we need to specify how the textures from the waiting queue are presented each frame. The options are:

  • Immediate: no off-screen texture is used; the render process draws directly on the surface. This may lead to artifacts (known as tearing) but has zero latency.
  • Mailbox: there is only one slot in the waiting queue; when a new frame is rendered, it replaces the texture currently waiting in the queue (which may thus be dropped without ever being presented).
  • Fifo: "first in, first out" — textures are presented in the order they were queued, so a rather old frame may be shown.

We test with the FIFO mode:
swapChainDesc.presentMode = WGPUPresentMode_Fifo;
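The difference between Fifo and Mailbox can be modeled with a toy pending-frame queue. This illustrates the frame-dropping behavior only; `PendingQueue` and its members are made-up names, not actual driver logic:

```cpp
#include <deque>

// Toy model of the queue of frames rendered but not yet displayed.
enum class Mode { Fifo, Mailbox };

struct PendingQueue {
    Mode mode;
    std::deque<int> pending;

    void pushRendered(int frame) {
        if (mode == Mode::Mailbox && !pending.empty()) {
            // Mailbox: the newest frame replaces the one still waiting,
            // so frames can be dropped without ever being shown.
            pending.back() = frame;
        } else {
            // Fifo: every rendered frame is queued and shown in order.
            pending.push_back(frame);
        }
    }

    // The display takes the oldest pending frame (-1 if none).
    int display() {
        if (pending.empty()) return -1;
        int f = pending.front();
        pending.pop_front();
        return f;
    }
};
```

With Fifo, frames 1 and 2 are both displayed in order; with Mailbox, frame 1 is overwritten by frame 2 before the display picks it up.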

We can now create the Swap Chain:

WGPUSwapChain swapChain = wgpuDeviceCreateSwapChain(device, surface, &swapChainDesc);
std::cout << "Swapchain: " << swapChain << std::endl;

At the end of the program, we release it as usual:

wgpuSwapChainRelease(swapChain);

Warning: If you get the error Uncaptured device error: type 3 (Device(OutOfMemory)) when creating the swap chain, check that you set the GLFW_CLIENT_API window hint to GLFW_NO_API when creating the window.

Texture View

The logic executed in the main loop when using the Swap Chain looks like this:

while (!glfwWindowShouldClose(window)) {
    glfwPollEvents();
    {{Get target texture view}}
    {{Draw things}}
    {{Destroy texture view}}
    {{Present swap chain}}
}
The first step, getting the target texture view, is done as follows:

WGPUTextureView nextTexture = wgpuSwapChainGetCurrentTextureView(swapChain);
std::cout << "nextTexture: " << nextTexture << std::endl;

This call returns a Texture View, which restricts access to the actual texture object allocated by the Swap Chain. This way, the Swap Chain can use whatever internal organization it wants, while exposing a view with the dimensions and format we expect.

Getting the texture view may fail, in particular after the window has been resized, so we must check it before trying to use it:

if (!nextTexture) {
    std::cerr << "Cannot acquire next swap chain texture" << std::endl;
    break;
}

The texture view is only used for one frame, so we need to release it ourselves:

wgpuTextureViewRelease(nextTexture);

At the very end of the loop body, once the texture has been filled in and its view released, we can tell the Swap Chain to present the next frame. Exactly which texture this is depends on the presentMode set earlier.

wgpuSwapChainPresent(swapChain);

Render Pass

So far, we can acquire a texture and present things in the window. Like any other GPU operation, to actually draw something we need to encode commands with the encoder introduced earlier.
As before, we create the encoder and then submit the commands, but in between we add an instruction that clears the screen.

{{Create Command Encoder}}
{{Encode Render Pass}}
{{Finish encoding and submit}}

Most of the encoding functions declared in webgpu.h start with wgpuCommandEncoder and are typically used for buffer or texture copies. Two of them are special: wgpuCommandEncoderBeginComputePass and wgpuCommandEncoderBeginRenderPass. These return specialized encoder objects, WGPUComputePassEncoder and WGPURenderPassEncoder, which give access to general-purpose computing (GPGPU) capabilities and rendering capabilities, respectively.
In our example, we create a render pass:

WGPURenderPassDescriptor renderPassDesc = {};
{{Describe Render Pass}}
WGPURenderPassEncoder renderPass = wgpuCommandEncoderBeginRenderPass(encoder, &renderPassDesc);
wgpuRenderPassEncoderEnd(renderPass);

Here we end the render pass immediately, without issuing any other command, because we rely on a built-in mechanism of the render pass to clear the screen; this mechanism only needs to be configured in the render pass descriptor.

Color attachment

A render pass uses the GPU's drawing circuitry to render into one or more textures, so we need to tell it which textures to target; these are the attachments of the render pass.
The number of attachments is variable, so the descriptor exposes them through two fields: colorAttachmentCount for the count and colorAttachments for the address of the attachment array. In our example we have only one attachment, so we simply pass the address of a single WGPURenderPassColorAttachment variable.

WGPURenderPassColorAttachment renderPassColorAttachment = {};
{{Set up the attachment}}

renderPassDesc.colorAttachmentCount = 1;
renderPassDesc.colorAttachments = &renderPassColorAttachment;
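The count-plus-pointer pair used here is a common C API pattern. A minimal sketch (with hypothetical stand-in structs, not the real WebGPU types) shows how it generalizes to several attachments stored in a std::vector:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for WGPURenderPassColorAttachment (illustrative only).
struct ColorAttachment { int textureId; };

// Stand-in for the count + pointer pair of WGPURenderPassDescriptor.
struct PassDesc {
    size_t colorAttachmentCount = 0;
    const ColorAttachment* colorAttachments = nullptr;
};

// With several attachments, a contiguous std::vector works the same way:
// the descriptor stores the element count and the address of the first one.
inline PassDesc describe(const std::vector<ColorAttachment>& atts) {
    return PassDesc{ atts.size(), atts.data() };
}
```

One caveat: the descriptor only stores a pointer, so the array (or vector) must stay alive until the descriptor has been consumed, e.g. until wgpuCommandEncoderBeginRenderPass returns.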

The first thing to set on the attachment is the texture view to draw into. In our example, this is the view returned by the Swap Chain, because we want to draw directly on screen, but in more advanced pipelines it is common to draw into intermediate textures, which then feed e.g. post-process passes.

renderPassColorAttachment.view = nextTexture;

Next comes the resolveTarget. It is only used with multi-sampling, which we will cover later, so for now we simply set it to null.

renderPassColorAttachment.resolveTarget = nullptr;

The loadOp setting indicates the load operation performed on the view before the render pass executes. It can either read the existing content of the view or reset it to a default uniform color, namely the clear value. When it does not matter, use WGPULoadOp_Clear, as it is likely more efficient.
The storeOp setting indicates the operation performed on the view after the render pass executes. The result can either be stored or discarded (the latter only makes sense if the render pass has other side effects).

The clearValue is the color used to clear the screen; feel free to put any value you like in it! The four values are the red, green, blue and alpha channels, each ranging from 0.0 to 1.0.

renderPassColorAttachment.loadOp = WGPULoadOp_Clear;
renderPassColorAttachment.storeOp = WGPUStoreOp_Store;
renderPassColorAttachment.clearValue = WGPUColor{ 0.9, 0.1, 0.2, 1.0 };
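For a Unorm format such as BGRA8Unorm, each normalized channel value in [0.0, 1.0] is eventually quantized to an 8-bit integer. A small sketch of that mapping (illustrative only; the driver does this internally, and `toUnorm8` is a made-up name):

```cpp
#include <cmath>
#include <cstdint>

// Map a normalized clear value in [0.0, 1.0] to one 8-bit channel of a
// *Unorm format, clamping out-of-range inputs and rounding to nearest.
inline uint8_t toUnorm8(double v) {
    if (v < 0.0) v = 0.0;
    if (v > 1.0) v = 1.0;
    return static_cast<uint8_t>(std::lround(v * 255.0));
}
```

So the clear color { 0.9, 0.1, 0.2, 1.0 } above ends up as roughly the bytes 230, 26, 51 and 255 in the texture's channels.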

Miscellaneous

There is also a special type of attachment, the depth and stencil attachment (a single attachment that potentially contains two channels). We will come back to it later; we do not use it for now, so we set it to null:

renderPassDesc.depthStencilAttachment = nullptr;

When measuring the performance of a render pass, CPU-side timing functions cannot be used, because the commands do not execute synchronously. Instead, a render pass can receive a set of timestamp queries. We do not use them in this example.

renderPassDesc.timestampWriteCount = 0;
renderPassDesc.timestampWrites = nullptr;

Finally, as before, we set the extension pointer to null; this pointer is the extension mechanism reserved by the WebGPU standard.

renderPassDesc.nextInChain = nullptr;

Full test code and result

#include <iostream>
#include <GLFW/glfw3.h> // Add libs after the setting in CMakeLists.txt
#include <webgpu/webgpu.h>
#include <cassert> //Used for debugging
#include <vector>
#include <glfw3webgpu.h>
// #include <windows.h> // including this causes a duplicate definition of the APIENTRY macro
/**
 * Utility function to get a WebGPU adapter, so that
 *     WGPUAdapter adapter = requestAdapter(options);
 * is roughly equivalent to
 *     const adapter = await navigator.gpu.requestAdapter(options);
 */
WGPUAdapter requestAdapter(WGPUInstance instance, WGPURequestAdapterOptions const * options) {
    // A simple structure holding the local information shared with the
    // onAdapterRequestEnded callback.
    struct UserData {
        WGPUAdapter adapter = nullptr;
        bool requestEnded = false;
    };
    UserData userData;

    // Callback called by wgpuInstanceRequestAdapter when the request returns
    // This is a C++ lambda function, but could be any function defined in the
    // global scope. It must be non-capturing (the brackets [] are empty) so
    // that it behaves like a regular C function pointer, which is what
    // wgpuInstanceRequestAdapter expects (WebGPU being a C API). The workaround
    // is to convey what we want to capture through the pUserData pointer,
    // provided as the last argument of wgpuInstanceRequestAdapter and received
    // by the callback as its last argument.
    auto onAdapterRequestEnded = [](WGPURequestAdapterStatus status, WGPUAdapter adapter, char const * message, void * pUserData) {
        UserData& userData = *reinterpret_cast<UserData*>(pUserData);
        if (status == WGPURequestAdapterStatus_Success) {
            userData.adapter = adapter;
        } else {
            std::cout << "Could not get WebGPU adapter: " << message << std::endl;
        }
        userData.requestEnded = true;
    };

    // Call to the WebGPU request adapter procedure
    wgpuInstanceRequestAdapter(
        instance /* equivalent of navigator.gpu */,
        options,
        onAdapterRequestEnded,
        (void*)&userData
    );

    // In theory we should wait until onAdapterReady has been called, which
    // could take some time (what the 'await' keyword does in the JavaScript
    // code). In practice, we know that when the wgpuInstanceRequestAdapter()
    // function returns its callback has been called.
    assert(userData.requestEnded);

    return userData.adapter;
}

/**
 * Utility function to get a WebGPU device, so that
 *     WGPUAdapter device = requestDevice(adapter, options);
 * is roughly equivalent to
 *     const device = await adapter.requestDevice(descriptor);
 * It is very similar to requestAdapter
 */
WGPUDevice requestDevice(WGPUAdapter adapter, WGPUDeviceDescriptor const * descriptor) {
    struct UserData {
        WGPUDevice device = nullptr;
        bool requestEnded = false;
    };
    UserData userData;

    auto onDeviceRequestEnded = [](WGPURequestDeviceStatus status, WGPUDevice device, char const * message, void * pUserData) {
        UserData& userData = *reinterpret_cast<UserData*>(pUserData);
        if (status == WGPURequestDeviceStatus_Success) {
            userData.device = device;
        } else {
            std::cout << "Could not get WebGPU device: " << message << std::endl;
        }
        userData.requestEnded = true;
    };

    wgpuAdapterRequestDevice(
        adapter,
        descriptor,
        onDeviceRequestEnded,
        (void*)&userData
    );

    assert(userData.requestEnded);

    return userData.device;
}

int main (int, char**) {
    std::cout << "Hello, My WebGPU!" << std::endl;

    // First, all GLFW calls must happen between its initialization and termination
    if (!glfwInit()) {
        std::cerr << "Could not initialize GLFW!" << std::endl;
        return 1;
    }else{
        // Create the window
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); // NEW
        GLFWwindow* window = glfwCreateWindow(640, 480, "Learn WebGPU Surface", NULL, NULL);
        //...
        //...
        if (!window) {
            std::cerr << "Could not open window!" << std::endl;
            glfwTerminate();
            return 1;
        }else{
                WGPUInstanceDescriptor desc = {};
                desc.nextInChain = nullptr; // chained-struct field, reserved for future custom extensions
                // 2. We create the instance using this descriptor
                WGPUInstance instance = wgpuCreateInstance(&desc);
                // 3. We can check whether there is actually an instance created
                if (!instance) {
                    std::cerr << "Could not initialize WebGPU!" << std::endl;
                    return 1;
                }
                // 4. Display the object (WGPUInstance is a simple pointer, it may be
                // copied around without worrying about its size).
                std::cout << "WGPU instance: " << instance << std::endl;

                WGPUSurface surface = glfwGetWGPUSurface(instance, window);
                //glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API); // NEW
                //GLFWwindow* window = glfwCreateWindow(640, 480, "Learn WebGPU Surface", NULL, NULL);

                // 5. Get the adapter via instance
                std::cout << "Requesting adapter..." << std::endl;
                WGPURequestAdapterOptions adapterOpts = {};
                WGPUAdapter adapter = requestAdapter(instance, &adapterOpts);
                std::cout << "Got adapter: " << adapter << std::endl;
                
                std::vector<WGPUFeatureName> features;
                // Call the function a first time with a null return address, just to get
                // the entry count.
                size_t featureCount = wgpuAdapterEnumerateFeatures(adapter, nullptr);

                // Allocate memory (could be a new, or a malloc() if this were a C program)
                features.resize(featureCount);

                // Call the function a second time, with a non-null return address
                wgpuAdapterEnumerateFeatures(adapter, features.data());

                std::cout << "Adapter features:" << std::endl;
                for (auto f : features) {
                    std::cout << " - " << f << std::endl;
                }

                std::cout << "Requesting device..." << std::endl;

                WGPUDeviceDescriptor deviceDesc = {};
                deviceDesc.nextInChain = nullptr;
                deviceDesc.label = "My Device"; // anything works here, that's your call
                deviceDesc.requiredFeaturesCount = 0; // we do not require any specific feature
                deviceDesc.requiredLimits = nullptr; // we do not require any specific limit
                deviceDesc.defaultQueue.nextInChain = nullptr;
                deviceDesc.defaultQueue.label = "The default queue";
                WGPUDevice device = requestDevice(adapter, &deviceDesc);

                std::cout << "Got device: " << device << std::endl;
                auto onDeviceError = [](WGPUErrorType type, char const* message, void* /* pUserData */) {
                    std::cout << "Uncaptured device error: type " << type;
                    if (message) std::cout << " (" << message << ")";
                    std::cout << std::endl;
                };
                wgpuDeviceSetUncapturedErrorCallback(device, onDeviceError, nullptr /* pUserData */);

                auto onDeviceLostError = [](WGPUDeviceLostReason reason, char const* message, void* userdata) {
                    std::cout << "Uncaptured device lost error: reason " << reason;
                    if (message) std::cout << " (" << message << ")";
                    std::cout << std::endl;
                };
                wgpuDeviceSetDeviceLostCallback(device, onDeviceLostError, nullptr);

                // Create the Swap Chain descriptor
                WGPUSwapChainDescriptor swapChainDesc = {};
                swapChainDesc.nextInChain = nullptr;
                swapChainDesc.width = 640;
                swapChainDesc.height = 480;
                // Configure the Swap Chain descriptor
                //WGPUTextureFormat swapChainFormat = wgpuSurfaceGetPreferredFormat(surface, adapter); // not implemented in Dawn
                swapChainDesc.format = WGPUTextureFormat_BGRA8Unorm; // specify the format directly
                swapChainDesc.usage = WGPUTextureUsage_RenderAttachment; // used as the target of the render pass
                swapChainDesc.presentMode = WGPUPresentMode_Fifo;
                // Create the Swap Chain
                WGPUSwapChain swapChain = wgpuDeviceCreateSwapChain(device, surface, &swapChainDesc);
                std::cout << "Swapchain: " << swapChain << std::endl;

                // Get the command queue
                WGPUQueue queue = wgpuDeviceGetQueue(device);

            while (!glfwWindowShouldClose(window)) {
                // Check whether the user clicked on the close button (and any other
                // mouse/key event, which we don't use so far)
                glfwPollEvents();
                /*
                    {{1.Get target texture view}}
                    {{2.Draw things}}
                    {{3.Destroy texture view}}
                    {{4.Present swap chain}}
                */
                WGPUTextureView nextTexture = wgpuSwapChainGetCurrentTextureView(swapChain); //{ {1.Get target texture view}}
                //std::cout << "nextTexture: " << nextTexture << std::endl;
                if (!nextTexture) {
                    std::cerr << "Cannot acquire next swap chain texture" << std::endl;
                    break;
                }
                //{ {2.Draw things}}
                    /*
                    {{1.Create Command Encoder}}
                    {{2.Encode Render Pass}}
                    {{3.Finish encoding and submit}}
                    */
                // Drawing anything requires encoding commands
                // Create the command encoder
                WGPUCommandEncoderDescriptor encoderDesc = {};
                encoderDesc.nextInChain = nullptr;
                encoderDesc.label = "My command encoder";
                WGPUCommandEncoder encoder = wgpuDeviceCreateCommandEncoder(device, &encoderDesc);

                WGPURenderPassDescriptor renderPassDesc = {};

                WGPURenderPassColorAttachment renderPassColorAttachment = {};
                renderPassColorAttachment.view = nextTexture;
                renderPassColorAttachment.resolveTarget = nullptr;
                renderPassColorAttachment.loadOp = WGPULoadOp_Clear;
                renderPassColorAttachment.storeOp = WGPUStoreOp_Store;
                renderPassColorAttachment.clearValue = WGPUColor{ 0.7, 0.1, 0.2, 0.7 };

                renderPassDesc.colorAttachmentCount = 1;
                renderPassDesc.colorAttachments = &renderPassColorAttachment;
                renderPassDesc.depthStencilAttachment = nullptr;
                renderPassDesc.timestampWriteCount = 0;
                renderPassDesc.timestampWrites = nullptr;
                renderPassDesc.nextInChain = nullptr;
                WGPURenderPassEncoder renderPass = wgpuCommandEncoderBeginRenderPass(encoder, &renderPassDesc);
                wgpuRenderPassEncoderEnd(renderPass);
                //generate command buffer
                WGPUCommandBufferDescriptor cmdBufferDescriptor = {};
                cmdBufferDescriptor.nextInChain = nullptr;
                cmdBufferDescriptor.label = "Command buffer";
                WGPUCommandBuffer command= wgpuCommandEncoderFinish(encoder, &cmdBufferDescriptor);
                wgpuQueueSubmit(queue, 1, &command);
                wgpuTextureViewRelease(nextTexture);//{{3.Destroy texture view}}
                wgpuCommandEncoderRelease(encoder);
                wgpuCommandBufferRelease(command);
                wgpuSwapChainPresent(swapChain);//{{4.Present swap chain}}

            }
            wgpuSwapChainRelease(swapChain); // released at the end as usual
            wgpuSurfaceRelease(surface);
            wgpuAdapterRelease(adapter); // Never forget to release it.
            wgpuDeviceRelease(device);
        }
        
        glfwDestroyWindow(window); // after everything else, destroy the window
    }
    glfwTerminate();//Termination for GLFW
    return 0;
}

(Result screenshot: the window filled with the chosen clear color.)
NOTE: While testing, I originally wanted to reuse the same command buffer each frame, so I created a global command buffer and encoder and kept writing commands into them, but with that approach the screen flickered while clearing.
So when encoding commands, it is best to use a fresh encoder and command buffer for every frame.
