I want to talk about WebGPU

WebGPU is the new WebGL. That means it is the new way to draw 3D in web browsers. It is, in my opinion, very good actually. It is so good I think it will also replace Canvas and become the new way to draw 2D in web browsers. In fact it is so good I think it will replace Vulkan as well as normal OpenGL, and become just the standard way to draw, in any kind of software, from any programming language. This is pretty exciting to me. WebGPU is a little bit irritating— but only a little bit, and it is massively less irritating than any of the things it replaces.

WebGPU goes live… today, actually. Chrome 113 shipped in the final minutes of me finishing this post and should be available in the "About Chrome" dialog right this second. If you click here, and you see a rainbow triangle, your web browser has WebGPU. By the end of the year WebGPU will be everywhere, in every browser. (All of this refers to desktop computers. On phones, it won't be in Chrome until later this year; and Apple I don't know. Maybe one additional year after that.)
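If you'd rather check from code than squint at a triangle, the probe is a single property. A minimal sketch (safe to run anywhere; outside a browser it just reports false):

```javascript
// navigator.gpu is the WebGPU entry point. In a non-WebGPU browser,
// or outside a browser entirely, it simply doesn't exist.
const hasWebGPU = typeof navigator !== "undefined" && !!navigator.gpu;
console.log(hasWebGPU ? "WebGPU available" : "No WebGPU here");
```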

If you are not a programmer, this probably doesn't affect you. It might get us closer to a world where you can just play games in your web browser as a normal thing like you used to be able to with Flash. But probably not, because WebGL wasn't the only problem there.

If you are a programmer, let me tell you what I think this means for you.

Sections below:

  • A history of graphics APIs (You can skip this)
  • What's it like?
  • How do I use it?
    • Typescript / NPM world
    • I don't know what a NPM is, I just wanna write CSS and my stupid little script tags
    • Rust / C++ / Posthuman Intersecting Tetrahedron

A history of graphics APIs (You can skip this)

1991
Back in the dawn of time there were two ways to make 3D on a computer: You did a bunch of math; or you bought an SGI machine. SGI were the first people who were designing circuitry to do the rendering parts of a 3D engine for you. They had this C API for describing your 3D models to the hardware. At some point it became clear that people were going to start making plugin cards for regular desktop computers that could do the same acceleration as SGI's big UNIX boxes, so SGI released a public version of their API so it would be possible to write code that would work both on the UNIX boxes and on the hypothetical future PC cards. This was OpenGL. `color()` and `rectf()` in IRIS GL became `glColor()` and `glRectf()` in OpenGL.

1995
When the PC 3D cards actually became a real thing you could buy, things got real messy for a bit. Instead of signing on with OpenGL, Microsoft had decided to develop their own thing (Direct3D) and some of the 3D card vendors also developed their own API standards, so for a while certain games were only accelerated on certain graphics cards and people writing games had to write their 3D pipelines like four times, once as a software renderer and a separate one for each card type they wanted to support. My perception is it was Direct3D, not OpenGL, which eventually managed to wrangle all of this into a standard, which really sucked if you were using a non-Microsoft OS at the time. It really seemed like DirectX (and the "X Box" standalone console it spawned) were an attempt to lock game companies into Microsoft OSes by getting them to wire Microsoft exclusivity into their code at the lowest level, and for a while it really worked.

2000
It is the case, though, that it wasn't very long into the Direct3D lifecycle before you started hearing from Direct3D users that it was much, much nicer to use than OpenGL, and OpenGL quickly got to a point where it was literally years behind Direct3D in terms of implementing critical early features like shaders, because the Architecture Review Board of card vendors that defined OpenGL would spend forever bickering over details whereas Microsoft could just implement stuff and expect the card vendor to work it out.

Let's talk about shaders. The original OpenGL was a "fixed function renderer", meaning someone had written down the steps in a 3D renderer and it performed those steps in order.

Modified Khronos Group image

Each box in the "pipeline" had some dials on the side so you could configure how each feature behaved, but you were pretty much limited to the features the card vendor gave you. If you had shadows, or fog, it was because OpenGL or an extension had exposed a feature for drawing shadows or fog. What if you want some other feature the ARB didn't think of, or want to do shadows or fog in a unique way that makes your game look different from other games? Sucks to be you. This was obnoxious, so eventually "programmable shaders" were introduced. Notice some of the boxes above are yellow? Those boxes became replaceable. The (1) boxes got collapsed into the "Vertex Shader", and the (2) boxes became the "Fragment Shader"². The software would upload a computer program in a simple C-like language (upload the actual text of the program, you weren't expected to compile it like a normal program)³ into the video driver at runtime, and the driver would convert that into configurations of ALUs (or whatever the card was actually doing on the inside) and your program would become that chunk of the pipeline. This opened things up a lot, but more importantly it set card design on a kinda strange path. Suddenly video cards weren't specialized rendering tools anymore. They ran software.

2004
Pretty shortly after this was another change. Handheld devices were starting to get to the point it made sense to do 3D rendering on them (or at least, to do 2D compositing using 3D video card hardware like desktop machines had started doing). DirectX was never in the running for these applications. But implementing OpenGL on mid-00s mobile silicon was rough. OpenGL was kind of… large, at this point. It had all these leftover functions from the SGI IRIX era, and then it had this new shiny OpenGL 2.0 way of doing things with the shaders and everything and not only did this mean you basically had two unrelated APIs sitting side by side in the same API, but also a lot of the OpenGL 1.x features were traps. The spec said that every video card had to support every OpenGL feature, but it didn't say it had to support them in Hardware, so there were certain early-90s features that 00s card vendors had decided nobody really uses, and so if you used those features the driver would render the screen, copy the entire screen into regular RAM, perform the feature on the CPU and then copy the results back to the video card. Accidentally activating one of these trap features could easily move you from 60 FPS to 1 FPS. All this legacy baggage promised a lot of extra work for the manufacturers of the new mobile GPUs, so to make it easier Khronos (which is what the ARB had become by this point) introduced an OpenGL "ES", which stripped out everything except the features you absolutely needed. Instead of being able to call a function for each polygon or each vertex you had to use the newer API of giving OpenGL a list of coordinates in a block in memory⁴, you had to use either the fixed function or the shader pipeline with no mixing (depending on whether you were using ES 1.x or ES 2.x), etc. This partially made things simpler for programmers, and partially prompted some annoying rewrites. 
But as with shaders, what's most important is the long-term strange-ing this change presaged: Starting at this point, the decisions of Khronos increasingly were driven entirely by the needs and wants of hardware manufacturers, not programmers.

2008
With OpenGL ES devices in the world, OpenGL started to graduate from being "that other graphics API that exists, I guess" and actually take off. The iPhone, which used OpenGL ES, gave a solid mass-market reason to learn and use OpenGL. Nintendo consoles started to use OpenGL or something like it. OpenGL had more or less caught up with DirectX in features, especially if you were willing to use extensions. Browser vendors, in that spurt of weird hubris that gave us the original WebAudio API, adapted OpenGL ES into JavaScript as "WebGL", which makes no sense because as mentioned OpenGL ES was all about packing bytes into arrays full of geometry and JavaScript doesn't have direct memory access or even integers, but they added packed binary arrays to the language and did it anyway. So with all this activity, sounds like things are going great, right?

2013
No! Everything was terrible! As it matured, OpenGL fractured into a variety of slightly different standards with varying degrees of cross-compatibility. OpenGL ES 2.0 was the same as OpenGL 3.3, somehow. WebGL 2.0 is very almost OpenGL ES 3.0 but not quite. Every attempt to resolve OpenGL's remaining early mistakes seemed to wind up duplicating the entire API as new functions with slightly different names and slightly different signatures. A big usability issue with OpenGL was even after the 2.0 rework it had a lot of shared global state, but the add-on systems that were supposed to resolve this (VAOs and VBOs) only wound up being even more global state you had to keep track of. A big trend in the 10s was "GPGPU" (General Purpose GPU); programmers started to realize that graphics cards worked as well as, but were slightly easier to program than, a CPU's vector units, so they just started accelerating random non-graphics programs by doing horrible hacks like stuffing them in pixel shaders and reading back a texture containing an encoded result. Before finally resolving on compute shaders (in other words: before giving up and copying DirectX's solution), Khronos's original steps toward actually catering to this were either poorly adopted (OpenCL) or just plain bad ideas (geometry shaders). It all built up. Just like in the pre-ES era, OpenGL had basically become several unrelated APIs sitting in the same header file, some of which only worked on some machines. Worse, nothing worked quite as well as you wanted it to; different video card vendors botched the complexity, implementing features slightly differently (especially tragically, implementing slightly different versions of the shader language) or just badly, especially in the infamously bad Windows OpenGL drivers.

The way out came from, this is how I see it anyway, a short-lived idea called "AZDO", which technically consisted of a single GDC talk⁵, but the idea the talk put a name to is the underlying idea that spawned Vulkan, DirectX 12, and Metal. "Approaching Zero Driver Overhead". Here is the idea: By 2015 video cards had pretty much standardized on a particular way of working and that way was known and that way wasn't expected to change for ten years at least. Graphics APIs were originally designed around the functionality they exposed, but that functionality hadn't been a 1:1 map to how GPUs look on the inside for ten years at least. Drivers had become complex beasts that rather than just doing what you told them tried to intuit what you were trying to do and then do that in the most optimized way, but often they guessed wrong, leaving software authors in the ugly position of trying to intuit what the driver would intuit in any one scenario. AZDO was about threading your way through the needle of the graphics API in such a way your function calls happened to align precisely with what the hardware was actually doing, such that the driver had nothing to do and stuff just happened.

2016
Or we could just design the graphics API to be AZDO from the start. That's Vulkan. (And DirectX 12, and Metal.) The modern generation of graphics APIs are about basically throwing out the driver, or rather, letting your program be the driver. The API primitives map directly to GPU internal functionality⁶, and the GPU does what you ask without second guessing. This gives you an incredible amount of power and control. Remember that "pipeline" diagram up top? The modern APIs let you define "pipeline objects"; while graphics shaders let you replace boxes within the diagram, and compute shaders let you replace the diagram with one big shader program, pipeline objects let you draw your own diagram. You decide what blocks of GPU memory are the sources, and which are the destinations, and how they are interpreted, and what the GPU does with them, and what shaders get called. All the old sources of confusion get resolved. State is bound up in neatly defined objects instead of being global. Card vendors always designed their shader compilers different, so we'll replace the textual shader language with a bytecode format that's unambiguous to implement and easier to write compilers for. Vulkan goes so far as to allow⁷ you to write your own allocator/deallocator for GPU memory.
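To make "pipeline objects" concrete, here's what the idea looks like in WebGPU's flavor of it (jumping ahead a bit): the whole diagram is just a block of plain data. This is a sketch; the entry-point names and vertex layout are invented for illustration, and the shader module argument stands in for a real compiled module.

```javascript
// A pipeline object is "your own diagram": which shaders run, what the
// vertex data looks like, what format the output target has. Nothing here
// is hidden global state; it's all spelled out in one descriptor.
function makePipelineDescriptor(shaderModule) {
  return {
    layout: "auto",
    vertex: {
      module: shaderModule,
      entryPoint: "vs_main",
      buffers: [{
        arrayStride: 8,  // two 32-bit floats per vertex
        attributes: [{ shaderLocation: 0, offset: 0, format: "float32x2" }],
      }],
    },
    fragment: {
      module: shaderModule,
      entryPoint: "fs_main",
      targets: [{ format: "bgra8unorm" }],
    },
    primitive: { topology: "triangle-list" },
  };
}
// In a real app: device.createRenderPipeline(makePipelineDescriptor(module));
```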

So this is all very cool. There is only one problem, which is that with all this fine-grained complexity, Vulkan winds up being basically impossible for humans to write. Actually, that's not really fair. DX12 and Metal offer more or less the same degree of fine-grained complexity, and by all accounts they're not so bad to write. The actual problem is that Vulkan is not designed for humans to write. Literally. Khronos does not want you to write Vulkan, or rather, they don't want you to write it directly. I was in the room when Vulkan was announced, across the street from GDC in 2015, and what they explained to our faces was that game developers were increasingly not actually targeting the gaming API itself, but rather targeting high-level middleware, Unity or Unreal or whatever, and so Vulkan was an API designed for writing middleware. The middleware developers were also in the room at the time, the Unity and Epic and Valve guys. They were beaming as the Khronos guy explained this. Their lives were about to get much, much easier.

My life was about to get harder. Vulkan is weird— but it's weird in a way that makes a certain sort of horrifying machine sense. Every Vulkan call involves passing in one or two huge structures which are themselves a forest of other huge structures, and every structure and sub-structure begins with a little protocol header explaining what it is and how big it is. Before you allocate memory you have to fill out a structure to get back a structure that tells you what structure you're supposed to structure your memory allocation request in. None of it makes any sense— unless you've designed a programming language before, in which case everything you're reading jumps out to you as "oh, this is contrived like this because it's designed to be easy to bind to from languages with weird memory-management techniques" "this is a way of designing a forward-compatible ABI while making no assumptions about programming language" etc. The docs are written in a sort of alien English that fosters no understanding— but it's also written exactly the way a hardware implementor would want in order to remove all ambiguity about what a function call does. In short, Vulkan is not for you. It is a byzantine contract between hardware manufacturers and middleware providers, and people like… well, me, are just not part of the transaction.

Khronos did not forget about you and me. They just made a judgement, and this actually does make a sort of sense, that they were never going to design the perfectly ergonomic developer API anyway, so it would be better to not even try and instead make it as easy as possible for the perfectly ergonomic API to be written on top, as a library. Khronos thought within a few years of Vulkan⁸ being released there would be a bunch of high-quality open source wrapper libraries that people would use instead of Vulkan directly. These libraries basically did not materialize. It turns out writing software is work and open source projects do not materialize just because people would like them to⁹.

2019
This leads us to the other problem, the one Vulkan developed after the fact. The Apple problem. The theory on Vulkan was it would change the balance of power where Microsoft continually released a high-quality cutting-edge graphics API and OpenGL was the sloppy open-source catch up. Instead, the GPU vendors themselves would provide the API, and Vulkan would be the universal standard while DirectX would be reduced to a platform-specific oddity. But then Apple said no. Apple (who had already launched their own thing, Metal) announced not only would they never support Vulkan, they would not support OpenGL, anymore¹⁰. From my perspective, this is just DirectX again; the dominant OS vendor of our era, as Microsoft was in the 90s, is pushing proprietary graphics tech to foster developer lock-in. But from Apple's perspective it probably looks like— well, the way DirectX probably looked from Microsoft's perspective in the 90s. They're ignoring the jagged-metal thing from the hardware vendors and shipping something their developers will actually want to use.

With Apple out, the scene looked different. Suddenly there was a next-gen API for Windows, a next-gen API for Mac/iPhone, and a next-gen API for Linux/Android. Except Linux has a severe driver problem with Vulkan and a lot of the Linux devices I've been checking out don't support Vulkan even now after it's been out seven years. So really the only platform where Vulkan runs natively is Android. This isn't that bad. Vulkan does work on Windows and there are mostly no problems, though people who have the resources to write a DX12 backend seem to prefer doing so. The entire point of these APIs is that they're flyweight things resting very lightly on top of the hardware layer, which means they aren't really that different, to the extent that a Vulkan-on-Metal emulation layer named MoltenVK exists and reportedly adds almost no overhead. But if you're an open source kind of person who doesn't have the resources to pay three separate people to write vaguely-similar platform backends, this isn't great. Your code can technically run on all platforms, but you're writing in the least pleasant of the three APIs to work with and you get the advantage of using a true-native API on neither of the two major platforms. You might even have an easier time just writing DX12 and Metal and forgetting Vulkan (and Android) altogether. In short, Vulkan solves all of OpenGL's problems at the cost of making something that no one wants to use and no one has a reason to use.

The way out turned out to be something called ANGLE. Let me back up a bit.

2010, again
WebGL was designed around OpenGL ES. But it was never exactly the same as OpenGL ES, and also technically OpenGL ES never really ran on desktops, and also regular OpenGL on desktops had Problems. So the browser people eventually realized that if you wanted to ship an OpenGL compatibility layer on Windows, it was actually easier to write an OpenGL emulator in DirectX than it was to use OpenGL directly and have to negotiate the various incompatibilities between OpenGL implementations of different video card drivers. The browser people also realized that if slight compatibility differences between different OpenGL drivers was hell, slight incompatibility differences between four different browsers times three OSes times different graphics card drivers would be the worst thing ever. From what I can only assume was desperation, the most successful example I've ever seen of true cross-company open source collaboration emerged: ANGLE, a BSD-licensed OpenGL emulator originally written by Google but with honest-to-goodness contributions from both Firefox and Apple, which is used for WebGL support in literally every web browser.

But nobody actually wants to use WebGL, right? We want a "modern" API, one of those AZDO thingies. So a W3C working group sat down to make Web Vulkan, which they named WebGPU. I'm not sure my perception of events is to be trusted, but my perception of how this went from afar was that Apple was the most demanding participant in the working group, and also the participant everyone would naturally by this point be most afraid of just spiking the entire endeavor, so reportedly Apple just got absolutely everything they asked for and WebGPU really looks a lot like Metal. But Metal was always reportedly the nicest of the three modern graphics APIs to use, so that's… good? Encouraged by the success with ANGLE (which by this point was starting to see use as a standalone library in non-web apps¹¹), and mindful people would want to use this new API with WebASM, they took the step of defining the standard simultaneously as a JavaScript IDL and a C header file, so non-browser apps could use it as a library.

2023
WebGPU is the child of ANGLE and Metal. WebGPU is the missing open-source "ergonomic layer" for Vulkan. WebGPU is in the web browser, and Microsoft and Apple are on the browser standards committee, so they're "bought in", not only does WebGPU work good-as-native on their platforms but anything WebGPU can do will remain perpetually feasible on their OSes regardless of future developer lock-in efforts. (You don't have to worry about feature drift like we're already seeing with MoltenVK.) WebGPU will be on day one (today) available with perfectly equal compatibility for JavaScript/TypeScript (because it was designed for JavaScript in the first place), for C++ (because the Chrome implementation is in C, and it's open source) and for Rust (because the Firefox implementation is in Rust, and it's open source).

I feel like WebGPU is what I've been waiting for this entire time.


What's it like?

I can't compare to DirectX or Metal, as I've personally used neither. But especially compared to OpenGL and Vulkan, I find WebGPU really refreshing to use. I have tried, really tried, to write Vulkan, and been defeated by the complexity each time. By contrast WebGPU does a good job of adding complexity only when the complexity adds something. There are a lot of different objects to keep track of, especially during initialization (see below), but every object represents some Real Thing that I don't think you could eliminate from the API without taking away a useful ability. (And there is at least the nice property that you can stuff all the complexity into init time and make the process of actually drawing a frame very terse.) WebGPU caters to the kind of person who thinks it might be fun to write their own raymarcher, without requiring every programmer to be the kind of person who thinks it would be fun to write their own implementation of malloc.
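That init-time object parade looks roughly like this. A browser-only sketch (the canvas argument is hypothetical); each await hands back one of those Real Things: an adapter (a physical GPU), a device (your logical connection to it), a context (the canvas surface you'll draw into).

```javascript
// The WebGPU init ladder: adapter -> device -> configured canvas context.
// Every object here represents something real you can't do without.
async function initWebGPU(canvas) {
  if (!navigator.gpu) throw new Error("no WebGPU in this browser");
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("no suitable GPU adapter");
  const device = await adapter.requestDevice();
  const context = canvas.getContext("webgpu");
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });
  return { adapter, device, context };
}
```

Once this runs, per-frame code only ever touches `device` and `context`, which is what makes the actual drawing so terse.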

The Problems

There are three Problems. I will summarize them thusly:

  • Text
  • Lines
  • The Abomination

Text and lines are basically the same problem. WebGPU kind of doesn't… have them. It can draw lines, but they're only really for debugging– single-pixel width and you don't have control over antialiasing. So if you want a "normal looking" line you're going to be doing some complicated stuff with small bespoke meshes and an SDF shader. Similarly with text, you will be getting no assistance– you will be parsing OTF font files yourself and writing your own MSDF shader, or more likely finding a library that does text for you.

This (no lines or text unless you implement it yourself) is a totally normal situation for a low-level graphics API, but it's a little annoying to me because the web browser already has a sophisticated anti-aliased line renderer (the original Canvas API) and the most advanced text renderer in the world. (There is some way to render text into a Canvas API texture and then transfer the Canvas contents into WebGPU as a texture, which should help for some purposes.)
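That Canvas-to-texture trick looks something like this sketch (browser-only; the sizes and font are arbitrary). `copyExternalImageToTexture` really does accept a canvas as a source, and the destination texture needs both COPY_DST and RENDER_ATTACHMENT usage for that call.

```javascript
// Let the 2D canvas do the hard text-rendering work, then hand the pixels
// to WebGPU as a texture you can sample in a shader.
function textToTexture(device, text) {
  const canvas = document.createElement("canvas");
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext("2d");
  ctx.font = "32px sans-serif";
  ctx.fillStyle = "white";
  ctx.fillText(text, 8, 40);

  const texture = device.createTexture({
    size: [canvas.width, canvas.height],
    format: "rgba8unorm",
    usage: GPUTextureUsage.TEXTURE_BINDING |
           GPUTextureUsage.COPY_DST |
           GPUTextureUsage.RENDER_ATTACHMENT,
  });
  device.queue.copyExternalImageToTexture(
    { source: canvas },
    { texture },
    [canvas.width, canvas.height]
  );
  return texture;
}
```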

Then there's WGSL, or as I think of it, The Abomination. You will probably not be as annoyed by this as I am. Basically: One of the benefits of Vulkan is that you aren't required to use a particular shader language. OpenGL uses GLSL, DirectX uses HLSL. Vulkan used a bytecode, called SPIR-V, so you could target it from any shader language you wanted. WebGPU was going to use SPIR-V, but then Apple said no¹². So now WebGPU uses WGSL, a new thing developed just for WebGPU, as its only shader language. As far as shader languages go, it is fine. Maybe it is even good. I'm sure it's better than GLSL. For pure JavaScript users, it's probably objectively an improvement to be able to upload shaders as text files instead of having to compile to bytecode. But gosh, it would have been nice to have that choice! (The "desktop" versions of WebGPU still keep SPIR-V as an option.)


How do I use it?

You have three choices for using WebGPU: Use it in JavaScript in the browser, use it in Rust/C++ in WebASM inside the browser, or use it in Rust/C++ in a standalone app. The Rust/C++ APIs are as close to the JavaScript version as language differences will allow; the in-browser/out-of-browser APIs for Rust and C++ are identical (except for standalone-specific features like SPIR-V). In standalone apps you embed the WebASM components from Chrome or Firefox as a library; your code doesn't need to know if the WebGPU library is a real library or if it's just routing through your calls to the browser.

Regardless of language, the official WebGPU spec document on w3.org is a clear, readable reference guide to the language, suitable for just reading in a way standard specifications sometimes aren't. (I haven't spent as much time looking at the WGSL spec but it seems about the same.) If you get lost while writing WebGPU, I really do recommend checking the spec.

Most of the "work" in WebGPU, other than writing shaders, consists of the construction (when your program/scene first boots) of one or more "pipeline" objects, one per "pass", which describe "what shaders am I running, and what kind of data can get fed into them?"¹³. You can chain pipelines end-to-end within a queue: have a compute pass generate a vertex buffer, have a render pass render into a texture, do a final render pass which renders the computed vertices with the rendered texture.
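The "pipeline" object is really just a big tree of descriptors. Here is a sketch of that tree as a plain TypeScript function; the shader module is left as an opaque parameter (in real code it would come from `device.createShaderModule({ code: wgslSource })`), and the entry point names, vertex layout, and texture format are illustrative assumptions, not fixed by the API.

```typescript
// Sketch of the descriptor tree you hand to device.createRenderPipeline().
// `module` is opaque here so the shape of the tree stays the focus.
function makePipelineDescriptor(module: unknown) {
  return {
    layout: "auto",                        // let WebGPU infer bind group layouts
    vertex: {
      module,
      entryPoint: "vs_main",               // assumed name of the WGSL vertex fn
      buffers: [{
        arrayStride: 4 * 4,                // per-vertex: vec2 position + vec2 uv, f32 each
        attributes: [
          { shaderLocation: 0, offset: 0, format: "float32x2" }, // position
          { shaderLocation: 1, offset: 8, format: "float32x2" }, // uv
        ],
      }],
    },
    fragment: {
      module,
      entryPoint: "fs_main",               // assumed name of the WGSL fragment fn
      targets: [{ format: "bgra8unorm" }], // must match the canvas context format
    },
    primitive: { topology: "triangle-list" },
  };
}
```

You would build this once at init, pass it to `device.createRenderPipeline()`, and reuse the resulting pipeline every frame; the per-frame work is just encoding passes that reference it.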

Here, in diagram form, are all the things you need to create to initially set up WebGPU and then draw a frame. This might look a little overwhelming. Don't worry about it! In practice you're just going to be copying and pasting a big block of boilerplate from some sample code. However at some point you're going to need to go back and change that copypasted boilerplate, and then you'll want to come back and look up what the difference between any of these objects is.

At init: [diagram not reproduced in this text version]
For each frame: [diagram not reproduced in this text version]

Some observations in no particular order:

  • When describing a "mesh" (a 3D model to draw), a "vertex" buffer is the list of points in space, and the "index" is an optional buffer containing the order in which to draw the points. Not sure if you knew that.
  • Right now the "queue" object seems a little pointless because there's only ever one global queue. But someday WebGPU will add threading and then there might be more than one.
  • A command encoder can only be working on one pass at a time; you have to mark one pass as complete before you request the next one. But you can make more than one command encoder and submit them all to the queue at once.
  • Back in OpenGL when you wanted to set a uniform, attribute, or texture on a shader, you did it by name. In WebGPU you have to assign these things numbers in the shader and you address them by number.¹⁴
  • Although textures and buffers are two different things, you can instruct the GPU to just turn a texture into a buffer or vice versa.
  • I do not list "pipeline layout" or "bind group layout" objects above because I honestly don't understand what they do. I've only ever set them to default/blank.
  • In the Rust API, a "Context" is called a "Surface". I don't know if there's a difference.
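The vertex/index split in the first bullet is easiest to see with the classic quad example: four unique vertices, six indices, versus the six vertices (two duplicated) you'd need without an index buffer. The data below is illustrative.

```typescript
// A quad drawn as two triangles: four unique vertices, six indices.
const vertices = new Float32Array([
  -1, -1,   // 0: bottom-left  (x, y)
   1, -1,   // 1: bottom-right
   1,  1,   // 2: top-right
  -1,  1,   // 3: top-left
]);

// Draw order: triangle (0, 1, 2), then triangle (0, 2, 3).
// Vertices 0 and 2 are reused instead of duplicated.
const indices = new Uint16Array([0, 1, 2, 0, 2, 3]);

// In WebGPU these would be uploaded with device.createBuffer() plus
// queue.writeBuffer(), then drawn with renderPass.drawIndexed(indices.length).
```

For a quad the savings are trivial, but on a real mesh most vertices are shared by several triangles, which is why the index buffer exists at all.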

Getting a little more platform-specific:

TypeScript / NPM world

The best way to learn WebGPU for TypeScript I know is Alain Galvin's "Raw WebGPU" tutorial. It is a little friendlier to someone who hasn't used a low-level graphics API before than my sandbag introduction above, and it has a list of further resources at the end.

Since code snippets don't get you something runnable, Alain's tutorial links a completed source repo with the tutorial code, and also I have a sample repo which is based on Alain's tutorial code and adds simple animation as well as Preact¹⁵. Both my and Alain's examples use NPM and WebPack¹⁶.

If you don't like TypeScript: I would recommend using TypeScript anyway for WebGPU. You don't actually have to add types to anything except your WebGPU calls; you can type everything else "any". But building that pipeline object involves big trees of descriptors containing other descriptors, and it's all just plain JavaScript dictionaries, which is nice, until you misspell a key, or forget a key, or accidentally pass the GPUPrimitiveState table where it wanted the GPUVertexState table. Your choices are to let TypeScript tell you what errors you made, or be forced to reload over and over, watching things break one at a time.
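To show the kind of mistake at stake, here are simplified stand-ins for two of the descriptor types (the real ones come from the @webgpu/types package; these cut-down versions are mine):

```typescript
// Cut-down look-alikes of two WebGPU descriptor types, for illustration only.
interface GPUPrimitiveStateLike {
  topology: "triangle-list" | "line-list" | "point-list";
}
interface GPUVertexStateLike {
  entryPoint: string;
  buffers: { arrayStride: number }[];
}
interface PipelineDescriptorLike {
  vertex: GPUVertexStateLike;
  primitive: GPUPrimitiveStateLike;
}

// Well-formed: TypeScript accepts it.
const good: PipelineDescriptorLike = {
  vertex: { entryPoint: "vs_main", buffers: [{ arrayStride: 16 }] },
  primitive: { topology: "triangle-list" },
};

// Each of these is a compile error with types, and a silent runtime mystery without:
//
// const bad1: PipelineDescriptorLike = {
//   vertex: { entryPont: "vs_main", buffers: [] },  // misspelled key
//   primitive: { topology: "triangle-list" },
// };
//
// const bad2: PipelineDescriptorLike = {
//   vertex: { topology: "triangle-list" },          // primitive table in the vertex slot
//   primitive: { entryPoint: "vs_main" },
// };
```

With plain JavaScript, `bad1` and `bad2` sail through until the GPU validation layer rejects them at runtime, one reload at a time.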

I don't know what a NPM is I Just wanna write CSS and my stupid little script tags

If you're writing simple JS embedded in web pages rather than joining the NPM hivemind, honestly you might be happier using something like three.js¹⁷ in the first place, instead of putting up with WebGPU's (relatively speaking) hyper-low-level verbosity. You can include three.js directly in a script tag using existing CDNs (although I would recommend putting in a subresource SHA hash to protect yourself from the CDN going rogue).

But! If you want to use WebGPU, Alain Galvin's tutorial, or renderer.ts from his sample code, still gets you what you want. Just go through and anytime there's a little : GPUBlah wart on a variable delete it and the TypeScript is now JavaScript. And as I've said, the complexity of WebGPU is mostly in pipeline init. So I could imagine writing a single <script> that sets up a pipeline object that is good for various purposes, and then including that script in a bunch of small pages that each import¹⁸ the pipeline, feed some floats into a buffer mapped range, and draw. You could do the whole client page in like ten lines probably.

Rust

So as I've mentioned, one of the most exciting things about WebGPU to me is you can seamlessly cross-compile code that uses it without changes for either a browser or for desktop. The desktop code uses library-ized versions of the actual browser implementations so there is low chance of behavior divergence. If "include part of a browser in your app" makes you think you're setting up for a code-bloated headache, not in this case; I was able to get my Rust "Hello World" down to 3.3 MB, which isn't much worse than SDL, without even trying. (The browser hello world is like 250k plus a 50k autogenerated loader, again before I've done any serious minification work.)

If you want to write WebGPU in Rust¹⁹, I'd recommend checking out this official tutorial from the wgpu project, or the examples in the wgpu source repo. As of this writing, it's actually a lot easier to use Rust WebGPU on desktop than in browser; the libraries seem to mostly work fine on web, but the Rust-to-wasm build experience is still a bit rough. I did find a pretty good tutorial for wasm-pack here²⁰. However most Rust-on-web developers seem to use (and love) something called "Trunk". I haven't used Trunk yet but it replaces wasm-pack as a frontend, and seems to address all the specific frustrations I had with wasm-pack.

I do have also a sample Rust repo I made for WebGPU, since the examples in the wgpu repo don't come with build scripts. My sample repo is very basic²¹ and is just the "hello-triangle" sample from the wgpu project but with a Cargo.toml added. It does come with working single-line build instructions for web, and when run on desktop with --release it minimizes disk usage. (It also prints an error message when run on web without WebGPU, which the wgpu sample doesn't.) You can see this sample's compiled form running in a browser here.

C++

If you're using C++, the library you want to use is called "Dawn". I haven't touched this but there's an excellently detailed-looking Dawn/C++ tutorial/intro here. Try that first.

Posthuman Intersecting Tetrahedron

I have strange, chaotic daydreams of the future. There's an experimental project called rust-gpu that can compile Rust to SPIR-V. SPIR-V to WGSL compilers already exist, so in principle it should already be possible to write WebGPU shaders in Rust, it's just a matter of writing build tooling that plugs the correct components together. (I do feel, and complained above, that the WGSL requirement creates a roadblock for use of alternate shader languages in dynamic languages, or languages like C++ with a broken or no build system— but Rust is pretty good at complex pre-build processing, so as long as you're not literally constructing shaders on the fly then probably it could make this easy.)

I imagine a pure-Rust program where certain functions are tagged as compile-to-shader, and I can share math helper functions between my shaders and my CPU code, or I can quickly toggle certain functions between "run this as a filter before writing to buffer" or "run this as a compute shader" depending on performance considerations and whim. I have an existing project that uses compute shaders and answering the question "would this be faster on the CPU, or in a compute shader?"²² involved writing all my code twice and then writing complex scaffold code to handle switching back and forth. That could have all been automatic. Could I make things even weirder than this? I like Rust for low-level engine code, but sometimes I'd prefer to be writing TypeScript for business logic/"game" code. In the browser I can already mix Rust and TypeScript, there's copious example code for that. Could I mix Rust and TypeScript on desktop too? If wgpu is already my graphics engine, I could shove in Servo or QuickJS or something, and write a cross-platform program that runs in browser as TypeScript with wasm-bindgen Rust embedded inside or runs on desktop as Rust with a TypeScript interpreter inside. Most Rust GUI/game libraries work in wasm already, and there's this pure Rust WebAudio implementation (it's currently not a drop-in replacement for wasm-bindgen WebAudio but that could be fixed). I imagine creating a tiny faux-web game engine that is all the benefits of Electron without any the downsides. Or I could just use Tauri for the same thing and that would work now without me doing any work at all.

Could I make it weirder than that? WebGPU's spec is available as a machine-parseable WebIDL file; would that make it unusually easy to generate bindings for, say, Lua? If I can compile Rust to WGSL and so write a pure-Rust-including-shaders program, could I compile TypeScript, or AssemblyScript or something, to WGSL and write a pure-TypeScript-including-shaders program? Or if what I care about is not having to write my program in two languages and not so much which language I'm writing, why not go the other way? Write an LLVM backend for WGSL, compile it to native+wasm and write an entire-program-including-shaders in WGSL. If the w3 thinks WGSL is supposed to be so great, then why not?

Okay that's my blog post.
