How to Create the Apple Fifth Avenue Cube in WebGL


In September 2019 Apple reopened the doors of its historic store on Fifth Avenue, and to celebrate the special event it made a landing page with a really neat animation of a cube made of glass. You can see the original animation in this video.

What caught my attention is the way they played with the famous glass cube to make the announcement.

As a Creative Technologist I constantly experiment and study the potential of web technologies, and I thought it might be interesting to try to replicate this using WebGL.

In this tutorial I’m going to explain step-by-step the techniques I used to recreate the animation.

You will need an intermediate level of knowledge of WebGL. I will omit some parts of the code for brevity and assume you already know how to set up a WebGL application. The techniques I’m going to show are translatable to any WebGL library / framework.

Since WebGL APIs are very verbose, I decided to go with Regl for my experiment:

Regl is a new functional abstraction for WebGL. Using Regl is easier than writing raw WebGL code because you don’t need to manage state or binding; it’s also lighter and faster and has less overhead than many existing 3d frameworks.

Drawing the cube

The first step is to create the program to draw the cube.

Since the shape we’re going to create is a prism made of glass, we must guarantee the following characteristics:

  • It must be transparent
  • The cube's internal faces must reflect the internal content
  • The cube edges must distort the internal content

Front and back faces

In order to get what we want, at render time we’ll draw the shape in two passes:

  1. In the first pass we’ll draw only the back faces with the internal reflection.

  2. In the second pass we’ll draw the front faces with the content, masked and distorted at the edges.

Drawing the shape in two passes simply means calling the WebGL program twice, each time with a different configuration. WebGL has the concept of front-facing and back-facing triangles, which gives us the ability to decide what to draw by turning on the face-culling feature.

With that feature turned on, WebGL defaults to “culling” back facing triangles. “Culling” in this case is a fancy word for “not drawing”.

WebGL Fundamentals
// draw front faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);

// draw back faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);

Now that we have gone through the part of setting up the program, let’s start to render the cube.

Coloured borders

What we want to obtain is a transparent shape with coloured borders. Starting from a flat white cube, we'll first add the rainbow color and then mask it with the borders:

First of all, create the GLSL function that returns the rainbow:

const float PI2 = 6.28318530718;

vec4 radialRainbow(vec2 st, float tick) {
  vec2 toCenter = vec2(0.5) - st;
  float angle = mod((atan(toCenter.y, toCenter.x) / PI2) + 0.5 + sin(tick), 1.0);

  // colors
  vec4 a = vec4(0.15, 0.58, 0.96, 1.0);
  vec4 b = vec4(0.29, 1.00, 0.55, 1.0);
  vec4 c = vec4(1.00, 0.0, 0.85, 1.0);
  vec4 d = vec4(0.92, 0.20, 0.14, 1.0);
  vec4 e = vec4(1.00, 0.96, 0.32, 1.0);

  float step = 1.0 / 10.0;

  vec4 color = a;

  color = mix(color, b, smoothstep(step * 1.0, step * 2.0, angle));
  color = mix(color, a, smoothstep(step * 2.0, step * 3.0, angle));
  color = mix(color, b, smoothstep(step * 3.0, step * 4.0, angle));
  color = mix(color, c, smoothstep(step * 4.0, step * 5.0, angle));
  color = mix(color, d, smoothstep(step * 5.0, step * 6.0, angle));
  color = mix(color, c, smoothstep(step * 6.0, step * 7.0, angle));
  color = mix(color, d, smoothstep(step * 7.0, step * 8.0, angle));
  color = mix(color, e, smoothstep(step * 8.0, step * 9.0, angle));
  color = mix(color, a, smoothstep(step * 9.0, step * 10.0, angle));

  return color;
}

#pragma glslify: export(radialRainbow);

Glslify is a node.js-style module system that lets us split GLSL code into modules.

https://github.com/glslify/glslify

Before going ahead, let’s talk a bit about gl_FragCoord.

Available only in the fragment language, gl_FragCoord is an input variable that contains the window canvas relative coordinate (x, y, z, 1/w) values for the fragment.

khronos.org

If you notice, the function radialRainbow needs a variable called st as its first parameter, whose values must be the pixel coordinates relative to the canvas and, like UVs, go between 0 and 1. The variable st is the result of dividing gl_FragCoord by the resolution:

/**
 * gl_FragCoord: pixel coordinates
 * u_resolution: the resolution of our canvas
 */
vec2 st = gl_FragCoord.xy / u_resolution;

The following image explains the difference between using UVs and st.

Once we’re able to render the radial gradient, let’s create the function to get the borders:

float borders(vec2 uv, float strokeWidth) {
  vec2 borderBottomLeft = smoothstep(vec2(0.0), vec2(strokeWidth), uv);

  vec2 borderTopRight = smoothstep(vec2(0.0), vec2(strokeWidth), 1.0 - uv);

  return 1.0 - borderBottomLeft.x * borderBottomLeft.y * borderTopRight.x * borderTopRight.y;
}

#pragma glslify: export(borders);
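To see what borders computes, here is a plain JavaScript port of the same math (a sketch for illustration only – the real function runs in GLSL):

```javascript
// smoothstep, as defined by GLSL
const smoothstep = (edge0, edge1, x) => {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
};

// returns 1.0 near the edges of the face, 0.0 in the middle
const borders = ([u, v], strokeWidth) => {
  const blX = smoothstep(0, strokeWidth, u);
  const blY = smoothstep(0, strokeWidth, v);
  const trX = smoothstep(0, strokeWidth, 1 - u);
  const trY = smoothstep(0, strokeWidth, 1 - v);
  return 1 - blX * blY * trX * trY;
};

console.log(borders([0.5, 0.5], 0.011)); // center of the face → 0
console.log(borders([0.0, 0.5], 0.011)); // on the left edge → 1
```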

And then our final fragment shader:

precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  gl_FragColor = bordersColor;
}

Drawing the content

Please note that the Apple logo is a trademark of Apple Inc., registered in the U.S. and other countries. We are only using it here for demonstration purposes.

Now that we have the cube, it’s time to add the Apple logo and all texts.

If you notice, the content is not only rendered inside the cube, but also on the three back faces as a reflection – that means rendering it four times. In order to keep performance high, we'll draw it only once off-screen at render time and then reuse it in the various fragment shaders.

In WebGL we can do this thanks to FBOs:

The frame buffer object architecture (FBO) is an extension to OpenGL for doing flexible off-screen rendering, including rendering to a texture. By capturing images that would normally be drawn to the screen, it can be used to implement a large variety of image filters, and post-processing effects.

Wikipedia

In Regl it’s pretty simple to play with FBOs:

...

// here we'll put the logo and the texts
const textures = [
  ...
]

// we create the FBO
const contentFbo = regl.framebuffer()

// animate is executed at render time
const animate = ({viewportWidth, viewportHeight}) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  // we tell WebGL to render off-screen, inside the FBO
  contentFbo.use(() => {
    /**
     * – Content program
     * It'll run as many times as the textures number
     */
    content({
      textures
    })
  })

  /**
   * – Cube program
   * It'll run twice, once for the back faces and once for front faces
   * Together with front faces we'll render the content as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo, // we pass the FBO as a normal texture
    },
  ])
}

regl.frame(animate)

And then update the cube fragment shader to render the content:

precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}

Masking

In the Apple animation every cube face shows a different texture, which means we have to create a special mask that follows the cube's rotation.

We'll render the masking information inside an FBO that we'll pass to the content program.

Let's associate a different maskId to each texture – every ID corresponds to a color that we'll use as test data:

const textures = [
  {
    texture: logoTexture,
    maskId: 1,
  },
  {
    texture: logoTexture,
    maskId: 2,
  },
  {
    texture: logoTexture,
    maskId: 3,
  },
  {
    texture: text1Texture,
    maskId: 4,
  },
  {
    texture: text2Texture,
    maskId: 5,
  },
]

To make each maskId correspond to a colour, we just have to convert it to binary and then read it as RGB:

MaskID 1 => [0, 0, 1] => Blue
MaskID 2 => [0, 1, 0] => Lime
MaskID 3 => [0, 1, 1] => Cyan
MaskID 4 => [1, 0, 0] => Red
MaskID 5 => [1, 0, 1] => Magenta
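The conversion can be sketched in JavaScript (illustration only – it mirrors the decoding we'll do later in the fragment shader):

```javascript
// maskId (decimal) → [r, g, b] with binary 0/1 channels
const maskIdToColor = (id) => [(id >> 2) & 1, (id >> 1) & 1, id & 1];

// [r, g, b] → maskId, the same formula the fragment shader uses:
// int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0)
const colorToMaskId = ([r, g, b]) => r * 4 + g * 2 + b;

console.log(maskIdToColor(5)); // [1, 0, 1] → Magenta
console.log(colorToMaskId([0, 1, 1])); // 3 → Cyan
```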

The mask will be nothing but our cube with the faces filled with one of the colours shown above – obviously in this case we just need to draw the front faces:

...

maskFbo.use(() => {
  cubeMask([
    {
      cullFace: 'BACK',
      colorFaces: [
        [0, 1, 1], // front face => mask 3
        [0, 0, 1], // right face => mask 1
        [0, 1, 0], // back face => mask 2
        [0, 1, 1], // left face => mask 3
        [1, 0, 0], // top face => mask 4
        [1, 0, 1], // bottom face => mask 5
      ]
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo
  })
})

...

Our mask will look like this:

Now that we have the mask available inside the fragment of the content program, let’s write down the test:

precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform int u_maskId;
uniform sampler2D u_mask;

varying vec2 v_uv;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture = texture2D(u_texture, v_uv);

  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}

Distortion

The distortion at the edges is the characteristic that gives the feeling of a glass material.

The effect is achieved by simply shifting the pixels near the edges towards the center of each face – the following video shows how it works:

For each pixel to move we need two pieces of information:

  1. How much to move the pixel

  2. The direction in which we want to move the pixel

These two pieces of information are contained inside the Displacement Map which, as before for the mask, we’ll store in an FBO that we’ll pass to the content program:

...

displacementFbo.use(() => {
  cubeDisplacement([
    {
      cullFace: 'BACK'
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo,
    displacement: displacementFbo
  })
})

...

The displacement map we’re going to draw will look like this:

Let’s see in detail how it’s made.

The green channel is the length, that is how much to move the pixel – the greener the greater the displacement. Since the distortion must be present only at the edges, we just have to draw a green frame on each face.

To get the green frame we just have to reuse the borders function and put the result in the gl_FragColor green channel:

precision mediump float;

varying vec2 v_uv;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(0.0, length, 0.0, 1.0);
}

The red channel is the direction, encoded as an angle packed into the 0–1 range. Finding this value is trickier because we need the position of each point relative to the world – since our cube rotates, even the UVs follow it and therefore we lose any fixed reference. In order to compute the position of every pixel in relation to the center, we need two varying variables from the vertex shader:

  1. v_point: the world position of the current pixel.

  2. v_center: the world position of the center of the face.

The vertex shader:

precision mediump float;

attribute vec3 a_position;
attribute vec3 a_center;
attribute vec2 a_uv;

uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_world;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

void main() {
  vec4 position = u_projection * u_view * u_world * vec4(a_position, 1.0);
  vec4 center = u_projection * u_view * u_world * vec4(a_center, 1.0);

  v_point = position.xyz;
  v_center = center.xyz;
  v_uv = a_uv;

  gl_Position = position;
}

At this point, in the fragment shader, we just have to find the distance from the center, calculate the relative angle, and put the result in the gl_FragColor red channel – here's the updated shader:

precision mediump float;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

const float PI2 = 6.283185307179586;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Red channel – which direction to move the pixel
  vec2 toCenter = v_center.xy - v_point.xy;
  float direction = (atan(toCenter.y, toCenter.x) / PI2) + 0.5;

  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(direction, length, 0.0, 1.0);
}
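A quick JavaScript round-trip of this encoding (illustration only) shows how the angle is packed into the red channel's 0–1 range, and that the +0.5 offset makes the decoded vector point opposite to toCenter:

```javascript
const PI2 = Math.PI * 2;

// encode: same math as the displacement fragment shader
const encodeDirection = (toCenter) =>
  Math.atan2(toCenter.y, toCenter.x) / PI2 + 0.5;

// decode: same math as the content fragment shader
const decodeDirection = (r) => ({ x: Math.cos(r * PI2), y: Math.sin(r * PI2) });

const r = encodeDirection({ x: 1, y: 0 }); // 0.5 – angle 0 packed into [0, 1]
const dir = decodeDirection(r);            // ≈ { x: -1, y: 0 }, opposite to toCenter
```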

Now that we have our displacement map, let’s update the content fragment shader:

precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform sampler2D u_displacement;
uniform int u_maskId;
uniform sampler2D u_mask;

varying vec2 v_uv;

const float PI2 = 6.28318530718;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 displacement = texture2D(u_displacement, st);
  // get the direction by taking the displacement red channel and convert it in a vector2
  vec2 direction = vec2(cos(displacement.r * PI2), sin(displacement.r * PI2));
  // get the length by taking the displacement green channel
  float length = displacement.g;

  vec2 newUv = v_uv;
  
  // calculate the new uvs
  newUv.x += (length * 0.07) * direction.x;
  newUv.y += (length * 0.07) * direction.y;

  vec4 texture = texture2D(u_texture, newUv);

  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}

Reflection

Since reflection is quite a complex topic, I’ll just give you a quick introduction on how it works so that you can more easily understand the source I shared.

Before continuing, it’s necessary to understand the concept of camera in WebGL. The camera is nothing but the combination of two matrices: the view and projection matrix.

The projection matrix is used to convert world space coordinates into clip space coordinates. A commonly used projection matrix, the perspective matrix, is used to mimic the effects of a typical camera serving as the stand-in for the viewer in the 3D virtual world. The view matrix is responsible for moving the objects in the scene to simulate the position of the camera being changed.

developer.mozilla.org

I suggest that you also get familiar with these concepts before we dig deeper.
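As a refresher, here's a minimal look-at view matrix in plain JavaScript (column-major, as WebGL expects) – a sketch of what libraries like gl-matrix do for you:

```javascript
// small vector helpers
const sub = (a, b) => a.map((v, i) => v - b[i]);
const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const normalize = (a) => {
  const l = Math.hypot(...a);
  return a.map((v) => v / l);
};

// view matrix: moves the world so the camera sits at the origin looking down -z
const lookAt = (eye, center, up) => {
  const z = normalize(sub(eye, center));
  const x = normalize(cross(up, z));
  const y = cross(z, x);
  // column-major, ready for gl.uniformMatrix4fv
  return [
    x[0], y[0], z[0], 0,
    x[1], y[1], z[1], 0,
    x[2], y[2], z[2], 0,
    -dot(x, eye), -dot(y, eye), -dot(z, eye), 1,
  ];
};

const view = lookAt([0, 0, 5], [0, 0, 0], [0, 1, 0]);
// view[14] === -5: the scene is pushed 5 units down the -z axis
```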

In a 3D environment, reflections are obtained by creating a camera for each reflective surface and placing it accordingly based on the position of the viewer – that is the eye of the main camera.

In our case, every face of the cube is a reflective surface, that means we need 6 different cameras whose position depends on the viewer and the cube rotation.

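The core of the trick is mirroring the viewer's eye across the plane of each face. A minimal sketch of that reflection (a hypothetical helper for illustration, not the article's actual code):

```javascript
// reflect a point p across the plane through planePoint with unit normal n
const reflectAcrossPlane = (p, planePoint, n) => {
  const d =
    (p[0] - planePoint[0]) * n[0] +
    (p[1] - planePoint[1]) * n[1] +
    (p[2] - planePoint[2]) * n[2]; // signed distance from the plane
  return [p[0] - 2 * d * n[0], p[1] - 2 * d * n[1], p[2] - 2 * d * n[2]];
};

// viewer at z = 5, reflective face lying on the plane z = 1
const mirroredEye = reflectAcrossPlane([0, 0, 5], [0, 0, 1], [0, 0, 1]);
// → [0, 0, -3]: the reflection camera sits behind the face
```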
WebGL Cubemaps

Every camera generates a texture for each inner face of the cube. Instead of creating a single framebuffer for every face, we can use the cube mapping technique.

Another kind of texture is a cubemap. It consists of 6 textures representing the 6 faces of a cube. Instead of the traditional texture coordinates that have 2 dimensions, a cubemap uses a normal, in other words a 3D direction. Depending on the direction the normal points one of the 6 faces of the cube is selected and then within that face the pixels are sampled to produce a color.

WebGL Fundamentals

So we just have to store what the six cameras “see” in the right cell – this is what our cubemap will look like:
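For intuition, face selection can be sketched in JavaScript: the largest absolute component of the direction decides which of the 6 textures gets sampled (illustration only – the GPU does this inside textureCube):

```javascript
// which cubemap face a 3D direction selects
const cubemapFace = ([x, y, z]) => {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  if (ax >= ay && ax >= az) return x > 0 ? '+X' : '-X';
  if (ay >= az) return y > 0 ? '+Y' : '-Y';
  return z > 0 ? '+Z' : '-Z';
};

console.log(cubemapFace([0.9, 0.1, 0.2]));  // '+X'
console.log(cubemapFace([0.1, -0.8, 0.3])); // '-Y'
```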

Let’s update our animate function by adding the reflection:

...

// this is a normal FBO
const contentFbo = regl.framebuffer()

// this is a cube FBO, which means it's composed of 6 textures
const reflectionFbo = regl.framebufferCube(1024)

// animate is executed at render time
const animate = ({viewportWidth, viewportHeight}) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  contentFbo.use(() => {
    ...
  })

  /**
   * – Reflection program
   * we'll iterate 6 times over the reflectionFbo and draw inside the 
   * result of each camera
   */
  reflection({
    reflectionFbo,
    cameraConfig,
    texture: contentFbo
  })

  /**
   * – Cube program
   * with the back faces we'll render the reflection as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
      reflection: reflectionFbo,
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo,
    },
  ])
}

regl.frame(animate)

And then update the cube fragment shader.

In the fragment shader we need to use a samplerCube instead of a sampler2D and use textureCube instead of texture2D. textureCube takes a vec3 direction so we pass the normalized normal. Since the normal is a varying and will be interpolated we need to normalize it.

WebGL Fundamentals
precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;
uniform samplerCube u_reflection;

varying vec2 v_uv;
varying float v_depth;
varying vec3 v_normal;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  // if u_pass is 1, we're drawing back faces
  if (u_pass == 1) {
    vec3 normal = normalize(v_normal);
    texture = textureCube(u_reflection, normal);
  }

  // if u_pass is 2, we're drawing front faces
  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}

Conclusion

This article may give you a general idea of the techniques I used to replicate the Apple animation. If you want to learn more, I suggest you download the source and have a look at how it works. If you have any questions, feel free to ask me on Twitter (@lorenzocadamuro); hope you have enjoyed it!

Translated from: https://tympanus.net/codrops/2019/12/20/how-to-create-the-apple-fifth-avenue-cube-in-webgl/
