Visual Effect Graph Samples

Visual Effect Graph empowers you to author next-generation visual effects through its node-based behaviors and GPU-based compute power. To get you started, we released an introduction blog post that summarizes the philosophy of the editor. Since the initial preview release at Unite LA 2018, we’ve also been publishing various VFX samples to our GitHub repository. Take a look at them and use them to build your own effects!

These samples illustrate different production scenarios that the Visual Effect Graph can handle, from simple particle systems to more complex systems with very specific behavior. All these effects are presented in separate scenes so you can browse and learn from them separately.

Getting the Samples

The first step for getting these samples is to make sure you’re running Unity 2018.3: the more recent the 2018.3 editor version, the better. I advise you to use Unity Hub to ensure you get the latest. The Visual Effect Graph samples work in the Windows and Mac editors.

When you have the right version of the editor open, get the samples project. You can download one of the source code zip or tar.gz archives from the VFX Graph Releases GitHub page, or clone the repository if you want to update regularly.

Sample Project Structure

Each sample is located in a subdirectory of the Assets/Samples directory. The main scene (used when building a player) is at the root of /Assets. This scene sequentially loads all the samples declared in the scene build list in the Build Settings window.

If you need to build a player, ensure the VisualEffectsSamples scene is included in the Build Settings at index zero, then add all the other scenes you want to cycle through.

Sample #01 – Unity Cube

Historically, this sample is one of the very first effects prototyped using early versions of Visual Effect Graph. It showcases a system of 400,000 particles with a moving emitting source that attracts particles towards a volumetric Unity cube.

The emitting sphere and its motion are self-contained in the effect, and the position is animated using a combination of per-axis sin(Time). What’s interesting about this computation is that we can determine sub-frame positions in order to reduce the discretization of the sphere’s position. You can toggle this option to check the difference between the two modes. In the example below, when only the frame time is used, the sphere moves so fast that you can see its shape discretized over space. When using per-particle total time, however, this artifact is totally gone.

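As a rough C# sketch of the same idea (the real motion lives in the graph itself; the amplitudes and frequencies here are made-up values):

```csharp
using UnityEngine;

// Hypothetical reimplementation of the emitting sphere's motion.
// Combining one sine per axis gives a smooth, looping trajectory.
public static class EmitterMotion
{
    public static Vector3 GetPosition(float time)
    {
        return new Vector3(
            Mathf.Sin(time * 1.7f) * 0.8f,  // x axis
            Mathf.Sin(time * 2.3f) * 0.5f,  // y axis
            Mathf.Sin(time * 1.1f) * 0.8f); // z axis
    }
}
```

Because the position is a pure function of time, it can be evaluated at each particle’s exact spawn time (spawn time plus age) rather than at the shared frame time, which is what removes the discretization artifact.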
Once released by the sphere, the particles are driven by two vector fields: an attractor towards the Unity cube, and a noise that enriches the motion during the attraction. The particles also collide with the emitting sphere.

The color of the particles is driven by two gradients: one for the particles nearing the moving emitting sphere, which cycles every 5 seconds, and a blue-to-pink standard Color over Life.

Using this masking trick, we simulate that the emitting source applies some fake lighting to all the particles near it.

Sample #02 – Morphing Face

Morphing Face showcases the use of Point Caches to set the initial position of particles, and also to store other attributes such as normals. Particles are spawned randomly from a point cache we baked in Houdini. We could also have used the Point Cache Bake tool (via Window/Visual Effects/Utilities/Point Cache Bake Tool) to generate this point cache from a Unity mesh.

Point cache files are imported into Unity and generate an asset with one texture per attribute (an attribute map). You can then use the Point Cache node to reference this asset: it will populate all the attribute maps and display one connector per attribute. You can then plug these into Attribute from Map blocks to fetch the values. In the example above, we sample points randomly from this point cache to create the particles.

Once created, these particles aren’t updated in a simulation (the system does not have an update context): they stay fixed in space and don’t age or die. We just compute a mask over time in the output context (shown above with green/red coloring).

This mask enables us to control many parameters of the particles by blending between two states: small non-metallic cubes and longer metallic sticks. The orientation is also blended between aligned cubes and randomly oriented sticks.

The scene also uses moving lights to show the material changes while the mask animates.

Sample #03 – Butterflies

The Butterflies sample is an example of using multiple outputs for rendering one particle. In this sample, we simulate a swarm of butterflies orbiting around a central, vertical axis. Every butterfly is defined by only one particle element and only its trajectory is simulated in the update context. In the example below, butterfly particles are highlighted by the red dots.

The animation of the wings and the body is then computed in three different output contexts: one for each wing and one for the body.

To orient a butterfly, we use a combination of its forward (velocity) vector and an up vector that we tilt back a little, so the body isn’t aligned with the trajectory but instead lifts the head up from the belly. The body is animated using a sine with a random frequency per butterfly. The wing angles are also animated using a sine with the same frequency, but slightly offset in time to simulate the damping and inertia of the body.

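A minimal sketch of that orientation and wing math, assuming made-up tilt angles, amplitudes and offsets (the actual values live in the graph):

```csharp
using UnityEngine;

public struct Butterfly
{
    public Vector3 velocity;  // simulated in the update context
    public float frequency;   // random per butterfly

    // Build the body orientation from the forward (velocity) vector and
    // an up vector tilted back a little, lifting the head from the belly.
    public Quaternion GetBodyRotation()
    {
        Vector3 forward = velocity.normalized;
        Vector3 right = Vector3.Cross(Vector3.up, forward);
        Vector3 up = Quaternion.AngleAxis(-15f, right) * Vector3.up;
        return Quaternion.LookRotation(forward, up);
    }

    // Wings use a sine with the same frequency as the body, slightly
    // offset in time to fake the damping and inertia of the body.
    public float GetWingAngle(float time)
    {
        const float offset = 0.05f;
        return Mathf.Sin((time - offset) * frequency * 2f * Mathf.PI) * 60f;
    }
}
```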
Sample #04 – Grass Wind

Grass Wind is an example that showcases the simulation of something totally different from regular particles: grass on a terrain. Using a point cache generated from terrain data, we spawn grass crops on a terrain with an up vector blended from the terrain normal and a world up vector.

Every element then interacts with the player through Position, Radius and Velocity parameters, sent to the effect based on the player character’s values.

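In script form, sending those values could look like the following sketch; the parameter names and the component layout are assumptions, and must match the parameters exposed by the graph:

```csharp
using UnityEngine;
using UnityEngine.Experimental.VFX; // VisualEffect lives here in the 2018.3 preview

public class PlayerToGrassBinder : MonoBehaviour
{
    public VisualEffect grassEffect;
    public Transform player;
    public float radius = 1.5f;

    Vector3 m_LastPosition;

    void Update()
    {
        // Derive the velocity from the character's movement since last frame.
        Vector3 velocity = (player.position - m_LastPosition) / Mathf.Max(Time.deltaTime, 1e-5f);
        m_LastPosition = player.position;

        grassEffect.SetVector3("Position", player.position);
        grassEffect.SetFloat("Radius", radius);
        grassEffect.SetVector3("Velocity", velocity);
    }
}
```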
Simulation is then driven by these rules:

    To simulate crop bending, we store values in unused attributes: velocity and alpha.

      Upon stepping on a crop, its alpha attribute goes down at a given rate until it reaches a minimum value (-2.0). When the crop is not stepped on, it regrows at a specific rate until it reaches 1.0. While transitioning from 0.0 to 1.0, the velocity value is released and diminishes until the crop becomes vertical again.

      For all crops that aren’t affected by stepping and bending, we apply an additional wind noise in the output to make them less static when idle.

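The alpha rule above can be sketched as a simple per-crop state update (the rates here are made-up values; the graph stores this state in the unused alpha attribute):

```csharp
using UnityEngine;

public static class CropBending
{
    const float MinAlpha = -2.0f;   // fully bent
    const float BendRate = 4.0f;    // assumed rate while stepped on
    const float RegrowRate = 0.5f;  // assumed regrow rate

    // Alpha goes down while the crop is stepped on, and regrows
    // towards 1.0 (fully vertical) once released.
    public static float UpdateAlpha(float alpha, bool steppedOn, float deltaTime)
    {
        if (steppedOn)
            return Mathf.Max(MinAlpha, alpha - BendRate * deltaTime);
        return Mathf.Min(1.0f, alpha + RegrowRate * deltaTime);
    }
}
```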
Sample #05 – Volumetric

The Volumetric sample is rather simple, but it demonstrates the integration with HD Render Pipeline lighting and volumetric fog. The scene is set up with a split environment, and its background sky is a simple gray. Two light sources are used, one orange and one blue. To cast shadows, each source is composed of one spotlight oriented towards the camera, with real-time shadows on. In order to simulate a punctual source, we configured another spotlight for each light source, pointing in the opposite direction.

Opaque particles are spawned from an animated source with a flipbook texture to simulate multiple elements per particle (this helps us keep the mass rich without having to use six times as many particles). The particle mass evolves using a noise, and is attracted towards a position near the camera.

Particles are rendered with cast shadows on, and use a diffusion profile with transmittance so that light leaks through the particles.

Here’s a breakdown of the lighting we used for this sample.

Sample #06 – Portal

After seeing this Houdini tutorial, we wanted to challenge ourselves by re-creating an effect from a CG package and adding our own improvements. We also took some inspiration from the RiseFX Houdini demoreel.

As a breakdown, the effect is composed of a single particle system, an inner distortion circle, and a lighting rig made of 8 line lights, all rotating in play mode.

At spawn, the particles are categorized into two groups: swift corona particles and colliding particles (even though all particles collide with the ground).

Sample #07 – AR Radar

AR Radar showcases a complex effect with many systems working together, combining internal sequencing with external sequencing from a timeline through a single float [0…1] parameter: Initialize.

This parameter is used numerous times throughout the graph to control the deploy effect while initializing the grid:

    Enemy ships are triggered after the deployment of the base effect, using a dedicated Timeline VFX track. This track sends an event multiple times to spawn enemy ships around.

    At the center is a blinking dot controlled by a Position parameter binder, linking it to a point light in the scene.

Here’s a breakdown:

Sample #08 – Voxelized Terrain

VoxelizedTerrain is a simulation of a heightfield, driven by particles that are each rendered as a cube.

Each particle is a point on a 2D grid (256×256) and samples a 2D texture based on object-space coordinates. The coordinates can be offset and scaled so the terrain scales and pans.

By sampling this heightmap and storing the value in Scale.y, we can deform all points to the actual sampled height, color each cube based on its height, and adjust material properties (for instance, smoothness for water).

You can adjust the water level as well as the input height (read from the texture) and the final elevation. All these parameters are exposed and controlled by a global script (VoxelizedTerrainController.cs).

This script handles the mouse/keyboard events to pan, scale and rotate the camera, and sets all the parameters on the Visual Effect component. It relies on the helpful ExposedParameter struct, which caches the parameter’s name and returns its integer index (from Shader.PropertyToID()).

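A sketch of what such a helper looks like (the real struct ships with the sample project; this version is reconstructed from the description above):

```csharp
using UnityEngine;

// Caches the integer ID of an exposed parameter name so per-frame
// calls avoid hashing the string repeatedly.
public struct ExposedParameter
{
    int m_Id;

    public static implicit operator ExposedParameter(string name)
    {
        return new ExposedParameter { m_Id = Shader.PropertyToID(name) };
    }

    public static implicit operator int(ExposedParameter parameter)
    {
        return parameter.m_Id;
    }
}

// Usage: declare once, reuse every frame.
// static readonly ExposedParameter Elevation = "Elevation";
// visualEffect.SetFloat(Elevation, elevation);
```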
dist = Mathf.Clamp(dist, CameraMinMaxDistance.x, CameraMinMaxDistance.y);
ViewingCamera.transform.position = CameraRoot.transform.position + dist * dir;

VisualEffect.SetVector2(Position, m_Position);
VisualEffect.SetVector2(WorldSize, m_WorldSize);

// Sliders
float inputHeightMapScale = Mathf.Lerp(InputHeightLevel.x, InputHeightLevel.y, InputHeightMapScaleSlider.value);
float elevation = Mathf.Lerp(ElevationRange.x, ElevationRange.y, ElevationSlider.value);
float waterElevation = Mathf.Lerp(WaterElevationRange.x, WaterElevationRange.y, WaterElevationSlider.value);

CameraRoot.transform.position = new Vector3(CameraRoot.transform.position.x, waterElevation, CameraRoot.transform.position.z);
ViewingCamera.transform.LookAt(CameraRoot.transform);

VisualEffect.SetFloat(InputHeightMapScale, inputHeightMapScale);
VisualEffect.SetFloat(Elevation, elevation);
VisualEffect.SetFloat(WaterElevation, waterElevation);


Sample #09 – Genie

The Genie effect is a composition of many systems that share some parameters and connect to each other using internal sequencing. The sample uses a simple script to toggle the effect on and off by clicking on the magic lamp.

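The toggle script could be as small as the following sketch; the event names and the click handling are assumptions, not the sample’s actual code:

```csharp
using UnityEngine;
using UnityEngine.Experimental.VFX;

// Requires a collider on the lamp for OnMouseDown to fire.
public class GenieToggle : MonoBehaviour
{
    public VisualEffect genieEffect;
    bool m_On;

    void OnMouseDown()
    {
        m_On = !m_On;
        genieEffect.SendEvent(m_On ? "OnPlay" : "OnStop");
    }
}
```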
The scene contains four points that define the bezier used to drive the magic flow out of the lamp. To drive the particles, we don’t use velocity, but instead a position along this bezier over the life of each particle, plus an offset computed from vector field noise.

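For reference, driving a position from a cubic bezier over a particle’s normalized lifetime t in [0, 1] boils down to the standard Bernstein form (the noise offset is then added on top):

```csharp
using UnityEngine;

public static class GenieFlow
{
    // Cubic bezier through control points p0..p3 at parameter t.
    public static Vector3 EvaluateBezier(
        Vector3 p0, Vector3 p1, Vector3 p2, Vector3 p3, float t)
    {
        float u = 1f - t;
        return u * u * u * p0
             + 3f * u * u * t * p1
             + 3f * u * t * t * p2
             + t * t * t * p3;
    }
}
```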
The last point of the bezier holds the position of the genie, and is animated within the visual effect with a 3D sine wave animation. This drives the last point of the bezier as well as the genie’s body and eyes.

The scene is set up using a single timeline and a control rig that makes it run forward or backward. Using VFX Event Tracks, we control the start and stop of particle spawning. Moreover, this timeline controls Cinemachine camera blending as well as a simple control rig.

Other Visual Effects and Future Sample Releases

All new samples will be released under the 2019.1 release track of the Visual Effect Graph package (5.x.x-preview). This means that every sample up to now will be part of the new release track, but sadly no more updates will be made to the 2018.3 samples. Stay tuned to our Twitter and Facebook to be the first to grab these new samples when we release them for 2019.1.

Also, you will soon be able to find visual effects in the Fontainebleau demo as well as the FPS Sample repository, with other production cases and solutions you can use to get inspired for your own projects.

See you pretty soon for more visual effect adventures!

Translated from: https://blogs.unity3d.com/2019/03/06/visual-effect-graph-samples/
