Animating Inside VR: Mixing Motion Capture and Keyframes

In the Unity Labs Authoring Tools Group, we explore the future of content creation, namely around how we can use XR to make 3D content creation faster, easier, and more approachable.

We’ve shipped EditorVR, which brings Unity into the headset and minimizes VR development iteration time, and we’re developing Carte Blanche, opening virtual world and experience design to a new audience of non-professional creators. We take an experiment-driven approach to our projects, since XR software is still far from having concrete standards, and we believe there’s still far more to discover than is already known.

Since we first started working on these XR authoring tools, one topic we’ve been very interested to tackle is animation tools: how could we use virtual objects to quickly sketch out a sequence, and what could XR do to make 3D animation more accessible to everyone?

Goal: Keep it small & focused; build off what others have done

An animation tool could easily constitute a year-long project, but we explicitly set out to make this one quick: one developer, one month. The goal was to test out UX paradigms that can work their way back into our larger immersive projects.

We’re big fans of the VR animation app Tvori. In Tvori, the user builds out a scene, and then records animation per-object with real-time motion capture from the controllers. We’ve loved playing with it, and with many of us having experience in flatscreen animation tools (Maya, After Effects, Flash, etc), we were hungry to also have editable splines and a full-blown keyframe/track-based timeline. So those specific features were our focus in building this project.

Our hybrid solution

In our project, the user starts in an empty grid scene, with a simple palette of objects and a blank timeline in front of them. They can assemble the scene from the objects, and then ‘arm’ the recording, so that motion capture will begin once they start moving an object. When they release the object, recording stops, the new motion curve is visible, and the timeline shows keyframes for the beginning and end of the motion. The user can reach into the motion curve and adjust its points with a smooth falloff, and adjust the keyframes on the timeline to speed up or slow down the entire motion.

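To make the record-on-grab flow concrete, here is a minimal sketch of how the capture step could work in Unity. The component and its names are our own illustration, not the project's actual code: while an armed object is held, its position is sampled into one AnimationCurve per axis, and the first and last keys become the begin/end keyframes shown on the timeline.

```csharp
using UnityEngine;

// Hypothetical sketch: record an object's position while it is grabbed,
// mirroring the "arm the recording, then move to capture" flow.
public class MotionRecorder : MonoBehaviour
{
    public bool armed;          // set by the 'arm' button on the timeline
    AnimationCurve x, y, z;     // one curve per position axis
    float startTime;
    bool recording;

    public void OnGrab()
    {
        if (!armed) return;
        x = new AnimationCurve();
        y = new AnimationCurve();
        z = new AnimationCurve();
        startTime = Time.time;
        recording = true;
    }

    void Update()
    {
        if (!recording) return;
        float t = Time.time - startTime;
        Vector3 p = transform.position;
        x.AddKey(t, p.x);
        y.AddKey(t, p.y);
        z.AddKey(t, p.z);
    }

    public void OnRelease()
    {
        recording = false;
        // The first and last keys of each curve are the keyframes the
        // timeline shows; retiming them speeds up or slows the motion.
    }

    // Playback: evaluate the recorded curves at a timeline position.
    public Vector3 Evaluate(float t)
    {
        return new Vector3(x.Evaluate(t), y.Evaluate(t), z.Evaluate(t));
    }
}
```

Sampling every frame produces dense curves, so a real tool would also decimate and smooth the keys before letting the user drag individual points with a soft falloff.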

What we learned

User feedback and visual polish is everything.

A little bit goes a long way

It’s tempting when building a new UI to just build it out of white cubes, or to think of user feedback (visual changes, sounds, haptics) as “just” polish. But that feedback and visual polish is hugely important, and even a little bit goes a long way in making a UI discoverable, meaningful, and testable. If we have to explain to a new tester what each button does, then we’re not testing the usability of the system, and moreover we’re forcing the user to keep a complicated mental model in their head, taking up bandwidth that they should be spending on actually using the tool.

In this project, any time we introduced a new UI element, we’d make sure to take a minute to actually model out a basic icon, making sure that testers found UI elements self-explanatory.  We don’t think of it as “polishing the art” (it was still programmer art, after all!), but just making something that early testers actually can use and give meaningful feedback on.

Give as much feedback as possible: haptic, aural, visual

Ultimately, we find that when giving the user feedback, we should use every outlet we have. If the user is hitting a button, it should light up, make a noise, and vibrate the controller. This doesn’t just apply to the moment of impact, but at every stage of the interaction: we have hover start/stay/stop, and attach start/stay/stop, so we could potentially have at least six pieces of interaction feedback per element. We try to at least provide feedback for hover, attach, and end/confirm. In 2D UI, you often get these feedback patterns for free, but in XR, you have to build them from scratch.

To help think through what feedback to give, we drew out a spreadsheet of each state (default, hover, selected, confirmation) and each element (animatable object, motion curve, keyframe, each button), so we could identify which elements were or were not reflecting different interactions.

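To illustrate, here is one way that states-by-elements matrix could map onto code; the enum and component below are our own sketch, not the project's API. Each interaction stage fans out to the visual and aural channels, with haptics left as a device-specific call.

```csharp
using UnityEngine;

// Hypothetical sketch: one entry point per interaction stage, so every
// element can light up, play a sound, and buzz the controller together.
public enum InteractionStage
{
    HoverStart, HoverStay, HoverEnd,
    AttachStart, AttachStay, AttachEnd
}

public class ElementFeedback : MonoBehaviour
{
    public Renderer visual;            // tinted per stage
    public AudioSource audioSource;
    public AudioClip hoverClip, attachClip, confirmClip;

    public void Play(InteractionStage stage)
    {
        switch (stage)
        {
            case InteractionStage.HoverStart:
                visual.material.color = Color.cyan;
                audioSource.PlayOneShot(hoverClip);
                break;
            case InteractionStage.AttachStart:
                visual.material.color = Color.yellow;
                audioSource.PlayOneShot(attachClip);
                // Haptics are device-specific: an OVRInput vibration or a
                // SteamVR haptic pulse would be triggered here as well.
                break;
            case InteractionStage.AttachEnd:
                visual.material.color = Color.white;
                audioSource.PlayOneShot(confirmClip);
                break;
        }
    }
}
```

Filling in one Play case per cell of the spreadsheet makes it obvious which elements are silent for which interactions.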

Grab vs select

We’ve tried a few different approaches for selection versus manipulation of objects in our authoring projects, and this time made the most explicit distinction yet: the primary trigger (Touch index trigger / Vive trigger) will select an object, and the secondary trigger (Touch hand trigger / Vive grip) will manipulate it. This turned out to work really well in this project, since everything you can select can also be moved, and we wanted to avoid accidentally moving anything.

EditorVR has a similar concept, where you can move Workspaces using the secondary trigger and interact with them with the primary trigger, and select objects at a distance vs. manipulate them directly (both using the primary trigger).

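A minimal sketch of that split, with placeholder axis names standing in for the Touch/Vive bindings (the project's real input code may differ): the primary trigger only ever selects, and the secondary trigger is the only thing that can move an object.

```csharp
using UnityEngine;

// Hypothetical sketch: primary trigger = select, secondary trigger = grab.
// "SelectTrigger" and "GrabTrigger" are placeholder axes you would map to
// the Touch index/hand triggers or the Vive trigger/grip in Input Manager.
public class SelectVsGrab : MonoBehaviour
{
    public float pressThreshold = 0.5f;
    bool selectHeld, grabHeld;

    void Update()
    {
        bool selectNow = Input.GetAxis("SelectTrigger") > pressThreshold;
        bool grabNow   = Input.GetAxis("GrabTrigger")   > pressThreshold;

        if (selectNow && !selectHeld) Select();     // never moves anything
        if (grabNow && !grabHeld)     BeginGrab();  // manipulation only
        if (!grabNow && grabHeld)     EndGrab();

        selectHeld = selectNow;
        grabHeld   = grabNow;
    }

    void Select()    { /* highlight the hovered object, show its tracks */ }
    void BeginGrab() { /* parent the hovered object to this controller */ }
    void EndGrab()   { /* release; if recording was armed, stop capture */ }
}
```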

Keep UI close to the user, and let them summon it

When designing 2D interfaces, we can simply stick a UI control in the upper-left corner of the window, and be done with it. Not so in VR. Especially on a room-scale setup, the user could start the app from anywhere in the room. Some apps will simply plant their UI in the center of the tracking volume, which often means the user will start out on the wrong side of the interface, or worse, inside it. The solution that we’ve found works well in each of our authoring tools is to start any UI within arms’ reach of the user, and, if they walk away, let them “summon” the panel back to an interactable range.

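A sketch of that behavior under our own assumptions: the panel spawns at arm's reach in front of the head transform, and a summon call (bound to a controller button) tweens it back when the user has wandered off.

```csharp
using UnityEngine;

// Hypothetical sketch: start UI within arm's reach and let the user
// summon it back into an interactable range.
public class SummonablePanel : MonoBehaviour
{
    public Transform head;              // the HMD / main camera transform
    public float spawnDistance = 0.6f;  // roughly arm's reach, in meters
    public float summonSpeed = 4f;

    bool summoning;

    void Start() { PlaceInFrontOfUser(); }

    public void PlaceInFrontOfUser()
    {
        // Face the user at eye height, ignoring head pitch.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        transform.position = head.position + forward * spawnDistance;
        transform.rotation = Quaternion.LookRotation(forward);
    }

    public void Summon() { summoning = true; }

    void Update()
    {
        if (!summoning) return;
        Vector3 target = head.position + head.forward * spawnDistance;
        transform.position = Vector3.Lerp(transform.position, target, summonSpeed * Time.deltaTime);
        if ((transform.position - target).sqrMagnitude < 0.0001f) summoning = false;
    }
}
```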

Give your UI some physicality

Flatscreen interfaces generally don’t have inertia, and it can be surprising and even unpleasant when they do. A mouse is a superhuman input device for manipulating points and data, and hardly ever thought of as a literal representation of your physical body.

In VR, the exact opposite is true: since we do very much embody tracked input devices, objects must have inertia and physicality. If we grab an object in VR and give it a hard push, it’s very jarring for the object to suddenly stop in its tracks when we let go. This is obvious when we’re talking about throwing a virtual rock, but less clear in the case of interface panels.

But in our experiments, and using other VR apps that do or don’t apply physicality to their UI, we find that it’s just as essential. Of course there’s a balance to strike, because you probably don’t want your UI to clatter to the ground after you throw it. The solution we’re using in the Animator is a simple non-kinematic, non-gravity-affected Rigidbody with some drag; you can give it a good push and it’ll float away, but also slow down quickly and stay close enough that you won’t have to go hunt down where all your UI has floated off to. To be exact, we use Drag = 8, Angular Drag = 16 (because accidental rotation when you release a panel is very annoying), which makes for a pretty subtle, but nice, effect.

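Those settings translate directly into a Rigidbody setup. The component wrapper below is our own framing, but the values are the ones quoted above:

```csharp
using UnityEngine;

// The drag values come straight from the text: enough inertia to feel
// physical, enough damping that panels never drift out of reach.
[RequireComponent(typeof(Rigidbody))]
public class FloatyPanel : MonoBehaviour
{
    void Awake()
    {
        var rb = GetComponent<Rigidbody>();
        rb.isKinematic = false;  // non-kinematic: it responds to a push
        rb.useGravity  = false;  // non-gravity-affected: it floats
        rb.drag        = 8f;     // slows down quickly after a shove
        rb.angularDrag = 16f;    // accidental spin on release is worse
    }
}
```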

Wrapping it up

There’s always more to do and explore, especially on a project intentionally kept small in scope; this one’s no exception. We’d love to experiment with meaningful depth in the timeline interface, both for element interactions and animation-specific uses. We’re curious to try moving away from the central timeline workspace mentality and instead have smaller individual timelines attached to each object. We have more questions about how to smoothly combine both motion capture and strict keyframe timing.

But, even more than all of that, we’re eager to apply what we’ve learned so far to our other projects, and to continue experimenting with new ideas. Most of these remaining curiosities and questions will very likely make a comeback in the next project.

We think animation tools in XR are a genuinely useful topic, and we’re eager to see what comes out of the community. In the meantime, check out our build. We hope you enjoy playing with it, and are able to take and expand upon some of these designs in your own projects.

We may open-source the project in the future, depending on community interest. In the meantime, if you’re interested in building on this tool, collaborating, or have some feedback for us, get in touch at labs@unity3d.com!

Source: https://blogs.unity3d.com/2017/07/24/animating-inside-vr-mixing-motion-capture-and-keyframes/
