Qt Quick 3D: interactive 2D content

Tuesday January 25, 2022 by Shawn Rutledge | Comments

Qt Quick 3D has some new features in 6.2. One of them is that you can map interactive Qt Quick scenes onto 3D objects.

During a hackathon last year, we developed the Kappa Tau Station demo: a model of a space station in which you can use the WASD keys to walk around (as in many games), but also containing some 2D UI elements on some surfaces. For example you can:

  • edit the text on the green "blackboard"
  • press a 3D button on a keypad on the desk, to launch a Wayland application which will be shown on the screen at that desk (only on Linux)
  • interact with the Wayland application (for example, operate kcalc just as you would on your KDE desktop)
  • press virtual touchscreens to open the doors or to fire the particle weapon

The demo is available in this repository, along with some others:
KappaTau · master · public-demos / qtquick3d · GitLab

It's worthwhile to read the qml files to see how we did everything.

You have a couple of choices for how to map a 2D scene into a 3D scene. One is to simply declare your 2D items (like a Rectangle, or your own component in another QML file) inside a Node. Then the Node sets the position and orientation of an infinite plane in 3D, onto which the 2D scene is mapped.

import QtQuick
import QtQuick3D

View3D {
    width: 600; height: 480

    environment: SceneEnvironment {
        clearColor: "#111"
        backgroundMode: SceneEnvironment.Color
    }

    PerspectiveCamera { z: 600 }

    DirectionalLight { }

    Node {
        position: "-128, 128, 380"
        eulerRotation.y: 25
        Rectangle {
            width: 256; height: 256; radius: 10
            color: "#444"
            border { color: "cyan"; width: 2 }
            Text {
                color: "white"
                anchors.centerIn: parent
                text: "hello world"
            }
        }
    }
}

Qt Quick doesn't clip child Items by default, and we don't want to create an arbitrary "edge" here either, so the 2D scene sits on an infinite plane in the unified 2D/3D scene graph.  Maybe your scene emits 2D particles that should keep going past the edge of the declared items, for example:
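
Here is a minimal sketch of that idea (not from the demo; the image file name is a hypothetical placeholder): a 2D ParticleSystem from QtQuick.Particles declared inside a Node, with nothing clipping its particles to the emitter's bounds.

import QtQuick
import QtQuick.Particles
import QtQuick3D

View3D {
    width: 600; height: 480

    environment: SceneEnvironment {
        clearColor: "#111"
        backgroundMode: SceneEnvironment.Color
    }

    PerspectiveCamera { z: 600 }

    DirectionalLight { }

    Node {
        eulerRotation.y: 20
        ParticleSystem {
            width: 128; height: 128   // only the emitter's area; particles are not clipped to it
            ImageParticle { source: "star.png" }   // hypothetical image file
            Emitter {
                anchors.fill: parent
                emitRate: 40
                lifeSpan: 3000
                velocity: AngleDirection { magnitude: 60; angleVariation: 360 }
            }
        }
    }
}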

Another way is to map the 2D scene onto the surface of a 3D Model; in that case, it's declared as one of the textures in the Material, as in this snippet from MacroKeyboard.qml:

Model {
    id: miniScreen
    source: "#Rectangle"
    pickable: true  // <-- needed for interactive content
    position: Qt.vector3d(...)
    scale: Qt.vector3d(...)
    materials: DefaultMaterial {
        emissiveFactor: Qt.vector3d(1, 1, 1)
        emissiveMap: diffuseMap
        diffuseMap: Texture {
            sourceItem: Rectangle { // 2D subscene begins here
                width: ...; height: ...
                color: tap.pressed ? "red" : "beige"
                TapHandler { id: tap }
                ...
            }
        }
    }
}

Because the model source is #Rectangle, it will make a limited-size planar subscene; the diffuseMap sets the color at each pixel inside the rectangle by sampling the texture into which the 2D scene is rendered. If you only set diffuseMap, you need to apply suitable lighting to this scene to be able to see it. But in this case, we turn on light emission by setting emissiveFactor (the brightness of the red, green and blue channels); and by binding diffuseMap to emissiveMap, the color of each pixel in the 2D scene controls the color emitted from that pixel of the rectangle model. So this will appear like a little glowing mini touchscreen that you could apply to the top of a 3D button, or to some other suitable location in your 3D scene. Because there is a 2D subscene, it provides the opportunity to have a TapHandler; and if you want to be able to press the "button", for example, the tap.pressed property can be bound to the z coordinate of the model to make it move down while you press the TapHandler.
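
As a hedged sketch of that last idea (not the demo's actual code; the primitive, scale and offset values are made up for illustration), the model's z position can simply be bound to the handler's pressed state, so the "button" sinks a little while it is pressed:

Model {
    source: "#Cube"
    pickable: true
    z: buttonTap.pressed ? -4 : 0   // sink slightly while pressed (illustrative values)
    scale: Qt.vector3d(0.5, 0.1, 0.5)
    materials: DefaultMaterial {
        diffuseMap: Texture {
            sourceItem: Rectangle {
                width: 128; height: 128
                color: buttonTap.pressed ? "orange" : "gray"   // needs a light in the scene to be visible
                TapHandler { id: buttonTap }
            }
        }
    }
}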

In addition to #Rectangle, we have a few more built-in primitive models. If you map a 2D scene onto a #Cube, it will be repeated on each face of the cube, and you'll be able to interact with it on any face. In general, if you are creating your own models (for example in Blender), you need to control the mapping of texture (UV) coordinates to the part of the model where you want the 2D scene (texture) to be displayed.

So far in Qt Quick 3D, interactive content needs to be put into a 2D "subscene" in the 3D scene. (We have done experiments with using input handlers directly on 3D objects, but have not yet shipped this feature.) Making this possible required some extensive refactoring in the Qt Quick event delivery code. Now, each QQuickWindow has a QQuickDeliveryAgent, and each 2D subscene in 3D has a separate QQuickDeliveryAgent. When you press a mouse button or a touchpoint on your application window, the window's delivery agent looks for delivery targets in the outer 2D scene; those items are planar, and as we "visit" each item, most of the time we only need to translate the press point according to the item's position in the scene. But then we come to the View3D, an Item subclass that contains the rendering of your 3D scene, and event delivery gets more complicated. At the time that the press occurs, View3D needs to do "picking": pretend that a ray is directed into the scene under the press point, and find which 3D nodes the ray intersects, on which facet of which model, and at which UV coordinates. Those intersections get sorted by distance from the camera; and then we can continue trying to deliver the event to any items or handlers that might be attached to the 3D objects.

To keep the 2D scene working the same as it has always been in Qt Quick, we need grabbing to continue to work (QPointerEvent::setExclusiveGrabber() and addPassiveGrabber()). So on press, picking is done from scratch; a 2D item or handler may grab the QEventPoint, and that requires us to additionally remember which facet of which 3D model contains which 2D subscene in which the grab occurred. As you continue to drag your finger or your mouse, we need to repeat the ray-casting, but only to find the intersection of the ray with that same 2D scene.
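
Incidentally, application code can cast such a pick ray itself. Here is a hedged sketch using View3D's pick() method; the result's property names (objectHit, uvPosition) are taken from the View3D documentation, but treat the details as an assumption and check them against your Qt version:

View3D {
    id: view
    anchors.fill: parent
    // ... camera, lights and pickable models as usual ...

    MouseArea {
        anchors.fill: parent
        onClicked: (mouse) => {
            // pick() casts a ray from the camera through the given view coordinates
            var result = view.pick(mouse.x, mouse.y)
            if (result.objectHit)
                console.log("hit", result.objectHit, "at UV", result.uvPosition)
        }
    }
}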

But in general 3D scenes, the models may be moved, rotated and scaled arbitrarily, at any speed; so for delivery of hover events, we cannot assume anything: the models may be moving, and your mouse may be moving too. So delivery of each hover event requires picking in the 3D scene.

Adding a Wayland surface to a 2D scene isn't hard. In the Kappa Tau project, it's confined to compositor.qml (so that when you are running on a non-Linux platform, main.cpp can load View.qml instead, and you will see everything else, without the Wayland functionality). Because the scene contains a set of discrete virtual "screens", it's a good fit for the Wayland IVI extension, to be able to give each "screen" an ID. Externally on the command line, while the demo is running, you can launch arbitrary Wayland applications on those virtual screens by setting the QT_IVI_SURFACE_ID environment variable, e.g.

QT_IVI_SURFACE_ID=2 qml -platform wayland my.qml
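
On the compositor side, the structure is roughly like Qt Wayland's minimal-qml example. This is a hedged, much simplified sketch rather than the demo's actual compositor.qml, and the import and type names should be checked against your Qt version (in newer releases IviApplication lives in the QtWayland.Compositor.IviApplication submodule):

import QtQuick
import QtWayland.Compositor

WaylandCompositor {
    id: compositor

    WaylandOutput {
        compositor: compositor
        sizeFollowsWindow: true
        window: Window {
            visible: true
            width: 800; height: 600
            // in the real demo, each virtual "screen" in the 3D scene would be a target area
            Item { id: screenArea; anchors.fill: parent }
        }
    }

    IviApplication {
        onIviSurfaceCreated: (iviSurface) => {
            // iviSurface.iviId matches the client's QT_IVI_SURFACE_ID
            chrome.createObject(screenArea, { "shellSurface": iviSurface })
        }
    }

    Component {
        id: chrome
        ShellSurfaceItem {
            onSurfaceDestroyed: destroy()
        }
    }
}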

We made use of a few more features that are new in 6.2:

  • Qt Multimedia, for sound effects
  • 3D particles, for the particle weapon, and some bubbles and sparks inside the station
  • morphing, for one of the virtual screens to "unfold" as it animates into view

I have little experience writing interactive 3D applications so far, but I found that Qt Quick 3D is easy enough to get started with if you already know your way around Qt Quick and QML. So if you ever wanted to build something like that for fun, but thought the learning curve would take too long, you may be pleasantly surprised.
