How to Use UIViewRepresentable in SwiftUI

This article shows how to integrate UIKit components in SwiftUI using UIViewRepresentable, explaining how it works and how to use it, so developers can embed custom UIViews in their SwiftUI interfaces.


I like SwiftUI. It is a great addition to the Swift language. But I keep hearing developers say it isn’t quite ready for prime time, and I have to agree somewhat. With that said, I had this nagging feeling there was maybe more I could do with UIViewRepresentable on that front.


In this short article, I am going to try to do just that. I am going to implement a few UIKit gestures in SwiftUI through UIViewRepresentable. It should also be a great reference/template for those wanting to add more of UIKit’s extensive library of buttons, knobs, and hooks — so to speak — that is missing in SwiftUI.


Let’s get started. UIKit has seven gestures in all. SwiftUI has five. Arguably, the most important ones are tap, long press, rotate, drag, and magnification (you can customize magnification and sort of fabricate the others, but I don’t want to rewrite the iOS library). If we’ve already got it in UIKit, let’s use it. UIKit has tap, pinch, rotate, pan, swipe, screen edge pan, and long press. The implementation of some of these is quite subtle.


Under UIKit, swipe understands which way you are holding your device and gets it right, so a left swipe is always a swipe to the left. Try to implement it with drag. You can, but you need much more code to take into account the way the device is being held. This is a rather glaring example of a full implementation in UIKit vs. a partial one in SwiftUI.


Anyway, let’s get to it. We start with a struct, of course, and the two required methods you need with UIViewRepresentable, makeUIView(context:) and updateUIView(_:context:):

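The article's original code blocks did not survive the scrape; a minimal sketch of that starting point might look like this (the view name v matches the article's text, while the struct name SwipeView and everything else are assumptions):

```swift
import SwiftUI
import UIKit

// Minimal UIViewRepresentable skeleton. "v" is the view the article
// refers to; the struct name SwipeView is an assumption.
struct SwipeView: UIViewRepresentable {
    let v = UIView(frame: .zero)

    func makeUIView(context: Context) -> UIView {
        // Gesture recognizers get attached here.
        return v
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Refresh the view here if SwiftUI state changes.
    }
}
```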

Within the first method (makeUIView), we’re going to define our gestures. Within the second (updateUIView), we’re going to refresh them if need be. We already defined a view that I called v.


Next, we need to define the method we will be calling to pass the gesture data back to SwiftUI:

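That method lives on a Coordinator, the object UIViewRepresentable gives you for exactly this kind of UIKit-to-SwiftUI callback. A sketch of the plumbing, with the class body left to be filled in by the target methods (names here are assumptions):

```swift
extension SwipeView {
    // SwiftUI calls this once and keeps the coordinator alive
    // for the lifetime of the representable.
    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator: NSObject {
        // @objc target methods for the UIKit gestures go here.
    }
}
```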

OK, we’re all set. Now, all we need to do is add some gestures and that ubiquitous framework called Combine to glue it all together. Within makeUIView, we can define a couple of swipe gestures:

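With the code stripped from this copy of the article, here is one plausible reconstruction of those two recognizers inside makeUIView (the coordinator method names swipedLeft/swipedRight are assumptions):

```swift
// Inside makeUIView(context:):
let leftSwipe = UISwipeGestureRecognizer(
    target: context.coordinator,
    action: #selector(Coordinator.swipedLeft))
leftSwipe.direction = .left

let rightSwipe = UISwipeGestureRecognizer(
    target: context.coordinator,
    action: #selector(Coordinator.swipedRight))
rightSwipe.direction = .right
```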

Which we then add to the view in the same method, of course:


```swift
v.addGestureRecognizer(leftSwipe)
v.addGestureRecognizer(rightSwipe)
```

And the target functions for the Coordinator() class I defined earlier on:

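Those targets are plain @objc methods on the coordinator. A hedged sketch, relaying each swipe through a Combine PassthroughSubject (the subject and its name are assumptions, not from the original article):

```swift
import Combine
import UIKit

// Hypothetical channel for passing gesture events back to SwiftUI.
let swipePublisher = PassthroughSubject<String, Never>()

class Coordinator: NSObject {
    @objc func swipedLeft(_ sender: UISwipeGestureRecognizer) {
        swipePublisher.send("left")
    }

    @objc func swipedRight(_ sender: UISwipeGestureRecognizer) {
        swipePublisher.send("right")
    }
}
```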

And finally, the SwiftUI code to bring the whole thing together:

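A sketch of that SwiftUI side, assuming a SwipeView representable and a Combine publisher named swipePublisher (both names are assumptions):

```swift
import SwiftUI

// Hypothetical SwiftUI view consuming the gesture events.
struct ContentView: View {
    @State private var lastSwipe = "none"

    var body: some View {
        SwipeView()
            .onReceive(swipePublisher) { direction in
                lastSwipe = direction
            }
            .overlay(Text("Last swipe: \(lastSwipe)"))
    }
}
```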

And there you have it: a quick implementation guide to getting UIKit talking directly to SwiftUI. Here is a gist bringing it all together. There is a tad more code, but nothing significant:


Conclusion

If you enjoyed reading this and want to know more, I need to point you in the direction of my colleague Anupam Chugh’s excellent article on the same subject. It is an advanced version of this piece.


Translated from: https://medium.com/better-programming/how-to-use-uiviewrepresentable-in-swiftui-1b9a0a7c1358
