MIND & MIND-SSC: Contrast- and Modality-invariant Image Similarity for Multimodal Image Registration

👉 The OBELISK approach applies deformable convolutions in deep learning, solving 3D multi-organ segmentation with fewer layers1


👉 Contrast- and modality-invariant image similarity

  • The modality independent neighbourhood descriptor (MIND) is a multi-dimensional local image descriptor that enables multi-modal registration. It has also been shown to improve accuracy and robustness when registering scans of the same modality. Each MIND descriptor is computed from patch distances within the local neighbourhood of a single scan. MIND representations are compared via the sum of squared/absolute differences of their entries.

  • The self-similarity context (SSC) is an improvement over MIND that redefines the neighbourhood layout to increase the robustness of matching. It also comes with an efficient quantisation scheme that allows pair-wise distances to be computed using the Hamming weight. Matlab code is available to extract MIND/SSC descriptors for 3D volumes and to calculate a distance image; derivatives can be estimated using finite differences.
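As a rough illustration of how a MIND descriptor can be computed and compared, here is a minimal 2D NumPy sketch. It uses a box filter for patch aggregation (the paper uses Gaussian weighting) and a 4-neighbourhood; `mind_2d` and its parameters are illustrative names and choices, not the authors' code.

```python
import numpy as np

def _box_mean(a, radius=1):
    """Mean over a (2*radius+1)^2 patch; the paper uses Gaussian weighting."""
    acc = np.zeros_like(a)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(a, (dy, dx), axis=(0, 1))
    return acc / (2 * radius + 1) ** 2

def mind_2d(img, offsets=((0, 1), (0, -1), (1, 0), (-1, 0))):
    """MIND sketch: exp(-D(x, x+r) / V(x)) for each neighbourhood offset r,
    where D is a patch-based squared distance and V a local variance
    estimate (here: the mean patch distance over the neighbourhood)."""
    img = np.asarray(img, dtype=np.float64)
    dists = np.stack([_box_mean((img - np.roll(img, r, axis=(0, 1))) ** 2)
                      for r in offsets])             # (n_offsets, H, W)
    variance = dists.mean(axis=0) + 1e-8             # avoid division by zero
    mind = np.exp(-dists / variance)
    return mind / mind.max(axis=0, keepdims=True)    # normalise max entry to 1

def mind_distance(img_a, img_b):
    """Point-wise comparison: mean absolute difference of descriptor entries."""
    return np.mean(np.abs(mind_2d(img_a) - mind_2d(img_b)), axis=0)
```

Because each descriptor only encodes within-image patch distances, `mind_distance` can compare images of entirely different contrasts or modalities.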

MIND: Modality Independent Neighbourhood Descriptor for Multi-Modal Deformable Registration

Heinrich, M., M. Jenkinson, et al. “MIND: Modality Independent Neighbourhood Descriptor for Multi-Modal Deformable Registration.” Medical Image Analysis 16.7 (2012): 1423–35.

Abstract

Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor builds on the concept of image self-similarity and should, in principle, be applicable to the registration of arbitrary modalities.2

Image-guided interventions often rely on deformable multi-modal registration to align pre-treatment and intra-operative scans. Automated image registration for this task has a number of requirements: a similarity metric that is robust to scans of different modalities with different noise distributions and contrast, an efficient optimisation of the cost function to enable fast registration for this time-sensitive application, and an insensitivity to the choice of registration parameters to avoid delays in practical clinical use.3

In this work, we build upon the concept of structural image representation for multi-modal similarity. Discriminative descriptors are densely extracted for the multi-modal scans based on the “self-similarity context” (SSC). An efficient quantised representation is derived that enables very fast computation of point-wise distances between descriptors. A symmetric multi-scale discrete optimisation with diffusion regularisation is used to find smooth transformations. The method is evaluated for the registration of 3D ultrasound and MRI brain scans for neurosurgery; it demonstrates a significantly reduced registration error (on average 2.1 mm) compared to commonly used similarity metrics, with computation times of less than 30 seconds per 3D registration.3
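The quantisation idea can be illustrated with a thermometer (unary) code: each descriptor entry is quantised to a few levels and encoded as that many one-bits, so the Hamming weight of the XOR of two packed descriptors equals their summed L1 distance. This is a sketch of the principle under an assumed bit budget (`levels=4`), not the paper's exact bit layout.

```python
def pack_thermometer(desc, levels=4):
    """Pack descriptor entries (values in [0, 1]) into one integer.
    Entry i is quantised to q in {0..levels} and stored as q one-bits
    in its own field of `levels` bits (a unary/thermometer code)."""
    code = 0
    for i, v in enumerate(desc):
        q = min(levels, max(0, round(v * levels)))
        code |= ((1 << q) - 1) << (i * levels)
    return code

def hamming_distance(a, b):
    """Popcount of XOR: with thermometer codes this equals the summed
    absolute difference of the quantised descriptor entries."""
    return bin(a ^ b).count("1")
```

A single XOR plus popcount per voxel pair is what makes dense point-wise distance evaluation fast enough for intra-operative use.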

Introduction

Deformable multi-modal registration plays an important role in image-guided interventions, where scans are often acquired using different modalities, e.g. to propagate segmentation information for image-guided radiotherapy. The alignment of multi-modal scans is difficult because there can be a large amount of motion between scans, the intra-operative scan is often of lower quality than diagnostic scans, and no functional relationship exists between intensities across modalities.

Methods

SSC is estimated from patch-based self-similarities in a similar way as e.g. LSS or MIND2, but rather than extracting a representation of local shape or geometry, it aims to capture the context around the voxel of interest.
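The redefined layout can be sketched as follows: self-similarities are computed between pairs of neighbouring patches that are adjacent to each other, so no patch is centred on the voxel of interest itself, which reduces sensitivity to noise at that voxel. This is a minimal sketch of that layout under my reading of the paper; `ssc_pairs` is an illustrative helper, not the authors' code.

```python
from itertools import combinations

def unit_neighbours(dim):
    """The 2*dim unit offsets around a voxel (4-neighbourhood in 2D,
    6-neighbourhood in 3D)."""
    neigh = []
    for axis in range(dim):
        for sign in (1, -1):
            off = [0] * dim
            off[axis] = sign
            neigh.append(tuple(off))
    return neigh

def ssc_pairs(dim):
    """Offset pairs for the self-similarity context: all pairs of unit
    neighbours at Euclidean distance sqrt(2) from each other.  Opposite
    neighbours (distance 2) and the central patch are never used."""
    return [(a, b) for a, b in combinations(unit_neighbours(dim), 2)
            if sum((x - y) ** 2 for x, y in zip(a, b)) == 2]
```

In 3D this yields the 12 edges of the octahedron spanned by the 6-neighbourhood, compared with MIND's 6 centre-to-neighbour patch distances.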

Conclusion

A novel image descriptor, the “self-similarity context” (SSC), is presented, with low sensitivity to image noise, together with a quantisation scheme for fast distance evaluations using Hamming weights. When used in a discrete optimisation framework with stochastic sampling of the similarity term, a computation time of less than half a minute is achieved on a standard CPU, with state-of-the-art registration accuracy and an average error of 2.12 mm: a statistically significant improvement over previous self-similarity-based metrics [4] and mutual information. In the future, we plan a GPU implementation (which could lead to real-time performance), further comparisons to other structural image representations (e.g. gradient orientation [2]), and the application of our approach to further image-guided interventions.


  1. OBELISK-Net: Fewer Layers to Solve 3D Multi-Organ Segmentation with Sparse Deformable Convolutions, MIDL 2018 best paper ↩︎

  2. Heinrich, M., M. Jenkinson, et al. “MIND: Modality Independent Neighbourhood Descriptor for Multi-Modal Deformable Registration.” Medical Image Analysis 16.7 (2012): 1423–35. ↩︎ ↩︎

  3. Heinrich, M., M. Jenkinson, Bartlomiej W. Papiez, M. Brady, and J. A. Schnabel. “Towards Realtime Multimodal Fusion for Image-Guided Interventions Using Self-Similarities.” Medical Image Computing and Computer-Assisted Intervention (MICCAI) 16 Pt 1 (2013): 187–94. ↩︎ ↩︎
