Local Intensity Order Transformation for Robust Curvilinear Object Segmentation (LIOT)


Article DOI: 10.1109/TIP.2022.3155954
Article Code: github


1 Main Contribution

  1. The paper aims to improve generalizability by introducing a novel local intensity order transformation (LIOT).
  2. Curvilinear object segmentation faces some particular challenges [7]: 1) thin, long, and tortuous shapes; 2) inadequate contrast between curvilinear structures and the surrounding background; 3) uneven background illumination; 4) varied image appearances. Addressing these is the goal of the paper.

2 Datasets

Cross-dataset evaluation includes DRIVE, STARE, CHASEDB1, and CrackTree:
DRIVE: 40 color retinal images of 565×584 pixels, split into 20 training images and 20 test images.
STARE: 20 color retinal images of 700×605 pixels, divided into 10 training and 10 test images.
CHASEDB1: 28 color retinal images of 999×960 pixels, split into 20 training images and 8 test images.
CrackTree: 206 pavement images of 800×600 pixels with different kinds of cracks having curvilinear structure, split into 160 training and 46 test images.

3 Data Augmentation

  1. transforms.RandomRotation(180)
  2. transforms.RandomHorizontalFlip()
  3. transforms.RandomVerticalFlip()
  4. shear from -0.1 to 0.1
  5. randomly shifted from -0.1 to 0.1
  6. randomly zoomed from 0.8 to 1.2
  7. transforms.RandomCrop(128)

4 Initialization

Loss: cross-entropy and topological loss (Topo)
Evaluation protocol: TPs, TNs, FPs, and FNs are used to compute the F1 score, along with Acc (accuracy), Se (sensitivity), Sp (specificity), and the area under the receiver operating characteristic curve (AUC).
Batch size: 32
Epochs: 1000
Optimizer: Adam with a learning rate of 0.001
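The evaluation protocol above (all metrics except AUC) can be sketched as a small helper computed from the confusion counts; this is a generic implementation, not the authors' code:

```python
# Minimal sketch of the evaluation metrics: Acc, Se, Sp and F1
# computed from TP/TN/FP/FN counts of binary segmentation masks.
import numpy as np

def segmentation_metrics(pred, gt):
    """pred, gt: binary masks; returns (accuracy, sensitivity, specificity, f1)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn) if (tp + fn) else 0.0          # sensitivity (recall)
    sp = tn / (tn + fp) if (tn + fp) else 0.0          # specificity
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return acc, se, sp, f1
```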

5 Comparison Range

5.1 Apply the model trained on the CrackTree dataset to segment retinal blood vessels:

(figure omitted)
and the precise results:
(figure omitted)

5.2 Apply the model trained on the retinal datasets to evaluate on CrackTree:

(figure omitted)
and the precise results:

(figure omitted)

Generalization to images with different curvilinear objects: apply the model trained on the DRIVE dataset to images with different types of curvilinear structures:
(figure omitted)

6 Conclusion

  1. The authors propose LIOT, which converts a grayscale image into a novel representation that is invariant to increasing (monotonic) contrast changes.
  2. Extensive cross-dataset experiments on three widely adopted retinal blood vessel segmentation datasets and the CrackTree dataset demonstrate that LIOT improves on the classical segmentation pipeline that operates directly on the original image.
    LIOT forms a simple yet effective way to improve the generalization performance of different models.

7 Core Method

Core method:
(figure omitted)

First, they convert the given image into a grayscale one. Second, they rely on the local intensity order to compute four directional binary codes, forming a 4-channel image that captures the curvilinear structure characteristics. Last, they feed this contrast-invariant 4-channel image into a segmentation network.
Fig. 4 shows the creation of the 4-channel image: for each pixel, among the 8 pixels along each of the four directions, a neighbor greater than or equal to the center pixel is set to 0, and a neighbor less than the center pixel is set to 1. Converting each resulting 8-bit binary code to decimal then yields four channels with values in 0-255.
