The CT workflow and the windowing of CT images (reposted and reorganized)

According to the definition in [4]:

  CT is characterized by its ability to resolve slight differences in the density of human tissue; the standard it uses is based on the linear attenuation coefficient (μ value) of each tissue for X-rays.

Reference [5] gives the formula for the CT value (measured in Hounsfield units, HU):

$$\mathrm{CT\ value} = 1000 \times \frac{\mu_{\text{tissue}} - \mu_{\text{water}}}{\mu_{\text{water}}}\ \ \mathrm{HU}$$

So the grayscale values stored in the competition's DICOM files need to be converted into CT values.

The physical meaning of the CT value is how strongly the X-ray beam is attenuated as it passes through the body.
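
As a quick sanity check of the formula above (taking μ of air as approximately 0): water comes out at 0 HU by definition, and air at about −1000 HU:

$$\mathrm{CT}_{\text{water}} = 1000 \times \frac{\mu_{\text{water}} - \mu_{\text{water}}}{\mu_{\text{water}}} = 0\ \mathrm{HU}, \qquad \mathrm{CT}_{\text{air}} \approx 1000 \times \frac{0 - \mu_{\text{water}}}{\mu_{\text{water}}} = -1000\ \mathrm{HU}$$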

-----------------------------------------------------------------------------------------------------------------------------

The definition of windowing of CT images given in [1]:

Windowing, also known as grey-level mapping, contrast stretching, histogram modification or contrast enhancement, is the process in which the CT image greyscale component of an image is manipulated via the CT numbers; doing this will change the appearance of the picture to highlight particular structures. The brightness of the image is adjusted via the window level. The contrast is adjusted via the window width.

In plain terms: map the stored grayscale values back to radiodensity (CT numbers) and then increase the contrast.
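
Concretely, if the window level (center) is C and the window width is W, only the HU range

$$\left[\, C - \tfrac{W}{2},\ \ C + \tfrac{W}{2} \,\right]$$

is spread across the grey scale; values below the lower bound are shown as black and values above the upper bound as white (this is exactly what img_min and img_max do in the code further down).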

 

According to [2], if a DICOM file provides the Rescale Intercept and Rescale Slope, the conversion from the stored grayscale values to CT values is taken to be linear.

Otherwise the relationship is non-linear. Whether it is linear can be determined by reading the DICOM tags (they appear in the output of the code below); a small sketch follows.
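
A minimal sketch of that linear conversion with pydicom (the file name is only a placeholder, and it assumes the file carries both Rescale tags, as the files in this competition do):

import pydicom

ds = pydicom.dcmread("some_slice.dcm")                         # placeholder path
hu = ds.pixel_array * ds.RescaleSlope + ds.RescaleIntercept    # stored value -> CT value (HU)
print(ds.RescaleSlope, ds.RescaleIntercept)                    # e.g. 1 and -1024 for the slice shown below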

 

Next, some notes on the filtering (clipping) step:

 

That is, we use the HU (CT) value to keep only the tissue we care about and flatten everything else to black or white (img_min or img_max in the code below); the goal is to increase contrast. A small sketch of this clipping follows.
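
A minimal sketch of the clipping on a few toy HU values, using the typical head window (center 40, width 80) quoted in the kernel text below; np.clip does the same job as the two masked assignments in window_image:

import numpy as np

window_center, window_width = 40, 80
img_min = window_center - window_width // 2    # 0 HU
img_max = window_center + window_width // 2    # 80 HU
hu = np.array([-1000.0, 20.0, 60.0, 300.0])    # roughly: air, brain, fresh blood, bone
print(np.clip(hu, img_min, img_max))           # [ 0. 20. 60. 80.]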

 

The code below comes from [3]; it performs the windowing operation and the filtering:

 

# Viewing Dicom CT images with correct windowing

CT image values correspond to [Hounsfield units](https://en.wikipedia.org/wiki/Hounsfield_scale) (HU).  
But the values stored in CT Dicoms are not Hounsfield units, 
but instead a scaled version.  
To extract the Hounsfield units we need to apply a linear transformation, 
which can be deduced from the Dicom tags.

Once we have transformed the pixel values to Hounsfield units, we can apply a *windowing*: 
the usual values for a head CT are a center of 40 and a width of 80, but we can also extract this from the Dicom headers.

from glob import glob
import os
import pandas as pd
import numpy as np
import re
from PIL import Image
import seaborn as sns
from random import randrange

# checking the input files
print(os.listdir("../input/rsna-intracranial-hemorrhage-detection/"))

## Load Data
# reading all dcm files into train and test
train = sorted(glob("../input/rsna-intracranial-hemorrhage-detection/stage_1_train_images/*.dcm"))
test = sorted(glob("../input/rsna-intracranial-hemorrhage-detection/stage_1_test_images/*.dcm"))
print("train files: ", len(train))
print("test files: ", len(test))

pd.reset_option('max_colwidth')
train_df = pd.read_csv('../input/rsna-intracranial-hemorrhage-detection/stage_1_train.csv')


def window_image(img, window_center, window_width, intercept, slope):

    img = (img * slope + intercept)  # convert the stored grayscale values to CT values (HU)
    img_min = window_center - window_width // 2  # the integer division is evaluated before the subtraction
    img_max = window_center + window_width // 2
    # the two lines below act as a filter: clip everything outside the window
    img[img < img_min] = img_min
    img[img > img_max] = img_max
    return img
# img here is a 2-D array (one CT slice)





def get_first_of_dicom_field_as_int(x):
    # get x[0] as an int if x is a 'pydicom.multival.MultiValue', otherwise get int(x)
    if isinstance(x, pydicom.multival.MultiValue):  # multi-valued field: take the first value
        return int(x[0])
    else:
        return int(x)

def get_windowing(data):
    # the (group, element) pairs below identify fields in the DICOM header; they are not coordinates
    dicom_fields = [data[('0028','1050')].value, # window center
                    data[('0028','1051')].value, # window width
                    data[('0028','1052')].value, # intercept
                    data[('0028','1053')].value] # slope
    # a pair such as (0028,1053) is called a Tag in the DICOM literature
    return [get_first_of_dicom_field_as_int(x) for x in dicom_fields]
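
For reference, the same four fields can also be read through pydicom's keyword attributes instead of the raw (group, element) tags; a minimal alternative sketch (get_windowing_by_keyword is a hypothetical helper, and it assumes all four attributes are present in the header, as they are in this dataset):

from pydicom.multival import MultiValue

def get_windowing_by_keyword(ds):
    def first_int(value):
        # Window Center / Window Width may be multi-valued; take the first entry
        return int(value[0]) if isinstance(value, MultiValue) else int(value)
    return [first_int(ds.WindowCenter),
            first_int(ds.WindowWidth),
            int(ds.RescaleIntercept),
            int(ds.RescaleSlope)]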



import pydicom  # library for reading DICOM files
import matplotlib.pyplot as plt
print(len(train))
case = 199
# train is a list of file paths; pick one slice
data = pydicom.dcmread(train[case])  # read one specific DICOM file

print("-------------------------------------1--------------------------------")
window_center, window_width, intercept, slope = get_windowing(data)  # read the windowing parameters from the DICOM header


# display the image
img = data.pixel_array  # raw stored pixel values

img = window_image(img, window_center, window_width, intercept, slope)  # windowing: convert to HU and clip
plt.imshow(img, cmap=plt.cm.bone)
plt.grid(False)
print("---------------------------------------2------------------------------")
print(data)

Running the code above prints the DICOM header information and shows a preview of one head CT slice:

---------------------------------------2------------------------------
(0008, 0018) SOP Instance UID                    UI: ID_00145de6f
(0008, 0060) Modality                            CS: 'CT'
(0010, 0020) Patient ID                          LO: 'ID_e58c888d'
(0020, 000d) Study Instance UID                  UI: ID_c69165e24e
(0020, 000e) Series Instance UID                 UI: ID_49ed8e3bef
(0020, 0010) Study ID                            SH: ''
(0020, 0032) Image Position (Patient)            DS: ['-125.000000', '-124.697983', '223.549103']
(0020, 0037) Image Orientation (Patient)         DS: ['1.000000', '0.000000', '0.000000', '0.000000', '0.927184', '-0.374607']
(0028, 0002) Samples per Pixel                   US: 1
(0028, 0004) Photometric Interpretation          CS: 'MONOCHROME2'
(0028, 0010) Rows                                US: 512
(0028, 0011) Columns                             US: 512
(0028, 0030) Pixel Spacing                       DS: ['0.488281', '0.488281']
(0028, 0100) Bits Allocated                      US: 16
(0028, 0101) Bits Stored                         US: 16
(0028, 0102) High Bit                            US: 15
(0028, 0103) Pixel Representation                US: 1
(0028, 1050) Window Center                       DS: "30"
(0028, 1051) Window Width                        DS: "80"
(0028, 1052) Rescale Intercept                   DS: "-1024"
(0028, 1053) Rescale Slope                       DS: "1"
(7fe0, 0010) Pixel Data                          OW: Array of 524288 elements

## Visualize Sample Images

TRAIN_IMG_PATH = "../input/rsna-intracranial-hemorrhage-detection/stage_1_train_images/"
TEST_IMG_PATH = "../input/rsna-intracranial-hemorrhage-detection/stage_1_test_images/"

def view_images(images, title = '', aug = None):
    width = 5
    height = 2
    fig, axs = plt.subplots(height, width, figsize=(15,5))
    
    for im in range(0, height * width):
        data = pydicom.dcmread(os.path.join(TRAIN_IMG_PATH, images[im] + '.dcm'))
        image = data.pixel_array
        window_center, window_width, intercept, slope = get_windowing(data)  # read the windowing parameters from the header
        image_windowed = window_image(image, window_center, window_width, intercept, slope)


        i = im // width
        j = im % width
        axs[i,j].imshow(image_windowed, cmap=plt.cm.bone) 
        axs[i,j].axis('off')
        
    plt.suptitle(title)
    plt.show()
train_df['image'] = train_df['ID'].str.slice(stop=12)  # the first part of the ID is the image ID
train_df['diagnosis'] = train_df['ID'].str.slice(start=13)  # the last part of the ID is the hemorrhage subtype
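
Purely for illustration, here is what that slicing does to one row ID (built from the SOP Instance UID in the header dump above plus one subtype; the real rows in stage_1_train.csv follow the same ID_xxx_subtype pattern):

row_id = "ID_00145de6f_epidural"   # hypothetical label-row ID
print(row_id[:12])                 # ID_00145de6f  -> image ID
print(row_id[13:])                 # epidural      -> hemorrhage subtype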

print("------------------------------------从下面开始每个类型的图片都看十张------------------------------------------------------")

view_images(train_df[(train_df['diagnosis'] == 'epidural') & (train_df['Label'] == 1)][:10].image.values, title = 'Images with epidural')

 

The calls below are analogous to the one above; they just browse images of the remaining subtypes (output omitted):

view_images(train_df[(train_df['diagnosis'] == 'intraparenchymal') & (train_df['Label'] == 1)][:10].image.values, title = 'Images with intraparenchymal')

view_images(train_df[(train_df['diagnosis'] == 'intraventricular')& (train_df['Label'] == 1)][:10].image.values, title = 'Images with intraventricular')

view_images(train_df[(train_df['diagnosis'] == 'subarachnoid')& (train_df['Label'] == 1)][:10].image.values, title = 'Images with subarachnoid')

view_images(train_df[(train_df['diagnosis'] == 'subdural') & (train_df['Label'] == 1)][:10].image.values, title = 'Images with subdural')

 

Reference:

[1]https://radiopaedia.org/articles/windowing-ct

[2]https://stackoverflow.com/questions/10193971/rescale-slope-and-rescale-intercept

[3]https://www.kaggle.com/omission/eda-view-dicom-images-with-correct-windowing

[4]http://www.xctmr.com/baike/ct/d054abd3bf1a96110b623e4cc2b58575.html

[5]https://baike.baidu.com/item/CT值单位/15635363
