- Author: Zongwei Zhou
- Weibo: @MrGiovanni
- Email: zongweiz@usc.edu
Introduction
In the past two days, I used the Keras library to build a deep neural network for segmenting non-small cell lung cancer (NSCLC) in 18F-FDG PET/CT images. This network yielded an IoU ("intersection over union") of 35.39% on the test images, and can be a good starting point for further, more serious approaches. The architecture was inspired by U-Net: Convolutional Networks for Biomedical Image Segmentation, and the code is based on the Deep Learning Tutorial for the Kaggle Ultrasound Nerve Segmentation competition, using Keras. So basically, my job was to apply U-Net to our 18F-FDG PET/CT datasets and evaluate the performance of this architecture.
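For reference, the IoU score mentioned above can be computed directly from binary masks. Below is a minimal NumPy sketch; the function name `iou` and the toy masks are my own illustration, not code from the original tutorial.

```python
import numpy as np

def iou(y_true, y_pred):
    """Intersection over union between two binary segmentation masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    # If both masks are empty, define IoU as 1 (perfect agreement).
    return intersection / union if union > 0 else 1.0

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4)); a[:2, :2] = 1
b = np.zeros((4, 4)); b[:2, 1:3] = 1
print(iou(a, b))  # 2 overlapping pixels / 6 total pixels
```

An IoU of 35.39% means that, averaged over the test set, just over a third of the union of predicted and ground-truth tumor pixels was shared by both masks.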
Key Words: Fully convolutional networks; Non-small cell lung cancer; PET/CT images; Segmentation
Overview
Datasets
Our dataset consists of 1383 three-dimensional PET/CT images, each 101 × 101 × 101 voxels. Each sample has been finely annotated, including the tumor edge, with tumor voxels labeled 'true' and background voxels labeled 'false'.
In the figure, the first row shows CT images, the second row shows PET images, and the final row shows the corresponding segmentations.
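The data layout described above can be sketched as paired CT/PET volumes with a boolean tumor mask. The shapes follow the dataset description; the random values and the cubic tumor region are placeholders of my own, not real data.

```python
import numpy as np

# Hypothetical arrays matching the dataset description:
# paired 3-D CT and PET volumes of 101x101x101 voxels plus a binary mask.
ct = np.random.rand(101, 101, 101).astype(np.float32)
pet = np.random.rand(101, 101, 101).astype(np.float32)

# Tumor voxels are True, background voxels are False.
mask = np.zeros((101, 101, 101), dtype=bool)
mask[40:60, 40:60, 40:60] = True  # placeholder centered "tumor"

assert ct.shape == pet.shape == mask.shape == (101, 101, 101)
```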
Pre-processing
Since each tumor in the dataset is located at the center of the image, I cut nine 80 × 80 pixel patches from each slice of the three-dimensional image so that the tumor appears in different positions within a patch. I chose 11 slices from each of the axial, coronal, and sagittal planes of every three-dimensional image.
Therefore, one three-dimensional sample is augmented into 9 × 3 × 11 = 297 different finely labeled two-dimensional samples. Applying this strategy to both the CT and PET images yields two-channel input patches, i.e., a CT channel and a PET channel.
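The patch-cutting step above can be sketched as follows. The original post only states that nine 80 × 80 patches are cut per slice; the particular 3 × 3 grid of offsets (top/middle/bottom crossed with left/center/right) is my assumption about how to shift the centered tumor into nine different positions, and `nine_patches` is a name of my own.

```python
import numpy as np

def nine_patches(slice2d, patch=80):
    """Crop nine patch x patch windows from one 2-D slice.

    Offsets (0, mid, max) along each axis form a 3x3 grid of crops,
    so a tumor centered in the slice lands in a different position
    within each patch. The exact offsets are an assumption.
    """
    h, w = slice2d.shape
    offs_y = (0, (h - patch) // 2, h - patch)
    offs_x = (0, (w - patch) // 2, w - patch)
    return [slice2d[y:y + patch, x:x + patch]
            for y in offs_y for x in offs_x]

ct_slice = np.random.rand(101, 101)   # placeholder CT slice
pet_slice = np.random.rand(101, 101)  # placeholder PET slice

patches = nine_patches(ct_slice)
print(len(patches), patches[0].shape)  # 9 patches of shape (80, 80)

# Stack matching CT and PET crops into two-channel inputs.
x = np.stack([np.stack(pair, axis=-1)
              for pair in zip(nine_patches(ct_slice),
                              nine_patches(pet_slice))])
print(x.shape)  # (9, 80, 80, 2): nine two-channel patches
```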
Last but not least, don't skip data centering and normalization with z-scores; it really helps.
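The z-score step amounts to subtracting the mean and dividing by the standard deviation. A minimal sketch, assuming the statistics are computed over the training images (the function name `zscore` is mine):

```python
import numpy as np

def zscore(images, eps=1e-8):
    """Center and scale a batch of images to zero mean, unit variance."""
    mean = images.mean()
    std = images.std()
    return (images - mean) / (std + eps), mean, std

x = np.random.rand(4, 80, 80).astype(np.float32)  # placeholder batch
x_norm, mean, std = zscore(x)
```

In practice, the mean and standard deviation saved from the training set should also be applied to the test images, so that both sets are normalized consistently.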