An artificial intelligence platform for the multihospital collaborative management of congenital cataracts
I. Method
(1) Data collection and labelling
------Raw data: 410 ocular images and 476 images of normal eyes from children. No special requirements for image resolution.
Labels: independently labelled by two ophthalmologists; disagreements were adjudicated by a third ophthalmologist. (None of them had access to the deep-learning predictions.)
------For the identification network
There were no agreed-upon gold-standard criteria for this stage's data preparation.
------For the evaluation network
1. Opacity area was labelled "extensive" when the opacity covered more than 50% of the pupil; otherwise it was labelled "limited".
2. Opacity density was labelled "dense" when the opacity fully disrupted vision; otherwise it was labelled "non-dense".
3. Opacity location was labelled "central" when the opacity fully covered the visual axis area; otherwise it was labelled "peripheral".
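The three grading rules above can be sketched as a small labelling helper. This is hypothetical illustration code, not the authors' implementation; the input readings (pupil coverage fraction and two boolean assessments) are assumed to come from an ophthalmologist's examination of the image.

```python
def grade_opacity(pupil_coverage, fully_disrupts_vision, covers_visual_axis):
    """Map raw clinical readings to the three evaluation-network labels.

    pupil_coverage: fraction of the pupil covered by the opacity (0.0-1.0)
    fully_disrupts_vision: whether the opacity fully disrupts vision
    covers_visual_axis: whether the opacity fully covers the visual axis area
    """
    area = "extensive" if pupil_coverage > 0.5 else "limited"
    density = "dense" if fully_disrupts_vision else "non-dense"
    location = "central" if covers_visual_axis else "peripheral"
    return area, density, location
```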
------For the strategist network
1. Patients showing a dense opacity fully covering the visual axis area were considered to require immediate surgery.
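This treatment rule can be written as a one-line decision, assuming the evaluation labels ("dense", "central") as inputs. Again a hypothetical sketch, not the strategist network itself, which learns this mapping from labelled data.

```python
def needs_immediate_surgery(density, location):
    """Surgery-triage rule: a dense opacity fully covering the visual
    axis area calls for immediate surgery; all other combinations are
    routed to follow-up instead."""
    return density == "dense" and location == "central"
```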
------For preprocessing
Each image was automatically cropped to 256 × 256 pixels.
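The exact cropping algorithm is not specified in these notes; a common approach, shown here as an assumption, is to center-crop to a square and then subsample down to 256 × 256.

```python
import numpy as np

def to_256(img):
    """Center-crop an H x W (x C) image to a square, then subsample it
    to 256 x 256. A stand-in for the paper's automatic cropping step;
    the authors' actual method may differ."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = img[top:top + s, left:left + s]
    # Nearest-neighbour subsampling: pick 256 evenly spaced rows/columns.
    idx = np.arange(256) * s // 256
    return square[idx][:, idx]
```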
(2) Deep-learning convolutional neural network
--------ILSVRC2014
This model contains five convolutional layers and three fully connected layers; the first seven layers were used to extract 4,096 features from the input data, and the last layer was a Softmax layer.
Specifically, several techniques, including convolution25, overlapping pooling26–28, local response normalization26, a rectified linear unit26 and stochastic gradient descent29, were also integrated into this algorithm. Dropout methods were used in the fully connected layers to reduce the effect of overfitting30. Moreover, data augmentation was conducted by extracting random 224 × 224 patches (and their horizontal reflections) from the 256 × 256 images for training27,31,32. A summary of the detailed parameters of each layer is presented in Supplementary Table 2. All code employed in our study was executed in the Caffe (Convolutional Architecture for Fast Feature Embedding) framework with Ubuntu 14.04 64-bit + CUDA (Compute Unified Device Architecture) 6.533. The detailed methodology of the deep convolutional neural network is provided in the Supplementary Information.
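The patch-based augmentation described above can be sketched in a few lines of NumPy. This is an illustrative re-implementation of the stated recipe (random 224 × 224 crops plus horizontal reflections of 256 × 256 inputs), not the Caffe code the authors actually ran.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch(img256):
    """Training-time augmentation: extract a random 224 x 224 patch
    from a 256 x 256 image and flip it horizontally half the time."""
    top = rng.integers(0, 256 - 224 + 1)    # random vertical offset, 0..32
    left = rng.integers(0, 256 - 224 + 1)   # random horizontal offset, 0..32
    patch = img256[top:top + 224, left:left + 224]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]              # horizontal reflection
    return patch
```

At 32 × 32 possible offsets with two flip states, each 256 × 256 image yields 2,048 distinct training patches, which is the point of this augmentation scheme.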
(3) In silico test and validation independence
K-fold cross-validation.
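K-fold cross-validation partitions the dataset into K disjoint folds, trains on K−1 of them, and validates on the held-out fold, rotating so every image is used for validation exactly once. A minimal index-level sketch (the fold count used in the study is not stated in these notes):

```python
def kfold_indices(n, k):
    """Yield (train, val) index lists for k-fold cross-validation
    over n samples. Folds are disjoint and jointly cover 0..n-1."""
    folds = [list(range(i, n, k)) for i in range(k)]  # round-robin split
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```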
(4) Multihospital clinical trial
(5) Website-based study
(6) "Finding a needle in a haystack" test
(7) Comparative test
II. Results
(1) In silico test
---- identification network: 98.87% accuracy
---- evaluation network: opacity area 93.98%; opacity density 95.06%; opacity location 95.12%
---- strategist network: 97.56%
(2) Multihospital clinical trial
---- identification network: 98.25% accuracy
---- evaluation network: opacity area 100%; opacity density 92.86%; opacity location 100%
---- strategist network: 92.86%
(3) Website-based study
---- identification network: 92.45% accuracy
---- evaluation network: opacity area 94.87%; opacity density 84.62%; opacity location 94.87%
---- strategist network: 89.74%
(4) Comparative test