Introduction
Congenital heart disease (CHD) is the most common congenital anomaly in children, with a reported incidence of approximately 0.69%–0.93%, accounting for one-third of all major congenital anomalies.1 With surgical intervention, mortality among children with CHD can be reduced to as low as 3%.2 Early detection of CHD is therefore essential.
At present, echocardiographic interpretation relies mainly on manual reading, but the training cycle for echocardiographers is long, and substantial experience is required for accurate diagnosis. A European Union study recommended that beginners in echocardiography perform >350 examinations to achieve basic practical competence.3 This situation has led researchers to focus on the application of artificial intelligence.
Deep learning has revolutionized image classification and recognition because of its high accuracy; in some cases, its performance is comparable to or exceeds that of medical experts. Deep learning has many practical applications in medicine, particularly in medical imaging, including image registration/localization, cell structure detection, and disease diagnosis/prognosis.4 Ultrasound images, however, have blurred tissue boundaries and considerable noise, and their selection and interpretation are subjective, making them among the most difficult medical images to process and interpret. As a result, the application of deep learning in echocardiography lags behind its application in other imaging modalities, and its use in CHD began even later.
Unlike computed tomography (CT), magnetic resonance imaging (MRI), and other imaging modalities, echocardiographic view selection is highly subjective; the selection and standardization of views are therefore crucial for the application of deep learning. In 2013, a US government-led project called fetal intelligent navigation echocardiography (FINE) succeeded in automating the selection of nine standard fetal echocardiographic views, including the four-chamber, five-chamber, and left ventricular outflow tract views. In a subsequent study, 54 fetuses between 18.6 and 37.2 weeks of gestation were examined, demonstrating that the FINE system can automatically select the nine standard views in both normal fetuses and fetuses with CHD and can better visualize the abnormal features of complex CHD.5 Studies conducted at several centers over the past 10 years have shown that, with traditional screening methods, the detection rate and specificity of fetal CHD on echocardiography are <50%.6–8 A study at the University of California, San Francisco, used deep learning to train, test, and validate models on more than one million echocardiographic images from 4108 fetuses (0.9% with CHD); the model performed exceptionally well, with 95% sensitivity (95% confidence interval (CI)=84% to 99%) and 96% specificity (95% CI=95% to 97%) in distinguishing normal from abnormal hearts.9 Other studies have shown that the accuracy of CHD diagnosis can be significantly improved by using a convolutional neural network (CNN) to preprocess and segment fetal echocardiographic images before applying a deep learning model for image diagnosis.10
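To make the two-stage idea concrete, the following is a minimal sketch, assuming PyTorch; the network sizes and the names FetalHeartSegmenter and DiagnosisCNN are illustrative assumptions, not the architecture of the cited work.10 It shows how a segmentation network can mask the heart region before a diagnostic CNN classifies the image.

```python
# Illustrative two-stage pipeline: a segmentation network masks the fetal heart
# region, and a CNN classifier then labels the preprocessed image.
# Layer sizes and class labels are assumptions for illustration only.
import torch
import torch.nn as nn

class FetalHeartSegmenter(nn.Module):
    """Tiny encoder-decoder that predicts a foreground mask for the heart region."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # mask in [0, 1], same H x W as input

class DiagnosisCNN(nn.Module):
    """Small classifier applied to the masked image (normal vs. CHD)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

segmenter, classifier = FetalHeartSegmenter(), DiagnosisCNN()
frame = torch.randn(8, 1, 224, 224)   # batch of grayscale echo frames
masked = frame * segmenter(frame)     # suppress background before diagnosis
logits = classifier(masked)           # (8, 2) scores: normal vs. CHD
```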
Relatively few studies have applied deep learning to pediatric echocardiography, and progress in this field has been slower than in fetal echocardiography. Diller et al demonstrated that deep learning can effectively remove artifacts and noise from both normal and CHD images.11 After deep learning processing of pediatric echocardiographic images, the mean individual leaflet segmentation accuracy in children with hypoplastic left heart syndrome approached the limit of human visual resolution.12 One research center developed an unsupervised deep learning model, DGACNN, tailored to the characteristics of echocardiographic images; its ability to automatically label and screen fetal echocardiographic images (four-chamber views only) surpassed that of mid-level specialists in the field.13 Capital Medical University and Carnegie Mellon University designed a single-branch, five-channel CNN that classifies negative samples, ventricular septal defect (VSD), and atrial septal defect (ASD) with an accuracy of over 90%.14 Most deep learning studies in pediatric echocardiography are basic technical studies with few clinical applications, and the major clinical studies tend to perform classification with five (or four) views or even a single view.
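As a rough illustration of the five-channel, single-branch design mentioned above, the sketch below (assuming PyTorch; the layer sizes and class ordering are assumptions, not the published configuration14) stacks five echocardiographic views as the input channels of a single CNN that outputs normal/VSD/ASD scores.

```python
# Sketch of a single-branch CNN taking five views stacked as five input channels
# and producing three classes; layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class FiveChannelCNN(nn.Module):
    def __init__(self, num_classes=3):  # 0 = normal, 1 = VSD, 2 = ASD (assumed labels)
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, views):        # views: (batch, 5, H, W), one view per channel
        return self.head(self.backbone(views).flatten(1))

model = FiveChannelCNN()
batch = torch.randn(4, 5, 224, 224)  # 4 studies, 5 views each, resized to 224 x 224
print(model(batch).shape)            # torch.Size([4, 3])
```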
This paper applies a new view-selection method, the seven-view approach,15 together with a deep learning model to analyze pediatric echocardiographic images, with the aim of enabling large-scale detection of pediatric CHD.
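For illustration only, one generic way to combine seven standard views in a deep learning classifier is sketched below, assuming PyTorch, a shared lightweight encoder, and a binary normal-versus-CHD output; the encoder design, feature size, and fusion strategy are illustrative assumptions rather than the model evaluated in this study.

```python
# Illustrative seven-view fusion: each standard view passes through a shared CNN
# encoder, and the per-view features are concatenated before classification.
# A generic sketch only; sizes and fusion strategy are assumptions.
import torch
import torch.nn as nn

class SevenViewClassifier(nn.Module):
    def __init__(self, feat_dim=64, num_classes=2):  # 0 = normal, 1 = CHD (assumed)
        super().__init__()
        self.encoder = nn.Sequential(                 # weights shared across the 7 views
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fusion = nn.Linear(7 * feat_dim, num_classes)

    def forward(self, views):                         # views: (batch, 7, 1, H, W)
        feats = [self.encoder(views[:, i]).flatten(1) for i in range(7)]
        return self.fusion(torch.cat(feats, dim=1))   # fuse all views, then classify

model = SevenViewClassifier()
study = torch.randn(2, 7, 1, 224, 224)  # 2 patients x 7 grayscale standard views
print(model(study).shape)               # torch.Size([2, 2])
```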