dc.description.abstract | A crucial step in the diagnosis and treatment of ocular disorders such as diabetic
retinopathy, uveitis, cataracts, and macular degeneration is the analysis of lesions that appear
in fundus images of the retina and the precise localization of the optic disc and macula.
However, much current practice still depends heavily on manual assessment by
subject-matter specialists. This work is tedious, repetitive, and time-consuming,
which reduces both the accuracy and the efficiency of the evaluation. It has therefore become
increasingly necessary to build an automated computer-aided system that helps ophthalmologists
provide more accurate and trustworthy recommendations. Despite significant
advances in the deep learning field, particularly with neural networks,
current methods still suffer from several issues, such as limited performance and
missed small lesions. To tackle these problems, this study applies deep learning to two tasks:
segmenting retinal lesions and detecting the macula and optic disc. Locating these
two key eye structures and the damaged regions in fundus
images aids in identifying retinopathy-related symptoms. The models are built on
Faster R-CNN and Mask R-CNN, with a backbone combining ResNet-50 or ResNet-101 with a
Region Proposal Network (RPN). For macula and
optic disc detection, the Faster R-CNN model achieved 93% mAP@50-95 on the DRIVE
dataset, compared with only 66.4% mAP on the IDRiD test set. For retinal lesion
segmentation, the Mask R-CNN achieved 22.56% mAP on the IDRiD dataset.
In general, detection of the macula and optic disc performed better on fundus images of
healthy eyes than on those of diabetic retinopathy patients. For lesion segmentation,
large plaque-like regions in the fundus images could not be segmented completely. | en_US |
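For concreteness, the sketch below shows one plausible way to instantiate the two detectors described in the abstract using PyTorch/torchvision; it is an illustrative assumption, not the authors' implementation. The class counts are inferred from the abstract: two foreground classes (macula, optic disc) for detection, and the four IDRiD lesion types for segmentation. A ResNet-101 backbone could be swapped in via torchvision's resnet_fpn_backbone helper.

    import torch
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn,
        maskrcnn_resnet50_fpn,
    )

    # Faster R-CNN: ResNet-50 backbone with a Region Proposal Network (RPN).
    # num_classes includes background, so 2 structures -> 3 classes (assumed).
    detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=3)

    # Mask R-CNN adds a per-ROI mask head for instance segmentation of lesions;
    # 4 assumed lesion classes (microaneurysms, haemorrhages, hard and soft
    # exudates) plus background.
    segmenter = maskrcnn_resnet50_fpn(weights=None, num_classes=5)

    detector.eval()
    with torch.no_grad():
        # A fundus image as a 3xHxW float tensor scaled to [0, 1].
        image = torch.rand(3, 512, 512)
        output = detector([image])[0]
        # Keys: 'boxes' (Nx4), 'labels' (N,), 'scores' (N,); Mask R-CNN
        # additionally returns per-instance 'masks'.
        print(output["boxes"].shape, output["scores"].shape)

Scores such as mAP@50-95 average the per-class average precision over IoU thresholds from 0.50 to 0.95 and can, for example, be computed with torchmetrics' MeanAveragePrecision.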