Addressing Catastrophic Forgetting in Neural Networks: A Review and Comparative Analysis of Algorithms for Crack Detection
DOI:
https://doi.org/10.56042/jsir.v84i10.18406

Keywords:
ResNet18, surface cracks, continuous learning, catastrophic forgetting, image classification

Abstract
Deep neural networks have demonstrated impressive results across a wide range of application areas, but they suffer from catastrophic forgetting: when learning new tasks, they tend to forget what was previously learned. Catastrophic forgetting in neural networks has garnered significant attention in recent years, yet the problem is too complex to draw general conclusions independent of the network architectures and tasks being analyzed. This paper addresses catastrophic forgetting in neural networks applied to crack detection. Given the limited research in this area, this study highlights the existing research gap by reviewing recent works on catastrophic forgetting that explore various learning methods to mitigate forgetting, and a comprehensive survey of these techniques is conducted. Three representative algorithms, one from each major approach, are evaluated on crack detection datasets. Finally, a comparative analysis establishes a foundation for a novel algorithm that integrates the strengths of these different approaches.