Prognosis of unsupported railway sleepers and diagnosis of their severity using machine learning
Detection of unsupported sleepers
From the development of the machine learning models for detecting unsupported sleepers, the accuracy of each model is shown in Table 4.
From the table, it can be seen that every model performs well: with appropriate data processing, each achieves an accuracy above 90%. CNN performs best; applied with FFT and with padding, it attains the first and second highest accuracies of all models. RNN and ResNet exceed 90% accuracy only when specific data processing is used and drop to about 80% with the other technique. FCN requires no data processing and achieves an accuracy of 95%. In descending order of accuracy, the models rank CNN, RNN, FCN and ResNet, so ResNet's complicated architecture does not guarantee the highest accuracy. Moreover, ResNet has the longest training time (46 s/epoch), followed by RNN (6 s/epoch), FCN (2 s/epoch) and CNN (1 s/epoch). It can be concluded that CNN is the best model for detecting unsupported sleepers in this study because it offers the highest accuracy (100%) with the lowest training time. At the same time, simple data processing such as padding is sufficient to provide a good result, and is preferable to FFT for the CNN model because FFT requires longer data processing. The test data accuracy of each model is shown in Fig. 8.
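The two data-processing options compared above can be sketched as follows. This is an illustrative numpy sketch, not the study's actual pipeline: the synthetic signals and the `zero_pad`/`fft_magnitude` helpers are assumptions, while the fixed length of 1181 samples matches the padded series length reported in this study.

```python
import numpy as np

# Illustrative sketch of the two data-processing options: zero-padding
# variable-length signals to a fixed length, and taking an FFT magnitude
# spectrum as a frequency-domain input. Signals here are synthetic.
PADDED_LEN = 1181  # padded series length reported in this study

def zero_pad(signal: np.ndarray, target_len: int = PADDED_LEN) -> np.ndarray:
    """Right-pad a 1-D signal with zeros to a fixed length."""
    out = np.zeros(target_len)
    out[:len(signal)] = signal[:target_len]
    return out

def fft_magnitude(signal: np.ndarray) -> np.ndarray:
    """One-sided FFT magnitude spectrum (length n // 2 + 1)."""
    return np.abs(np.fft.rfft(signal))

# Two example signals of unequal length, as measured series would be
sig_a = np.sin(np.linspace(0, 20 * np.pi, 900))
sig_b = np.sin(np.linspace(0, 20 * np.pi, 1100))

padded = np.stack([zero_pad(sig_a), zero_pad(sig_b)])  # shape (2, 1181)
spectra = [fft_magnitude(s) for s in (sig_a, sig_b)]
```

Padding only aligns lengths, which is why it counts as "easy" processing, whereas the FFT route transforms every series into the frequency domain first.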
The tuned hyperparameters of the CNN model with padding data are shown in Table 5.
Compared with the previous study, Sysyn et al.1 applied statistical methods and KNN, which achieved a best detection accuracy of 65%. The accuracy of the CNN model developed in this study is significantly higher, suggesting that the machine learning techniques used here are more powerful than those used previously. Moreover, CNN has proven suitable for pattern recognition.
Severity classification of unsupported sleepers
For the severity classification of unsupported sleepers, the performance of each model is shown in Table 6.
From the table, it can be seen that the CNN model still performs best, with an accuracy of 92.89%, and provides good results with both data processing techniques. However, the accuracies of RNN and ResNet drop dramatically when inappropriate data processing is used; for example, the accuracy of the RNN model with padding drops to 33.89%. The best accuracy RNN can achieve is 71.56%, the lowest of all models. This is due to a known limitation of RNNs: the vanishing gradient problem arises when the time series is too long, and the padded data in this study contain 1181 points, which likely causes the issue. Therefore, RNN does not work well. ResNet performs well, with an accuracy of 92.42%, close to CNN, while the accuracy of FCN is reasonably good. As for training time, CNN is the fastest model at 1 s/epoch, followed by FCN (2 s/epoch), RNN (5 s/epoch) and ResNet (32 s/epoch). It can therefore be concluded that CNN is the best model for the severity classification of unsupported sleepers in this study, and that CNN and ResNet are suitable for padding data while RNN is suitable for FFT data. The test data accuracy of each model is shown in Fig. 9.
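The vanishing-gradient effect for long sequences can be illustrated numerically. Backpropagation through a vanilla RNN multiplies the error by one recurrent Jacobian per time step, so over a 1181-step series the gradient norm shrinks geometrically whenever the recurrent weights are contractive. The toy sketch below is an assumption-laden linearized illustration (random weights, nonlinearity omitted), not the RNN trained in this study.

```python
import numpy as np

# Toy illustration of the vanishing-gradient problem over 1181 time
# steps (the padded series length in this study). Backprop through a
# vanilla RNN applies the transposed recurrent weight matrix once per
# step; with a contractive matrix the gradient norm decays geometrically.
rng = np.random.default_rng(0)
hidden = 16
# Random recurrent weights with spectral radius well below 1
W = rng.normal(scale=0.3 / np.sqrt(hidden), size=(hidden, hidden))

grad = np.ones(hidden)      # gradient arriving at the final time step
norms = []
for _ in range(1181):       # propagate back through every time step
    grad = W.T @ grad       # linearized backprop (tanh' <= 1 omitted)
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after 10 steps:   {norms[9]:.3e}")
print(f"gradient norm after 1181 steps: {norms[-1]:.3e}")
```

By the final step the gradient has effectively vanished, so the earliest portions of the series contribute nothing to learning, which is consistent with RNN underperforming on the long padded data.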
The confusion matrix of the CNN model is shown in Table 7.
To clearly demonstrate the performance of each model, the precision and recall are shown in Table 8.
From the table, the precisions and recalls of CNN and ResNet are quite good, with values above 80%, while RNN is the worst: some of its precision and recall values are below 60%, which is unusable in realistic situations. CNN appears better than ResNet because all of its values are above 90%; although ResNet exceeds CNN on some values, its Class 2 values are around 80%. Therefore, using the CNN model is preferable.
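Per-class precision and recall follow directly from a confusion matrix like Table 7: precision divides each diagonal entry by its column sum (predicted class), recall by its row sum (true class). The matrix below is illustrative only, not the values from Table 7.

```python
import numpy as np

# Deriving per-class precision and recall from a confusion matrix.
# Rows = true class, columns = predicted class. The counts below are
# ILLUSTRATIVE only, not the study's Table 7 values.
cm = np.array([
    [50,  2,  1],
    [ 3, 45,  4],
    [ 0,  5, 48],
])

precision = cm.diagonal() / cm.sum(axis=0)  # correct / predicted-as-class
recall = cm.diagonal() / cm.sum(axis=1)     # correct / actually-in-class

for k, (p, r) in enumerate(zip(precision, recall)):
    print(f"class {k}: precision={p:.2%} recall={r:.2%}")
```

Reading both metrics per class, as in Table 8, exposes weaknesses that a single overall accuracy hides, e.g. one class being systematically over-predicted.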
After hyperparameter tuning, the tuned hyperparameters of the CNN model are shown in Table 9.