Machine Vision Inspection Systems, Machine Learning-Based Approaches. Group of authors


In the proposed capsule layers-based Siamese network model, the accuracy of within-language classification depends on two factors: the number of characters in the alphabet and the visual difference between characters. Some alphabets contain visually similar characters; in such cases, classification accuracy is low even when the alphabet is small. The system architecture could therefore be improved by representing image features through transfer learning: features would be extracted from each character image using a pre-trained deep neural network, and those feature representations would then be passed to the Siamese network.
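The proposed improvement can be sketched as a two-stage pipeline: a fixed feature extractor followed by a Siamese similarity head. The code below is a minimal illustration only, assuming 28×28 character images; the pre-trained network is stood in for by a fixed random projection, and the similarity head (absolute feature difference fed to a logistic unit) is one common Siamese design, not the chapter's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained deep network: a fixed projection from raw
# pixels to a 128-d feature vector. In practice these features would come
# from the penultimate layer of a network trained on a large dataset.
W_feat = rng.standard_normal((28 * 28, 128)) / np.sqrt(28 * 28)

def extract_features(image):
    """Map a 28x28 character image to a 128-d feature vector."""
    return np.tanh(image.reshape(-1) @ W_feat)

# Siamese head: score two feature vectors via their element-wise
# absolute difference followed by a logistic unit.
w_head = rng.standard_normal(128) * 0.1

def siamese_score(img_a, img_b):
    """Return a (0, 1) score; higher means 'more likely same class'."""
    d = np.abs(extract_features(img_a) - extract_features(img_b))
    return 1.0 / (1.0 + np.exp(d @ w_head))
```

Because the head sees only the feature difference, identical inputs always score exactly 0.5 before training; the logistic weights would be learned from same/different character pairs.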

      2.5.3 Conclusion

Character recognition is a critical module in applications such as document scanning and optical character recognition. With the emergence of deep learning techniques, languages like English have achieved high classification accuracies. However, the applicability of those deep learning methods is constrained in low-resource languages because well-developed datasets are lacking. This study has focused on implementing a viable method for classifying handwritten characters in low-resource languages. Owing to the restricted size of the available dataset, the problem is modelled as a one-shot learning problem and solved using Siamese networks built on Capsule networks. The Siamese network is the de facto architecture for one-shot learning, but for image-related tasks it still requires a large amount of training data. A Capsule layers-based Siamese network, which mitigates the information loss of convolutional neural networks, made it possible to train a Siamese network with fewer parameters and less data while achieving performance on par with a convolutional network. The model was tested on the Omniglot dataset, achieving 30–85% accuracy across different alphabets, and showed a classification accuracy of 74.5% on the MNIST dataset.
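The Omniglot accuracies above are typically measured with N-way one-shot trials: the model sees one exemplar per candidate class and must match a query character to the right class. The sketch below illustrates only the decision rule, using plain L1 distance on toy embedding vectors as the similarity score; the chapter's model learns this metric with a Capsule-based Siamese network, so the distance function here is a stand-in.

```python
import numpy as np

def one_shot_classify(query_feat, support_feats):
    """One N-way one-shot trial: return the index of the support class
    whose single exemplar is closest to the query under L1 distance."""
    dists = [float(np.abs(query_feat - s).sum()) for s in support_feats]
    return int(np.argmin(dists))

# 3-way trial with toy 4-d embeddings, one exemplar per class.
support = [np.zeros(4), np.ones(4), np.full(4, 5.0)]
query = np.array([0.9, 1.1, 1.0, 0.8])   # closest to the all-ones class
print(one_shot_classify(query, support))  # prints 1
```

Reported one-shot accuracy is then the fraction of such trials in which the predicted index matches the query's true class.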

      1. Vorugunti, C.S., Gorthi, R.K.S., Pulabaigari, V., Online Signature Verification by Few-Shot Separable Convolution Based Deep Learning. International Conference on Document Analysis and Recognition (ICDAR), IEEE, pp. 1125–1130, 2019.

      2. Wu, Y., Liu, H., Fu, Y., Low-shot face recognition with hybrid classifiers, in: IEEE International Conference on Computer Vision Workshops, pp. 1933–1939, 2017.

      3. Gui, L.-Y., Wang, Y.-X., Ramanan, D., Moura, J.M., Few-shot human motion prediction via meta-learning, in: European Conference on Computer Vision (ECCV), pp. 432–450, 2018.

      4. Fei-Fei, L., A Bayesian approach to unsupervised one-shot learning of object categories, in: 9th IEEE International Conference on Computer Vision, IEEE, pp. 1134–1141, 2003.

      5. Arica, N. and Yarman-Vural, F.T., Optical character recognition for cursive handwriting. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 801–813, 2002.

      6. Lake, B., Salakhutdinov, R., Gross, J., Tenenbaum, J., One shot learning of simple visual concepts, in: Annual Meeting of the Cognitive Science Society, 2011.

      7. Koch, G., Zemel, R., Salakhutdinov, R., Siamese neural networks for one-shot image recognition, in: 32nd International Conference on Machine Learning, Lille, France, pp. 1–8, 2015.

      8. Chopra, S., Hadsell, R., Lecun, Y., Learning a similarity metric discriminatively, with application to face verification, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, pp. 539–546, 2005.

      9. Sabour, S., Frosst, N., Hinton, G.E., Dynamic routing between capsules, in: 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, pp. 3856–3866, 2017.

      10. Lehtonen, E. and Laiho, M., CNN using memristors for neighborhood connections, in: 12th International Workshop on Cellular Nanoscale Networks and their Applications, IEEE, pp. 1–4, 2010.

      11. Hinton, G.E., Krizhevsky, A., Wang, S.D., Transforming auto-encoders, in: International Conference on Artificial Neural Networks, Springer, pp. 44–51, 2011.

      12. Sethy, A., Patra, P.K., Nayak, S.R., Offline Handwritten Numeral Recognition Using Convolution Neural Network, in: Machine Vision Inspection Systems: Image Processing, Concepts, Methodologies and Applications, M. Malarvel, S.R. Nayak, S.N. Panda, P.K. Pattnaik, N. Muangnak (Eds.), ch. 9, pp. 197–212, John Wiley & Sons Inc, New York, United States, 2020.

      13. Chen, Y., Jiang, H., Li, C., Jia, X., Ghamisi, P., Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens., 54, 6232–6251, 2016.

      14. Garcia-Gasulla, D., Pares, F., Vilalta, A., Moreno, J., Ayguadé, E., Labarta, J., Cortés, U., Suzumura, T., On the behavior of convolutional nets for feature extraction. J. Artif. Intell. Res., 61, 563–592, 2018.

      15. Liu, B., Yu, X., Zhang, P., Yu, A., Fu, Q., Wei, X., Supervised deep feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens., 56, 1909–1921, 2017.

      16. Sethy, A., Patra, P.K., Nayak, S.R., Nayak, D.R., A Gabor Wavelet Based Approach for Off-Line Recognition of ODIA Handwritten Numerals. Int. J. Eng. Technol., 7, 253–257, 2018.

      17. Krüger, V. and Sommer, G., Gabor wavelet networks for object representation, in: Multi-Image Analysis, R. Klette, G. Gimel’farb, T. Huang (Eds.), LNCS, 2032, pp. 115–128, Springer, Berlin, Heidelberg, 2001.

      18. Kaushal, A. and Raina, J., Face detection using neural network & Gabor wavelet transform. Int. J. Comput. Sci. Technol., 1, 58–63, 2010.

      19. Nayak, S.R., Mishra, J., Palai, G., Analysing roughness of surface through fractal dimension: A review. Image Vision Comput., 89, 21–34, 2019.

      20. Nayak, S.R., Mishra, J., Palai, G., A modified approach to estimate fractal dimension of gray scale images. Optik, 161, 136–145, 2018.

      21. Nayak, S., Khandual, A., Mishra, J., Ground truth study on fractal dimension of color images of similar texture. J. Text. Inst., 109, 1159–1167, 2018.

      22. Sethy, A. and Patra, P.K., Off-line Odia Handwritten Character Recognition: an Axis Constellation Model Based Research. Int. J. Innov. Technol. Explor. Eng., 8, 788–793, 2019.

      23. Zhang, J., Zhu, Y., Du, J., Dai, L., Radical analysis network for zero-shot learning in printed Chinese character recognition, in: IEEE International Conference on Multimedia and Expo, IEEE, pp. 1–6, 2018.

      24. Bertinetto, L., Henriques, J.F., Valmadre, J., Torr, P., Vedaldi, A., Learning feed-forward one-shot learners, in: 30th International Conference on Neural Information Processing Systems, ACM, pp. 523–531, 2016.

      25.