EFFICIENCY OF SUPPLEMENTARY OUTPUTS IN SIAMESE NEURAL NETWORKS

Artem Melnychenko
Kostyantyn Zdor

Abstract

In image analysis, effectively handling large image datasets is a complex challenge that typically requires deep neural networks. Siamese neural networks, known for their twin-branch structure, offer an effective solution to image comparison tasks, especially when the volume of training data is limited. This research explores whether such models can be enhanced by adding supplementary outputs that improve classification and help the network extract specific data features. The article presents the results of two experiments on the Fashion MNIST and PlantVillage datasets, incorporating supplementary classification, regression, and combined output strategies under various loss-weight configurations. The results show that for the simpler dataset, introducing supplementary outputs decreases model accuracy. Conversely, for the more complex dataset, the best accuracy was achieved by integrating regression and classification supplementary outputs simultaneously. It should be noted that the observed increase in accuracy is relatively marginal and does not guarantee a substantial impact on the overall accuracy of the model.
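For illustration, a minimal TensorFlow/Keras sketch of such a model is given below. This is not the authors' exact architecture: the encoder layers, the 10-class supplementary head, the scalar regression head, and the loss weights are illustrative assumptions, sized for 28×28 grayscale inputs such as Fashion MNIST.

import tensorflow as tf
from tensorflow.keras import layers, Model

def make_encoder(embedding_dim=64):
    # Shared twin branch: maps a 28x28 grayscale image to an embedding.
    inp = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    emb = layers.Dense(embedding_dim, activation="relu")(x)
    return Model(inp, emb, name="encoder")

encoder = make_encoder()
img_a = layers.Input(shape=(28, 28, 1), name="image_a")
img_b = layers.Input(shape=(28, 28, 1), name="image_b")
emb_a, emb_b = encoder(img_a), encoder(img_b)

# Main output: Euclidean distance between the two embeddings.
distance = layers.Lambda(
    lambda t: tf.sqrt(
        tf.reduce_sum(tf.square(t[0] - t[1]), axis=-1, keepdims=True) + 1e-9),
    name="distance")([emb_a, emb_b])

# Supplementary outputs attached to one branch, sharing the encoder:
# a classification head and a scalar regression head (both assumptions).
aux_class = layers.Dense(10, activation="softmax", name="aux_class")(emb_a)
aux_reg = layers.Dense(1, name="aux_reg")(emb_a)

model = Model(inputs=[img_a, img_b], outputs=[distance, aux_class, aux_reg])

def contrastive_loss(y_true, d, margin=1.0):
    # Pulls similar pairs (y_true = 1) together; pushes dissimilar
    # pairs apart until they exceed the margin.
    y_true = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y_true * tf.square(d) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0)))

# Weighted multi-output loss: the weights below are placeholders for
# the loss-weight configurations compared in the article.
model.compile(
    optimizer="adam",
    loss={"distance": contrastive_loss,
          "aux_class": "sparse_categorical_crossentropy",
          "aux_reg": "mse"},
    loss_weights={"distance": 1.0, "aux_class": 0.5, "aux_reg": 0.5})

During training, each pair of images is labeled with a similarity flag for the distance output plus a class label and a regression target for the supplementary heads; varying the loss_weights dictionary reproduces the kind of weighting study described in the abstract.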

Article Details

How to Cite
Melnychenko, A., & Zdor, K. (2023). EFFICIENCY OF SUPPLEMENTARY OUTPUTS IN SIAMESE NEURAL NETWORKS. Advanced Information Systems, 7(3), 49–53. https://doi.org/10.20998/2522-9052.2023.3.07
Section
Intelligent information systems
Author Biographies

Artem Melnychenko, National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute», Kyiv

PhD student, Assistant at the Department of Digital Technologies in Energy

Kostyantyn Zdor, National Technical University of Ukraine «Igor Sikorsky Kyiv Polytechnic Institute», Kyiv

PhD student, Assistant at the Department of Digital Technologies in Energy
