HYBRID RECOMMENDER FOR VIRTUAL ART COMPOSITIONS WITH VIDEO SENTIMENTS ANALYSIS

Heorhii Kuchuk
Andrii Kuliahin

Abstract

Topicality. Recent studies confirm a growing trend toward incorporating emotional feedback and sentiment analysis to improve the performance of recommender systems, which ensures deeper personalization and up-to-date emotional relevance of the user experience. The subject of study in the article is a hybrid recommender system with a video sentiment analysis component. The purpose of the article is to investigate how implementing a video sentiment analysis component can improve the effectiveness of a hybrid recommender system for virtual art compositions. Methods used: matrix factorization, collaborative filtering, content-based filtering, knowledge-based recommendation, and video sentiment analysis. The following results were obtained. A new model was created that combines a hybrid recommender system with a video sentiment analysis component. The mean absolute error of the system was significantly reduced. The system now reacts to emotional feedback during user interaction with virtual art compositions. Conclusion. The system can therefore not only select the most suitable virtual art compositions but also create adaptive, dynamic content, increasing user satisfaction and improving the immersive aspects of the system. A promising direction for further research is adding a generative neural network subsystem that creates new virtual art compositions based on the outputs of the developed recommender system.
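To make the combination described in the abstract concrete, the minimal Python sketch below shows one plausible way a matrix-factorization (collaborative) score and a content-based score could be blended and then modulated by a valence value produced by video sentiment analysis. This is an illustrative assumption, not the authors' published implementation: the blending weight alpha, the linear sentiment-to-weight mapping, and all data and dimensions are hypothetical.

# Hedged sketch: hybrid recommendation score modulated by video sentiment.
# All names, weights, and sizes are illustrative assumptions, not the
# authors' method.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 5, 8, 3               # toy sizes; k is the latent dimension
U = rng.normal(size=(n_users, k))           # user latent factors (matrix factorization)
V = rng.normal(size=(n_items, k))           # item latent factors
content_sim = rng.uniform(size=(n_users, n_items))  # stand-in content-based scores

def sentiment_weight(valence: float) -> float:
    """Map video-sentiment valence in [-1, 1] to a multiplicative
    weight in [0.5, 1.5]; this linear mapping is an assumption."""
    return 1.0 + 0.5 * valence

def hybrid_score(u: int, i: int, valence: float, alpha: float = 0.7) -> float:
    """Blend collaborative and content-based estimates, then modulate
    the result by the emotion inferred from the user's video feedback."""
    cf = U[u] @ V[i]                         # collaborative-filtering estimate
    cb = content_sim[u, i]                   # content-based estimate
    return (alpha * cf + (1 - alpha) * cb) * sentiment_weight(valence)

# Rank compositions for user 0 whose video feedback was mildly positive.
scores = [hybrid_score(0, i, valence=0.4) for i in range(n_items)]
print(np.argsort(scores)[::-1])             # item indices, best first

In a deployed system the valence would come from a trained video-sentiment model applied to the user's webcam feed rather than being set by hand, and the blending weight would be tuned against a metric such as mean absolute error.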

Article Details

How to Cite
Kuchuk, H., & Kuliahin, A. (2024). HYBRID RECOMMENDER FOR VIRTUAL ART COMPOSITIONS WITH VIDEO SENTIMENTS ANALYSIS. Advanced Information Systems, 8(1), 70–79. https://doi.org/10.20998/2522-9052.2024.1.09
Section
Intelligent information systems
Author Biographies

Heorhii Kuchuk, National Technical University "Kharkiv Polytechnic Institute", Kharkiv

Doctor of Technical Sciences, Professor, Professor of Computer Engineering and Programming Department

Andrii Kuliahin, National Aerospace University “Kharkiv Aviation Institute”, Kharkiv

PhD Student, Department of Computer Systems, Networks and Cybersecurity
