POSSIBLE EVALUATION OF THE CORRECTNESS OF EXPLANATIONS TO THE END USER IN AN ARTIFICIAL INTELLIGENCE SYSTEM

Serhii Chalyi
Volodymyr Leshchynskyi

Abstract

The subject of this paper is the process of evaluating explanations in an artificial intelligence system. The aim is to develop a method for forming a possibility-based evaluation of the correctness of explanations for the end user of an artificial intelligence system. Evaluating the correctness of explanations increases the user's confidence in the decision produced by the system and thus creates the conditions for its effective use. Objectives: to structure explanations according to user needs; to develop an indicator of explanation correctness based on possibility theory; to develop a method for evaluating explanation correctness using the possibilistic approach. The approaches used are: a set-theoretic approach to describe the elements of explanations in an artificial intelligence system; a possibilistic approach to represent the criterion for evaluating explanations in an intelligent system; and a probabilistic approach to describe the probabilistic component of the evaluation. The following results were obtained. Explanations are structured according to user needs. It is shown that explanations of the decision process are used by specialists who develop intelligent systems; such an explanation presents a complete or partial sequence of steps by which the system derives a decision. End users mostly rely on explanations of the result presented by an intelligent system; such explanations typically define the relationship between the values of the input variables and the resulting prediction. The article discusses the requirements for evaluating explanations with respect to the needs of internal and external users of an artificial intelligence system. It is shown that fidelity evaluation of explanations is appropriate for specialists who develop such systems, whereas correctness evaluation is appropriate for external users. An explanation correctness measure based on the necessity indicator of possibility theory is proposed, and a method for evaluating explanation correctness is developed. Conclusions. The scientific novelty of the obtained results is as follows. A possibility-based method for assessing the correctness of an explanation in an artificial intelligence system using the possibility and necessity indicators is proposed. The method computes the necessity of using the target value of an input variable in the explanation, taking into account the possibility of choosing alternative values of the variables; this makes it possible to verify that the target value of the input variable is indeed necessary for the explanation and, therefore, that the explanation is correct.
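The core of the proposed evaluation is the pair of possibility and necessity indicators from possibility theory: the target value of an input variable is necessary for the explanation when every alternative value is only weakly possible. The abstract does not give an implementation, so the following Python sketch only illustrates this idea; the possibility degrees, the `min_necessity` threshold, and the function names are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): possibility-based
# correctness check for one input variable used in an explanation.
# Assumption: each candidate value v of the variable has a possibility
# degree pi(v) in [0, 1] (e.g. estimated from normalized frequencies).

def possibility(pi: dict, values: set) -> float:
    """Possibility that the variable takes a value in `values`:
    Pi(A) = max over v in A of pi(v)."""
    return max((pi[v] for v in values), default=0.0)

def necessity(pi: dict, values: set) -> float:
    """Necessity of the same event: N(A) = 1 - Pi(complement of A)."""
    complement = set(pi) - values
    return 1.0 - possibility(pi, complement)

def explanation_is_correct(pi: dict, target_value, min_necessity: float = 0.5) -> bool:
    """Treat the explanation built on `target_value` as correct when using
    that value is sufficiently necessary, i.e. every alternative value of
    the input variable is only weakly possible. The threshold is illustrative."""
    return necessity(pi, {target_value}) >= min_necessity

# Hypothetical possibility degrees for one input variable of an explanation.
pi = {"low": 0.2, "medium": 0.3, "high": 1.0}
print(necessity(pi, {"high"}))             # 1 - max(0.2, 0.3) = 0.7
print(explanation_is_correct(pi, "high"))  # True
```

Here N(A) = 1 - Pi(not A), so a high necessity of the target value directly reflects the low possibility of choosing any alternative value, which matches the correctness criterion described in the abstract.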

Article Details

How to Cite
Chalyi, S., & Leshchynskyi, V. (2023). POSSIBLE EVALUATION OF THE CORRECTNESS OF EXPLANATIONS TO THE END USER IN AN ARTIFICIAL INTELLIGENCE SYSTEM. Advanced Information Systems, 7(4), 75–79. https://doi.org/10.20998/2522-9052.2023.4.10
Section
Intelligent information systems
Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics, Kharkiv

Doctor of Technical Sciences, Professor, Professor of Information Control Systems Department

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics, Kharkiv

Candidate of Technical Sciences, Associate Professor, Associate Professor of Software Engineering Department
