PROBABILISTIC COUNTERFACTUAL CAUSAL MODEL FOR A SINGLE INPUT VARIABLE IN EXPLAINABILITY TASK


Serhii Chalyi
Volodymyr Leshchynskyi

Abstract

The subject of research in this article is the process of constructing explanations in intelligent systems represented as black boxes. The aim is to develop a counterfactual causal model linking the values of an input variable to the output of an artificial intelligence system, taking into account the possible alternatives for different values of the input variable as well as the probabilities of these alternatives. Such a model makes it possible to explain to the user both the actual outcome of the system's operation and the potential changes in this outcome, in line with the user's requirements, that result from changing the value of the input variable. Since the intelligent system is treated as a "black box", the causal relationship is constructed using possibility theory, which accounts for the uncertainty arising from incomplete information about the changes in the states of the intelligent system during decision making. The tasks are: to structure the properties of a counterfactual explanation in the form of a causal dependency; to formulate the task of building a potential counterfactual causal model for explanation; to develop a potential counterfactual causal model. The methods employed are the set-theoretic approach, used to describe the components of the explanation construction process in intelligent systems, and the logical approach, which provides the representation of causal dependencies between the input data and the system's decision. The following results were obtained. The counterfactual causal dependency was structured. The overall task of constructing a counterfactual causal dependency was formulated as a set of subtasks that establish connections between causes and consequences by minimizing the discrepancies in input data values and the deviations in the decisions of the intelligent system under conditions of incomplete information about the system's operation.
A potential counterfactual causal model for a single input variable was developed. Conclusions. The scientific novelty of the obtained results lies in the proposed potential counterfactual causal model for a single input variable. The model defines a set of alternative connections between the values of the input variable and the obtained result based on estimates of the possibility and necessity of using these values to obtain a decision from the intelligent system. It enables the formation of a set of dependencies that explain to the user the importance of the input data values for reaching a decision acceptable to the user.
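The ingredients described in the abstract can be sketched in code: a single input variable whose alternative values are searched for counterfactuals (values that flip the black-box outcome, ordered by minimal deviation from the actual value), each scored with the standard possibility and necessity measures of possibility theory (Dubois and Prade). This is an illustrative sketch only, not the authors' model: the toy black-box `model`, the possibility distribution `pi`, and all function names are assumptions introduced for the example.

```python
# Illustrative sketch only: toy black-box model, possibility distribution
# `pi`, and function names are assumptions, not the authors' formulation.

def possibility(event, pi):
    """Pi(A): possibility of event A = max of pi over the values in A."""
    return max((pi[v] for v in event), default=0.0)

def necessity(event, pi, domain):
    """N(A) = 1 - Pi(complement of A): certainty that A holds."""
    complement = [v for v in domain if v not in event]
    return 1.0 - possibility(complement, pi)

def counterfactuals(model, x_actual, desired, domain, pi):
    """Alternative values of a single input variable that change the
    black-box outcome to `desired`, ordered by minimal deviation from
    the actual value; each paired with its possibility degree."""
    event = [v for v in domain if v != x_actual and model(v) == desired]
    ranked = sorted(event, key=lambda v: abs(v - x_actual))
    return ([(v, pi[v]) for v in ranked],
            possibility(event, pi),
            necessity(event, pi, domain))

# Toy example: the outcome flips from "reject" to "accept" at x >= 2.
pi = {0: 0.2, 1: 0.6, 2: 1.0, 3: 0.7, 4: 0.3}
model = lambda x: "accept" if x >= 2 else "reject"
ranked, poss, nec = counterfactuals(model, 1, "accept", list(pi), pi)
```

Here `ranked` lists the counterfactual values nearest to the actual input first, while `poss` and `nec` estimate how plausible and how certain the counterfactual event "the outcome becomes acceptable" is under the assumed distribution.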

Article Details

How to Cite
Chalyi, S., & Leshchynskyi, V. (2023). PROBABILISTIC COUNTERFACTUAL CAUSAL MODEL FOR A SINGLE INPUT VARIABLE IN EXPLAINABILITY TASK. Advanced Information Systems, 7(3), 54–59. https://doi.org/10.20998/2522-9052.2023.3.08
Section
Intelligent information systems
Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics, Kharkiv

Doctor of Technical Sciences, Professor, Professor of Information Control Systems Department

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics, Kharkiv

Candidate of Technical Sciences, Associate Professor, Associate Professor of Software Engineering Department

References

Miller, T. (2019), “Explanation in artificial intelligence: Insights from the social sciences”, Artificial Intelligence, vol. 267, pp. 1–38, doi: https://doi.org/10.1016/j.artint.2018.07.007.

Bodria, F., Giannotti, F., Guidotti, R., Naretto, F., Pedreschi, D. and Rinzivillo, S. (2021), “Benchmarking and survey of explanation methods for black box models”, CoRR arXiv:2102.13076, available at: https://arxiv.org/abs/2102.13076.

Chalyi, S. and Leshchynskyi, V. (2020), “Temporal representation of causality in the construction of explanations in intelligent systems,” Advanced Information Systems, vol. 4, no. 3, pp. 113–117, doi: https://doi.org/10.20998/2522-9052.2020.3.16.

Chalyi, S. and Leshchynskyi, V. (2020), “Method of constructing explanations for recommender systems based on the temporal dynamics of user preferences”, EUREKA: Physics and Engineering, vol. 3, pp. 43–50, doi: https://doi.org/10.21303/2461-4262.2020.001228.

Gunning, D. and Aha, D. (2019), “DARPA’s Explainable Artificial Intelligence (XAI) Program”, AI Magazine, Vol. 40(2), pp. 44–58, doi: https://doi.org/10.1609/aimag.v40i2.2850.

Chalyi, S., Leshchynskyi, V. and Leshchynska, I. (2021), “Counterfactual temporal model of causal relationships for constructing explanations in intelligent systems”, Bulletin of the National Technical University "KhPI", Ser.: System analysis, control and information technology, National Technical University "KhPI", Kharkiv, no. 2(6), pp. 41–46, doi: https://doi.org/10.20998/2079-0023.2021.02.07.

Beck, S.R., Riggs, K.J. and Gorniak, S.L. (2009), “Relating developments in children’s counterfactual thinking and executive functions”, Thinking & Reasoning, vol. 15, is. 4, pp. 337–354, doi: https://doi.org/10.1080/13546780903135904.

Byrne, R.M.J. (2019), “Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning”, Kraus S. (ed), Proceedings of the twenty-eighth international joint conference on artificial intelligence, IJCAI 2019, Macao, China, August 10–16, 2019, pp 6276–6282, doi: https://doi.org/10.24963/ijcai.2019/876.

Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D. and Lee, S. (2019), “Counterfactual visual explanations”, Proceedings of the 36th international conference on machine learning, ICML 2019, 9–15 June 2019, Long Beach, California, USA, PMLR, Proceedings of machine learning research, vol. 97, pp. 2376–2384, doi: https://doi.org/10.48550/arXiv.1904.07451.

Pearl, J. (2009), “Causality: Models, Reasoning and Inference”, Econometric Theory, vol. 19, pp. 675–685, Cambridge University Press, USA, doi: https://doi.org/10.1017/S0266466603004109.

Halpern, J. Y. and Pearl, J. (2005), “Causes and explanations: A structural-model approach. Part II: Explanations”, The British Journal for the Philosophy of Science, Vol. 56 (4), pp. 889–911, doi: https://doi.org/10.48550/arXiv.cs/0208034.

Lewis, D. (2000), “Causation as influence”, Journal of Philosophy, vol. 97, no. 4 (Special Issue: Causation), pp. 182–197, available at: https://www.jstor.org/stable/2678389.

Levykin, V. and Chala, O. (2018), “Development of a method of probabilistic inference of sequences of business process activities to support business process management”, Eastern-European Journal of Enterprise Technologies, no. 5/3(95), pp. 16–24, doi: https://doi.org/10.15587/1729-4061.2018.142664.

Dubois, D. and Prade, H. (2015), “Possibility Theory and Its Applications: Where Do We Stand?”, Springer Handbook of Computational Intelligence, Springer, Berlin, Heidelberg, pp. 31–60, doi: https://doi.org/10.1007/978-3-662-43505-2_3.