TEMPORAL REPRESENTATION OF CAUSALITY IN THE CONSTRUCTION OF EXPLANATIONS IN INTELLIGENT SYSTEMS

Main Article Content

Serhii Chalyi
http://orcid.org/0000-0002-9982-9091
Volodymyr Leshchynskyi
http://orcid.org/0000-0002-8690-5702

Abstract

The subject matter of the article is the process of constructing explanations in intelligent systems. Objectives. The goal is to develop a temporal representation of causality that describes the operation of an intelligent system as part of an explanation, taking the temporal aspect into account. This, in turn, makes it possible to increase user confidence in the results of the intelligent system. Tasks: structuring causal dependencies with regard to the decision-making process in the intelligent system and its state; developing a temporal model of causality for explanations in the intelligent system. The approaches used are: descriptions of causality between the elements of the system based on causal relationships, on probabilistic dependencies, and on the physical interaction of its elements. The following results were obtained. Causal dependencies for constructing explanations were structured, distinguishing causal and probabilistic links, as well as dependencies between the state of the intelligent system and the recommendations produced by this system. A model of causal dependencies in an intelligent system is proposed for constructing explanations of the recommendations of this system. Conclusions. The scientific novelty of the results is as follows. A model of causal dependencies intended for constructing explanations in an intelligent system is proposed. Such an explanation consists of a chain of causal relationships that reflects the sequence of decision-making over time. The model covers the constraints and conditions under which the intelligent system forms its result. Constraints are represented by causal relationships between key actions and must hold for all explanations in which they are used. Conditions define probabilistic relationships between such actions in the intelligent system.
The model takes into account the influence of key parameters of the state of the intelligent system on the achievement of the result. The presented model supports explanations with varying degrees of detail, based on the temporal ordering of actions and on changes in the states of the intelligent system.
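The temporal causal chain described above can be illustrated with a minimal sketch. This is not the authors' implementation; all names (`CausalLink`, `TimedAction`, `build_explanation`) are hypothetical, and it only shows the two kinds of links the abstract distinguishes: deterministic constraints that must hold in every explanation, and probabilistic conditions, assembled into a chain ordered by the time of the causing action.

```python
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    """A directed link between two actions of the intelligent system.

    kind='constraint': deterministic causal relationship that must hold
                       in every explanation that uses it.
    kind='condition' : probabilistic relationship holding with probability p.
    """
    cause: str
    effect: str
    kind: str = "constraint"
    p: float = 1.0

@dataclass
class TimedAction:
    """An action of the system with the time step and key state parameters."""
    name: str
    t: int
    state: dict = field(default_factory=dict)

def build_explanation(actions, links, result):
    """Walk backwards from the result, collecting the chain of causal
    links ordered by the time of the causing action."""
    by_name = {a.name: a for a in actions}
    chain, frontier = [], {result}
    for link in sorted(links, key=lambda l: by_name[l.cause].t, reverse=True):
        if link.effect in frontier:
            chain.append(link)
            frontier.add(link.cause)
    return list(reversed(chain))  # earliest cause first

# Hypothetical recommender scenario: viewing an item probably leads to
# rating it (condition), and the rating deterministically triggers the
# recommendation (constraint).
actions = [
    TimedAction("view_item", 1, {"interest": 0.8}),
    TimedAction("rate_item", 2),
    TimedAction("recommend", 3),
]
links = [
    CausalLink("view_item", "rate_item", kind="condition", p=0.6),
    CausalLink("rate_item", "recommend", kind="constraint"),
]
chain = build_explanation(actions, links, "recommend")
for link in chain:
    print(f"{link.cause} -> {link.effect} ({link.kind}, p={link.p})")
```

Varying the degree of detail of an explanation then amounts to reporting either the full chain or only its constraint links.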

Article Details

Section
Intelligent information systems
Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics, Kharkiv

Doctor of Technical Sciences, Professor, Professor of Information Control Systems Department

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics, Kharkiv

Candidate of Technical Sciences, Associate Professor, Associate Professor of Software Engineering Department

References

Miller, T. (2019), “Explanation in artificial intelligence: Insights from the social sciences”, Artificial Intelligence, Vol. 267, pp. 1-38. DOI: https://doi.org/10.1016/j.artint.2018.07.007.

Chalyi, S., Leshchynskyi, V. and Leshchynska, I. (2019), “The concept of designing explanations in the recommender systems based on the white box”, Control, navigation and communication systems, Vol. 3 (55), pp. 156-160. DOI: https://doi.org/10.26906/SUNZ.2019.3.156.

Chalyi, S., Leshchynskyi, V. and Leshchynska, I. (2019), “Designing explanations in the recommender systems based on the principle of a black box”, Advanced information systems, Vol. 3, No. 2, pp. 47-51. DOI: https://doi.org/10.20998/2522-9052.2019.2.08.

Goodman, B. and Flaxman, S. (2017), “European Union regulations on algorithmic decision making and a ‘right to explanation’”, AI Magazine, Vol. 38 (3), pp. 50-57.

Tjoa, E. and Guan, C. (2019), “A survey on explainable artificial intelligence (XAI): Towards medical XAI”, Explainable Artificial Intelligence, pp. 1-22.

Castelvecchi, D. (2016), “Can we open the black box of AI?”, Nature, Vol. 538 (7623), pp. 20-23.

Arrieta, B., Rodriguez, N. and Del Ser, J. (2020), “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115. DOI: https://doi.org/10.1016/j.inffus.2019.12.012.

Lou, Y., Caruana, R. and Gehrke, J. (2012), “Intelligible models for classification and regression”, Proc. of the 18th ACM SIGKDD int. conf. on Knowledge discovery and data mining, pp. 150–158. DOI: https://doi.org/10.1145/2339530.2339556.

Halpern, J.Y. and Pearl, J. (2005), “Causes and explanations: A structural-model approach. Part I: Causes”, The British Journal for the Philosophy of Science, Vol. 56 (4), pp. 843-887.

Menzies, P. and Price, H. (1993), “Causation as a secondary quality”, The British Journal for the Philosophy of Science, Vol. 44 (2), pp. 187-203.

Fair, D. (1979), “Causation and the flow of energy”, Erkenntnis, Vol. 14, pp. 219–250. DOI: https://doi.org/10.1007/BF00174894.

Chalyi, S., Leshchynskyi, V. and Leshchynska, I. (2019), “Modeling explanations for the recommended list of items based on the temporal dimension of user choice”, Control, navigation and communication systems, Vol. 6 (58), pp. 97-101. DOI: https://doi.org/10.26906/SUNZ.2019.6.097.

Levykin, V. and Chala, O. (2018), “Development of a method for the probabilistic inference of sequences of a business process activities to support the business process management”, Eastern-European Journal of Enterprise Technologies, Vol. 5/3 (95), pp. 16-24. DOI: https://doi.org/10.15587/1729-4061.2018.142664.

Chalyi, S. and Pribylnova, I. (2019), “The method of constructing recommendations online on the temporal dynamics of user interests using multilayer graph”, EUREKA: Physics and Engineering, Vol. 3, pp. 13-19.