PARALLEL IMPLEMENTATION OF THE METHOD OF GRADIENT BOOSTING


Olena Tolstoluzka
Bogdan Parshencev
Olha Moroz

Abstract

Machine learning has been receiving increasing attention in all areas of information technology in recent years. On the one hand, this is due to the rapidly growing requirements placed on future specialists, and on the other, to the very rapid development of information technology and Internet communications. One of the main tasks in e-learning is the classification task, and the machine learning method known as gradient boosting is very well suited to it. Gradient boosting is a family of powerful machine learning algorithms that have achieved significant success in solving practical problems. These algorithms are very flexible and easily customized to the specific needs of an application; for example, they can be trained with respect to different loss functions. The idea of boosting is an iterative process of sequentially building individual models: each new model is trained using information about the errors made at the previous stage, and the resulting function is a linear combination of the whole ensemble of models that minimizes a given loss function. The mathematical apparatus of gradient boosting is well suited to solving the classification problem. However, as the volume of input data grows, reducing the time needed to construct the ensemble of decision trees becomes an important issue. Parallel computing systems and parallel programming technologies can produce positive results here, but they require the development of new methods for constructing gradient boosting. The article describes the main stages of a method for the parallel construction of gradient boosting for solving the classification problem in e-learning. Unlike existing methods, it takes into account the architectural features and the organization of parallel processes in computing systems with shared and distributed memory. The method also provides for evaluating the efficiency of building the ensemble of decision trees and of the parallel algorithms. The performance indicators obtained at each iteration of the method help to select a rational number of parallel processors in the computing system, which further reduces the completion time of gradient boosting. Simulation using MPI parallel programming technology and the Python programming language for a DM-MIMD system architecture confirms the reliability of the results. An example of the organization of the input data is given, and a Python program for constructing gradient boosting is presented. The developed visualization of the obtained performance estimates allows the user to select the required configuration of the computing system.
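To make the boosting idea concrete, below is a minimal, self-contained sketch of the sequential loop the abstract describes. It is not the program from the article: the squared-error loss, the decision-stump base learners, the learning rate of 0.1, and the synthetic two-feature dataset are all assumptions made for illustration.

    # A minimal sketch of gradient boosting with decision stumps.
    # Assumptions (not from the article): squared-error loss, stump base
    # learners, and the toy dataset below are illustrative choices only.
    import numpy as np

    def fit_stump(X, r):
        """Fit a one-split regression stump to the residuals r by exhaustive search."""
        best = None  # (sse, feature, threshold, left_value, right_value)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j])[:-1]:  # skip the max value: no split there
                left = X[:, j] <= t
                lv, rv = r[left].mean(), r[~left].mean()
                sse = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, t, lv, rv)
        return best[1:]

    def stump_predict(stump, X):
        j, t, lv, rv = stump
        return np.where(X[:, j] <= t, lv, rv)

    def gradient_boost(X, y, n_rounds=50, lr=0.1):
        """Sequentially add stumps; each new stump is fitted to the residuals
        y - F, i.e. to the errors made by the ensemble at the previous stage."""
        F = np.full(len(y), y.mean())  # initial constant model
        ensemble = []
        for _ in range(n_rounds):
            residuals = y - F
            stump = fit_stump(X, residuals)
            F += lr * stump_predict(stump, X)  # linear-combination update
            ensemble.append(stump)
        return y.mean(), ensemble

    def predict(f0, ensemble, X, lr=0.1):
        return f0 + lr * sum(stump_predict(s, X) for s in ensemble)

    # Toy binary classification: labels in {0, 1}, score thresholded at 0.5.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    f0, ens = gradient_boost(X, y)
    acc = ((predict(f0, ens, X) > 0.5) == y).mean()
    print(f"training accuracy: {acc:.2f}")

The learning rate lr is the standard shrinkage parameter: smaller values slow the fit down but usually improve generalization, at the cost of more boosting rounds.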
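The article targets DM-MIMD systems programmed with MPI, but its exact decomposition is not reproduced here. The sketch below (using the mpi4py bindings) shows one plausible scheme: the search for the best split inside a single boosting iteration is parallelized over features, with the dataset replicated on every rank. The round-robin feature ownership and all names here are illustrative assumptions, not the authors' method.

    # A hedged sketch of distributing the split search of one boosting
    # iteration over MPI processes. The feature-parallel decomposition is
    # an assumption made for illustration, not the article's exact method.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def local_best_split(X, r, features):
        """Best (sse, feature, threshold) over the features owned by this rank."""
        best = (np.inf, -1, 0.0)
        for j in features:
            for t in np.unique(X[:, j])[:-1]:  # skip the max value: no split there
                left = X[:, j] <= t
                sse = ((r[left] - r[left].mean()) ** 2).sum() \
                    + ((r[~left] - r[~left].mean()) ** 2).sum()
                if sse < best[0]:
                    best = (sse, j, t)
        return best

    # The dataset is replicated on every rank (same seed gives the same
    # data everywhere); only the candidate features are divided up.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 16))
    y = (X[:, 0] - X[:, 3] > 0).astype(float)
    residuals = y - y.mean()  # residuals of the initial constant model

    t0 = MPI.Wtime()
    my_features = range(rank, X.shape[1], size)  # round-robin feature ownership
    candidate = local_best_split(X, residuals, my_features)
    best = min(comm.allgather(candidate))        # smallest SSE wins globally
    elapsed = MPI.Wtime() - t0                   # per-iteration timing indicator

    if rank == 0:
        print(f"best split: feature {best[1]} at threshold {best[2]:.3f} "
              f"({elapsed:.4f} s on {size} process(es))")

Running the script with, for example, mpiexec -n 2 python split_search.py (a hypothetical file name) and then with -n 4 and -n 8, and comparing the MPI.Wtime() measurements, is one simple way to pick a rational number of processors, in the spirit of the per-iteration performance indicators described in the abstract.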

Article Details

How to Cite
Tolstoluzka, O., Parshencev, B., & Moroz, O. (2018). PARALLEL IMPLEMENTATION OF THE METHOD OF GRADIENT BOOSTING. Advanced Information Systems, 2(3), 19–23. https://doi.org/10.20998/2522-9052.2018.3.03
Section
Identification problems in information systems
Author Biographies

Olena Tolstoluzka, V. N. Karazin Kharkiv National University, Kharkiv

Doctor of Technical Sciences, Senior Research Fellow, Professor of Theoretical and Applied Systems Engineering Department

Bogdan Parshencev, V. N. Karazin Kharkiv National University, Kharkiv

Postgraduate student of Theoretical and Applied Systems Engineering Department

Olha Moroz, V. N. Karazin Kharkiv National University, Kharkiv

Senior Lecturer of Theoretical and Applied Systems Engineering Department

References

Tolstoluzka, O. and Parshencev, B. (2018), “The solution of the classification problem in e-learning based on the method parallel construction of decision trees”, Advanced Information Systems, Vol. 2, No. 3, pp. 5–9.

Hastie, T., Tibshirani, R. and Friedman, J. (2001), “Linear Methods for Classification”, The Elements of Statistical Learning, ser. Springer series in statistics, Springer, New York, pp. 101–137.

Vapnik, V. (1998), The Nature of Statistical Learning Theory, John Wiley and Sons, N.Y., 300 p.

Gergel, V.P. (2007), Theory and Practice of Parallel Computing, BINOM, Moscow, 71 p.

Voevodin, V.V. and Voevodin, Vl.V. (2002), Parallel Computations, BHV-Petersburg, St. Petersburg, 608 p.

Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J. (1984), Classification and Regression Trees, Wadsworth, Belmont, California, 332 p.

Gehrke, J., Ganti, V., Ramakrishnan, R. and Loh, W.-Y. (1999), “BOAT — optimistic decision tree construction”, ACM SIGMOD International Conference on Management of Data, pp. 169–180.

Polyakov, G.A., Shmatkov, S.I., Tolstoluzhskaya, E.G. and Tolstoluzhsky, D.A. (2012), Synthesis and analysis of parallel processes in adaptive time-parametric computing systems, V. N. Karazin Kharkiv National University, Kharkiv, рp. 434-575.