
name
Orlov Alexander Ivanovich
Academic degree
Academic rank
professor
Honorary rank
—
Organization, job position
• Bauman Moscow State Technical University
Research interests
statistical methods, organizational-economic modeling. Developed a new area of applied statistics: the statistics of objects of non-numerical nature
Website URL
—
Current rating (overall rating of articles)
0
TOP5 co-authors
Articles count: 152
-
ADDITIVE-MULTIPLICATIVE MODEL FOR RISK ESTIMATION IN THE PRODUCTION OF ROCKET AND SPACE TECHNOLOGY
01.00.00 Physical-mathematical sciences
Description
For the first time we have developed a general additive-multiplicative model of risk estimation (for estimating the probabilities of risk events). In this two-level system, the risk estimates are combined additively at the lower level and multiplicatively at the top level. The additive-multiplicative model was used to estimate risk for (1) the implementation of innovative projects at universities (with external partners), (2) the production of new innovative products, and (3) projects for the creation of rocket and space equipment
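The two-level scheme in the abstract can be illustrated with a minimal sketch: partial risk estimates are combined additively (a weighted sum) within each group, and the group-level estimates are combined multiplicatively at the top. All group names, weights and scores below are illustrative assumptions, not data from the model itself.

```python
# Hypothetical sketch of a two-level additive-multiplicative risk model.
# Lower level: partial risk estimates within each group combine additively
# (a weighted sum); top level: group estimates combine multiplicatively.

def additive_level(scores, weights):
    """Weighted additive aggregation of partial risk estimates in one group."""
    return sum(w * s for w, s in zip(weights, scores))

def multiplicative_level(group_risks):
    """Top level: probability that at least one group's risk event occurs,
    assuming independence, hence the multiplicative combination."""
    p_no_event = 1.0
    for r in group_risks:
        p_no_event *= (1.0 - r)
    return 1.0 - p_no_event

# Illustrative groups: (partial risk scores, weights summing to 1).
groups = {
    "technical": ([0.10, 0.05], [0.6, 0.4]),
    "organizational": ([0.20, 0.10, 0.05], [0.5, 0.3, 0.2]),
}
group_risks = [additive_level(s, w) for s, w in groups.values()]
total_risk = multiplicative_level(group_risks)
```

With these illustrative numbers the group risks are 0.08 and 0.14, and the overall risk estimate is about 0.21.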
-
01.00.00 Physical-mathematical sciences
Description
In various applications it is necessary to analyze expert orderings, i.e., clustered rankings of the objects of examination. Such areas include technical studies, ecology, management, economics, sociology, forecasting, etc. The objects may be samples of products, technologies, mathematical models, projects, job applicants and others. Clustered rankings can be obtained either with the help of experts or in an objective way, for example, by comparing mathematical models with experimental data using a particular quality criterion. The method described in this article was developed in connection with the problems of chemical safety and environmental security of the biosphere. We propose a new method for constructing a clustered ranking that can be regarded as average (in the sense discussed in this work) for all clustered rankings under consideration. The contradictions between the individual initial rankings are then confined within the clusters of the average (coordinated) ranking. As a result, the ordered clusters reflect the general opinion of the experts; more precisely, they reflect what is contained simultaneously in all the original rankings. The newly built clustered ranking is often called the matching (coordinated) ranking with respect to the original clustered rankings. The clusters enclose the objects about which some of the initial rankings are contradictory; new studies are necessary for these objects. Such studies can be formal mathematical ones (calculation of the Kemeny median, orderings by means of the averages and medians of ranks, etc.), or they may require the involvement of new information from the relevant application area, possibly including additional scientific research. In this article we introduce the necessary concepts, formulate in general terms a new algorithm for constructing the coordinated ranking for a set of clustered rankings, and discuss its properties
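One simplified reading of the construction described above can be sketched as follows: objects that any two initial rankings order oppositely are merged into one cluster, and the resulting clusters are ordered by mean rank. This is only an illustration under the assumption of strict (uncluttered) input rankings, not the exact algorithm of the article.

```python
# Sketch: build a coordinated (matching) clustered ranking from several
# strict rankings. Objects ordered contradictorily by different rankings
# are merged into one cluster (union-find); clusters are ordered by mean rank.
from itertools import combinations

def coordinated_ranking(rankings):
    objects = rankings[0]
    pos = [{x: r.index(x) for x in r} for r in rankings]  # object -> rank
    parent = {x: x for x in objects}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge every pair of objects on which the rankings disagree.
    for a, b in combinations(objects, 2):
        signs = {(p[a] < p[b]) for p in pos}
        if len(signs) > 1:
            parent[find(a)] = find(b)

    clusters = {}
    for x in objects:
        clusters.setdefault(find(x), []).append(x)
    mean_rank = lambda c: sum(p[x] for p in pos for x in c) / (len(pos) * len(c))
    return sorted(clusters.values(), key=mean_rank)

# Two rankings that disagree only on "b" vs "c":
result = coordinated_ranking([["a", "b", "c", "d"], ["a", "c", "b", "d"]])
```

Here the contradictory objects "b" and "c" end up in one cluster, while "a" and "d", on which both rankings agree, stay as singleton clusters.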
-
01.00.00 Physical-mathematical sciences
Description
We consider an approach to the transition from a continuous to a discrete scale defined by means of a quantization step (i.e., a grouping interval). The applied purpose is the selection of the number of gradations in sociological questionnaires. In accordance with the methodology of the general stability theory, we propose to choose the step so that the errors generated by quantization are of the same order as the errors inherent in the answers of respondents. For a finite length of the interval over which the measured value changes, this quantization step uniquely determines the number of gradations. It turns out that for many questions it is enough to offer 3-6 answer gradations (prompts). On the basis of a probabilistic model we have proved three theorems on quantization, which allowed us to develop recommendations on the choice of the number of gradations in sociological questionnaires. The idea of "quantization" has applications beyond sociology, and not only for selecting the number of gradations. There are two very interesting applications of this idea in inventory management theory: in the two-level model and in the classical Wilson model taking deviations from it into account (which shows that "quantization" can be used as a way to improve stability). For the two-level inventory management model we have proved three theorems. We have abandoned the assumption of Poisson demand, which is rarely satisfied in practice, and we give fairly simple general formulas for finding the optimal values of the control parameters, simultaneously correcting the mistakes of our predecessors. Once again we see the interpenetration of statistical methods that arose from the analysis of data in a variety of subject areas, in this case sociology and logistics. This is another confirmation that statistical methods form a single scientific and practical area that should not be divided by application domain
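The recommendation above can be turned into a back-of-the-envelope rule: set the quantization step equal to the typical respondent error, so that the number of gradations is the scale length divided by that step. The scale bounds and error values below are illustrative assumptions, not figures from the article.

```python
# Sketch: choose the number of answer gradations so that the quantization
# step matches the typical error of respondents' answers (then rounding
# error and answer error are of the same order). Illustrative values only.
import math

def n_gradations(scale_min, scale_max, respondent_error):
    """Number of gradations when the step equals the respondent error."""
    step = respondent_error
    return max(2, math.ceil((scale_max - scale_min) / step))

# e.g. answers on [0, 1] with a typical error of about 0.25 -> 4 gradations,
# consistent with the 3-6 gradations mentioned in the abstract
n = n_gradations(0.0, 1.0, 0.25)
```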
-
ASYMPTOTICS OF ESTIMATES OF PROBABILITY DISTRIBUTION DENSITY
01.00.00 Physical-mathematical sciences
Description
Nonparametric estimates of the probability distribution density in spaces of arbitrary nature are one of the main tools of non-numerical statistics. We consider their particular cases: kernel density estimates in spaces of arbitrary nature, histogram estimates and Fix-Hodges-type estimates. The purpose of this article is to complete a series of papers devoted to the mathematical study of the asymptotic properties of various types of nonparametric estimates of the probability distribution density in spaces of general nature. It thus provides a mathematical foundation for the application of such estimates in non-numerical statistics. We begin by considering the mean square error of the kernel density estimate and, in order to maximize the order of its decrease, the choice of the kernel function and of the sequence of smoothing (bandwidth) parameters. The basic concepts are the circular distribution function and the circular density. The order of convergence in the general case is the same as in estimating the density of a numerical random variable, but the main conditions are imposed not on the density of the random variable itself but on the circular density. Next, we consider other types of nonparametric density estimates: histogram estimates and Fix-Hodges-type estimates. We then study nonparametric regression estimates and their application to discriminant analysis problems in a space of general nature
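The key point of a kernel density estimate in a space of arbitrary nature is that only a distance function on that space is needed. A minimal sketch, using binary strings with Hamming distance as the non-numeric space; the kernel, bandwidth and sample are illustrative assumptions, and the normalizing constant is omitted (the estimate is suitable for relative comparison only).

```python
# Sketch: kernel density estimate in a space of arbitrary nature, using
# only a distance d(x, y). Here the space is equal-length binary strings
# with Hamming distance; Gaussian kernel; normalization omitted.
import math

def hamming(x, y):
    """Distance between two equal-length strings (a non-numeric space)."""
    return sum(a != b for a, b in zip(x, y))

def kernel_density(point, sample, h, dist=hamming):
    """Unnormalized kernel estimate: mean Gaussian kernel of d(x, xi)/h."""
    return sum(math.exp(-(dist(point, xi) / h) ** 2 / 2) for xi in sample) / len(sample)

sample = ["0011", "0010", "0111", "1011"]
f_near = kernel_density("0011", sample, h=1.0)  # point close to the sample
f_far = kernel_density("1100", sample, h=1.0)   # point far from the sample
```

As expected, the estimate is much larger near the bulk of the sample than far from it.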
-
ASYMPTOTIC METHODS OF STATISTICAL CONTROL
01.00.00 Physical-mathematical sciences
Description
Statistical control is sampling control based on probability theory and mathematical statistics. The article presents the development of the methods of statistical control in our country. It discusses the basics of the theory of statistical control: the plans of statistical control and their operating characteristics, the risks of the supplier and the consumer, the acceptance level of defectiveness and the rejection level of defectiveness. We have obtained an asymptotic method for the synthesis of control plans based on the limit average output level of defectiveness. We have also developed the asymptotic theory of single sampling plans and formulated some unsolved mathematical problems of the theory of statistical control
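The operating characteristic mentioned above is, for a single sampling plan (n, c), simply the binomial probability of acceptance as a function of the defect fraction p; the supplier's and consumer's risks are read off this curve. The plan parameters and quality levels below are illustrative assumptions.

```python
# Sketch: operating characteristic of a single sampling plan (n, c).
# A lot is accepted if at most c defectives appear in a sample of n items;
# under defect fraction p the acceptance probability is binomial.
from math import comb

def accept_probability(n, c, p):
    """P(accept) = sum_{d=0}^{c} C(n, d) p^d (1-p)^(n-d)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Illustrative plan n=50, c=2:
oc_good = accept_probability(50, 2, 0.01)  # good lot: high acceptance prob.
oc_bad = accept_probability(50, 2, 0.10)   # bad lot: low acceptance prob.
```

The supplier's risk is 1 minus the acceptance probability at the acceptance level of defectiveness; the consumer's risk is the acceptance probability at the rejection level.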
-
THE ASYMPTOTIC INFORMATION CRITERION OF NOISE QUALITY
Description
Intuitively everyone understands that noise is a signal which contains no information, or from which it is in practice impossible to extract information. More precisely, it is clear that the closer a certain sequence of elements (numbers) is to noise, the less information the values of some of its elements contain about the values of others. It is all the more strange that no one has proposed not only a method, but even the idea of measuring the amount of information that some fragments of a signal contain about other fragments, and of using it as a criterion for assessing how close the signal is to noise. The authors propose an asymptotic information criterion of noise quality, together with a method, technology and methodology for its application in practice. As the method for applying the asymptotic information criterion of noise quality in practice we offer automated system-cognitive analysis (ASC-analysis), and as its technology and software toolkit we offer the universal cognitive analytical system "Eidos". As the methodology, we propose a technique for creating applications in the system and for using them to solve problems of identification, prediction, decision making and study of the subject area by examining its model. We present an illustrative numerical example that conveys the ideas presented and demonstrates the efficiency of the proposed asymptotic information criterion of noise quality and of the method, technology and methodology of its application in practice
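The underlying idea, that fragments of noise carry no information about each other, can be illustrated with a plug-in estimate of the mutual information between adjacent symbols of a sequence: near zero for noise, large for a structured signal. This is only an illustration of the idea, not the authors' ASC-analysis / "Eidos" implementation, and the test sequences are arbitrary.

```python
# Sketch: mutual information (bits) between adjacent symbols of a sequence
# as a rough noise indicator: near 0 for noise, large for structured signals.
import math
import random
from collections import Counter

def mutual_information(seq):
    """Plug-in MI between a symbol and its successor, from pair counts."""
    pairs = list(zip(seq, seq[1:]))
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    n = len(pairs)
    return sum(c / n * math.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

structured = "01" * 200                                  # deterministic alternation
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(400)) # pseudo-random noise
mi_structured = mutual_information(structured)           # close to 1 bit
mi_noisy = mutual_information(noisy)                     # close to 0
```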
-
BASIC RESULTS OF THE MATHEMATICAL THEORY OF CLASSIFICATION
01.00.00 Physical-mathematical sciences
Description
The mathematical theory of classification contains a large number of approaches, models, methods and algorithms; it is very diverse. We distinguish three basic results in it: the best method of diagnostics (discriminant analysis), an adequate indicator of the quality of a discriminant analysis algorithm, and the statement that iterative cluster analysis algorithms stop after a finite number of steps. Namely, on the basis of the Neyman-Pearson lemma we have shown that the optimal method of diagnostics exists and can be expressed through the probability densities corresponding to the classes. If the densities are unknown, one should use non-parametric estimates based on training samples. The quality of a diagnostic algorithm is often measured by "the probability (or share) of correct classification (diagnosis)": the larger this indicator, the better the algorithm. It is shown that the widespread use of this indicator is unjustified, and we have offered another one, "predictive power", obtained by a transformation within the model of linear discriminant analysis. Stopping after a finite number of steps of iterative cluster analysis algorithms is demonstrated using the k-means method as an example. In our opinion, these results are fundamental to the theory of classification, and every specialist developing and applying it should be familiar with them
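The optimal diagnostic rule referred to above compares the ratio of class probability densities to a threshold, as the Neyman-Pearson lemma prescribes. A minimal sketch for two one-dimensional Gaussian classes; the class densities and threshold are illustrative assumptions.

```python
# Sketch: optimal diagnostics via the likelihood ratio (Neyman-Pearson).
# Two illustrative Gaussian classes: class 0 ~ N(0, 1), class 1 ~ N(2, 1).
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, threshold=1.0):
    """Assign class 1 if the density ratio f1(x)/f0(x) exceeds the threshold."""
    ratio = normal_pdf(x, 2.0, 1.0) / normal_pdf(x, 0.0, 1.0)
    return 1 if ratio > threshold else 0

labels = [classify(x) for x in (-1.0, 0.5, 1.5, 3.0)]
```

With equal variances and threshold 1 the rule reduces to a midpoint cut at x = 1, so the four test points are assigned classes 0, 0, 1, 1. When the densities are unknown, they would be replaced by non-parametric estimates from training samples, as the abstract notes.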
-
PROBABILISTIC-STATISTICAL METHODS IN KOLMOGOROV’S RESEARCHES
01.00.00 Physical-mathematical sciences
Description
From a modern point of view we discuss Kolmogorov's researches on the axiomatic approach to probability theory, the goodness-of-fit test of the empirical distribution against the theoretical one, the properties of the median as an estimate of the distribution center, the effect of "swelling" of the correlation coefficient, the theory of averages, the statistical theory of crystallization of metals, the least squares method, the properties of sums of a random number of random variables, statistical control, unbiased estimates, the axiomatic derivation of the logarithmic normal distribution for crushing processes, and methods of detecting differences in weather-modification-type experiments
-
PROBABILISTIC-STATISTICAL METHODS IN GNEDENKO’S RESEARCHES
01.00.00 Physical-mathematical sciences
Description
We analyze the probabilistic-statistical methods in the researches of Boris Vladimirovich Gnedenko, academician of the Ukrainian Academy of Sciences, which remain very important for the XXI century. We discuss the limit theorems of probability theory, mathematical statistics, reliability theory, statistical methods of quality control and queuing theory. We also give some information about the main stages of B.V. Gnedenko's scientific career and his views on the history of mathematics and on teaching
-
PROBABILITY-STATISTICAL MODELS OF CORRELATION AND REGRESSION
08.00.13 Mathematical and instrumental methods of Economics
Description
The correlation and determination coefficients are widely used in statistical data analysis. According to measurement theory, Pearson's linear paired correlation coefficient is applicable to variables measured on an interval scale; it cannot be used in the analysis of ordinal data. The nonparametric Spearman and Kendall rank coefficients estimate the relationship between ordinal variables. The critical value when testing the significance of the difference of the correlation coefficient from 0 depends on the sample size, so using the Chaddock scale is incorrect. In a passive experiment, correlation coefficients can reasonably be used for prediction but not for control; to obtain probabilistic-statistical models intended for control, an active experiment is required. The effect of outliers on the Pearson correlation coefficient is very large. With an increase in the number of analyzed sets of predictors, the maximum of the corresponding correlation coefficients (indicators of approximation quality) noticeably increases: the effect of "inflation" of the correlation coefficient. Four main regression analysis models are considered. Models of the least squares method with a deterministic independent variable are distinguished; the distribution of deviations is arbitrary, but to obtain the limit distributions of estimates of the parameters and of the regression dependences we assume that the conditions of the central limit theorem are satisfied. The second type of model is based on a sample of random vectors; here the dependence is nonparametric and the distribution of the two-dimensional vector is arbitrary. The variance of the independent variable, as well as the coefficient of determination as a quality criterion of the model, can be discussed only in the model based on a sample of random vectors. Time series smoothing is discussed. Methods of restoring dependencies in spaces of general nature are considered. It is shown that the limit distribution of the natural estimate of the dimensionality of the model is geometric, and that the construction of an informative subset of features encounters the effect of "inflation" of the correlation coefficient. Various approaches to the regression analysis of interval data are discussed. Analysis of the variety of regression analysis models leads to the conclusion that there is no single "standard model"
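Two of the points above, the use of Spearman's rank coefficient for ordinal data and the strong effect of a single outlier on Pearson's coefficient, can be illustrated with a short pure-Python sketch. The data sets are illustrative assumptions; textbook formulas are used instead of a statistics library.

```python
# Sketch: Pearson vs Spearman correlation, and the effect of one outlier.
import math

def pearson(x, y):
    """Pearson's linear paired correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rank correlation: Pearson applied to ranks (no ties)."""
    rank = lambda v: [sorted(v).index(a) for a in v]
    return pearson(rank(x), rank(y))

x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 6, 5]           # roughly increasing together with x
y_outlier = [2, 1, 4, 3, 6, -40] # same data with one gross outlier

r_clean = pearson(x, y)          # strongly positive
r_outlier = pearson(x, y_outlier)  # a single outlier flips the sign
rho = spearman(x, y)             # rank-based, suitable for ordinal data
```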