name
Orlov Alexander Ivanovich
Scholastic degree
Academic rank
professor
Honorary rank
—
Organization, job position
• Bauman Moscow State Technical University
Research interests
Statistical methods, organizational-economic modeling. Developed a new area of applied statistics: the statistics of objects of non-numerical nature
Web site url
—
Current rating (overall rating of articles)
0
TOP5 co-authors
Articles count: 155
-
THE PROBLEM OF RESEARCH OF FINAL RANKING FOR GROUP OF EXPERTS BY MEANS OF KEMENY MEDIAN
01.00.00 Physical-mathematical sciences
Description: In various applications it is necessary to analyze several expert orderings, i.e., clustered rankings of the objects under examination. Such applications arise in technical studies, ecology, management, economics, sociology, forecasting, etc. The objects may be product samples, technologies, mathematical models, projects, job applicants, and others. When constructing the final opinion of a commission of experts, it is important to find a clustered ranking that averages the experts' responses. This article describes a number of methods for averaging clustered rankings, among them the calculation of the Kemeny median, based on the Kemeny distance. The article focuses on the computational side of the problem of finding the final ranking among expert opinions by calculating the Kemeny median. There are currently no exact algorithms, other than exhaustive search, for finding the set of all Kemeny medians for a given collection of permutations (rankings without ties); however, various approaches exist for finding some or all of the medians, and they are analyzed in this study. Zhikharev's heuristic algorithms serve as a good tool for studying the set of all Kemeny medians: they identify relationships in the mutual location of the medians relative to the set of aggregated expert opinions (a collection of permutations representing the experts' answers). Litvak offers one exact and one heuristic approach to calculating the median over all possible sets of solutions. The article introduces the necessary concepts and analyzes the advantages of the Kemeny median over other possible averagings of expert orderings. It identifies the comparative strengths and weaknesses of the computational methods examined
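As an illustrative sketch (not code from the article): the Kemeny distance counts object pairs ordered differently by two rankings, and the Kemeny median minimizes the total distance to the expert rankings. The function names and the three-object example below are assumptions for demonstration; only exhaustive search is shown, matching the abstract's remark that no other exact algorithm is known.

```python
from itertools import combinations, permutations

def kemeny_distance(r1, r2):
    """Number of object pairs ordered differently by the two rankings."""
    pos1 = {obj: i for i, obj in enumerate(r1)}
    pos2 = {obj: i for i, obj in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny_medians(expert_rankings):
    """Exhaustive search: all rankings minimizing total Kemeny distance."""
    objects = expert_rankings[0]
    best, medians = None, []
    for cand in permutations(objects):
        total = sum(kemeny_distance(cand, r) for r in expert_rankings)
        if best is None or total < best:
            best, medians = total, [cand]
        elif total == best:
            medians.append(cand)
    return medians

# Three hypothetical expert rankings of objects A, B, C
experts = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C")]
print(kemeny_medians(experts))  # [('A', 'B', 'C')]
```

The exhaustive search is only feasible for a handful of objects, which is exactly why the heuristic approaches discussed in the article matter.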
-
TWO-SAMPLE WILCOXON TEST - ANALYSIS OF TWO MYTHS
01.00.00 Physical-mathematical sciences
Description: It was established that the two-sample Wilcoxon test (Mann-Whitney test) was designed to test the hypothesis H0: P(X < Y) = 1/2, i.e., that an observation from the first sample is less than an observation from the second with probability 1/2
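A minimal sketch of the statistic behind this hypothesis: the Mann-Whitney U statistic divided by the product of the sample sizes estimates P(X < Y). The function name and the sample data are illustrative assumptions, not material from the article.

```python
def mann_whitney_u(x, y):
    """U statistic: the number of pairs (x_i, y_j) with x_i < y_j,
    counting ties as 0.5; U / (len(x) * len(y)) estimates P(X < Y)."""
    return sum(
        1.0 if xi < yj else 0.5 if xi == yj else 0.0
        for xi in x
        for yj in y
    )

# Hypothetical samples for illustration
x = [1.1, 2.3, 3.0, 4.2]
y = [2.0, 3.5, 4.1, 5.8]
u = mann_whitney_u(x, y)
p_hat = u / (len(x) * len(y))  # sample estimate of P(X < Y)
print(u, p_hat)  # 11.0 0.6875
```

Under H0 this estimate fluctuates around 1/2, which is precisely what the test checks rather than full equality of the two distributions.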
-
DETECTION OF DEVIATIONS IN CONTROLLING SYSTEM (FOR EXAMPLE, MONITORING THE LEVEL OF FLIGHT SAFETY)
Description: Control charts are proposed as a tool for detecting deviations in a controlling system. The proposal is considered for monitoring flight safety. The possibility of using, in airline practice, a new indicator of the flight safety level and a new method of monitoring it is discussed. The ERC of the ARMS group is proposed as the indicator, and the method of cumulative sums as the method of monthly and weekly monitoring
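The cumulative-sum method mentioned above can be sketched as a one-sided CUSUM recursion. The parameter values, readings, and threshold below are assumptions for illustration, not figures from the article.

```python
def cusum_upper(values, target, k):
    """One-sided upper CUSUM path: S_t = max(0, S_{t-1} + x_t - target - k).
    A deviation is signalled when S_t exceeds a decision threshold h."""
    s, path = 0.0, []
    for x in values:
        s = max(0.0, s + x - target - k)
        path.append(s)
    return path

# Hypothetical weekly safety-indicator readings (target level 0, slack k = 0.5)
readings = [0.2, 0.1, 1.5, 1.2, 1.8]
path = cusum_upper(readings, target=0.0, k=0.5)
h = 2.0  # assumed decision threshold
alarms = [i for i, s in enumerate(path) if s > h]
print(path, alarms)
```

Small fluctuations below the slack value are absorbed, while a sustained upward shift accumulates until the path crosses the threshold.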
-
DO WE ALWAYS NEED A SUPPLIER’S QUALITY CONTROL?
Description: The higher the achieved quality level, the larger the required control sample size: this is the paradox of the classical theory of statistical control. A possible way out is to move to a technical policy based on economic indicators. Shifting control to the consumer may be economically profitable. We consider two variants of such a technical policy: increasing the lot size, and replacing defective product units at the consumer's site
-
MOVING FORWARD TO ARISTOTLE: WE MUST BE FREE FROM THE PERVERSIONS OF ECONOMIC THEORY
Description: The founder of economic theory is Aristotle. The so-called "market economy" is a perversion of Aristotle's views, and we have to eliminate these distortions. What can replace the "market economy"? We are developing a new organizational-economic theory, the solidary information economy, based on Aristotle's views. The name of this theory has changed over time: initially we used the term "non-formal information economy of the future," and then began to use the term "solidary information economy"; in connection with Biocosmology and neo-Aristotelism, the more adequate term "functionalist-organic information economy" is preferred. This article describes the main provisions of the solidary information economy, intended to replace the market economy as a management tool. We discuss the main problems whose solution is the subject of research connected with this basic organizational-economic theory. We discuss the positions of Aristotle on which economic theory, and in particular the solidary information economy, is based. We argue that the market economy remained in the XIX century and that the mainstream of modern economic science is the justification of the insolvency of the market economy and of the need to move to a planned system of economic management. We examine the impact of ICT on economic activity and develop approaches to decision-making in the solidary information economy. On the basis of modern decision theory (especially expert procedures) and information-communication technologies, people can get rid of chrematistics and will understand the term "economy" in Aristotle's sense
-
RENEWAL DEPENDENCE METHOD OF LEAST SQUARES BASED NONPARAMETRIC MODEL WITH PERIODIC COMPONENT
01.00.00 Physical-mathematical sciences
Description: We consider the nonparametric problem of restoring a dependence described by the sum of a linear trend and a periodic function with a known period. We obtain the asymptotic distributions of the parameter estimates and of the trend component, and give methods for estimating the periodic component and constructing an interval forecast. The conditions of applicability are justified in a model of the observation points that is natural for applications. In particular, we prove that the estimate of the coefficient of the linear term is asymptotically unbiased
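The model class discussed above (linear trend plus a periodic component with known period) can be fitted by ordinary least squares when the periodic part is represented by a sine and cosine at the known frequency. This is a hedged sketch; the function name, the monthly period, and the synthetic data are assumptions.

```python
import numpy as np

def fit_trend_periodic(t, y, period):
    """Least squares fit of y ~ a + b*t + c*sin(2*pi*t/T) + d*cos(2*pi*t/T)
    for a known period T."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic monthly data: trend 2 + 0.3*t plus an annual (period-12) cycle
t = np.arange(48, dtype=float)
y = 2.0 + 0.3 * t + 1.5 * np.sin(2.0 * np.pi * t / 12.0)
a, b, c, d = fit_trend_periodic(t, y, period=12.0)
print(a, b, c, d)  # recovers approximately 2.0, 0.3, 1.5, 0.0
```

With noise added to `y`, the same fit yields the estimates whose asymptotic distribution the article studies.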
-
INTERCONNECTION LIMIT THEOREMS AND MONTE-CARLO METHOD
01.00.00 Physical-mathematical sciences
Description: The purpose of mathematical statistics is the development of data-analysis methods intended to solve applied problems. Over time, approaches to the development of such methods have changed. A hundred years ago it was assumed that the data followed distributions of a certain type, for example normal distributions, and statistical theory was developed on that assumption. At the next stage, limit theorems came to the fore in theoretical studies. By a "small sample" we mean a sample to which conclusions based on limit theorems cannot be applied. In each statistical problem there is a need to divide the finite sample sizes into two classes: those for which the limit theorems can be applied, and those for which they cannot because of the risk of incorrect conclusions. To solve this problem we often use the Monte Carlo method. More complex problems arise when studying the effect of various deviations from the original assumptions on the properties of statistical data-analysis procedures; to study such effects, we also often use the Monte Carlo method. The basic (and not generally solved) problem in the study of the stability of conclusions in the presence of deviations from parametric families of distributions is the choice of the distributions to use in the modeling. We consider some examples of the application of the Monte Carlo method relating to the activities of our research team, and formulate basic unsolved problems
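As an illustration of using Monte Carlo to decide whether a limit theorem is usable at a given sample size (an assumed setup, not the team's own experiments): estimate the upper-tail probability of the standardized sample mean of exponential data and compare it with the normal limiting value 0.05.

```python
import random

def tail_prob_of_mean(n, trials=20000, z=1.645, seed=1):
    """Monte Carlo estimate of P(sqrt(n) * (mean - 1) > z) for the mean of
    n Exp(1) observations (mean 1, variance 1). Under the central limit
    theorem this probability tends to 0.05 as n grows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        m = sum(rng.expovariate(1.0) for _ in range(n)) / n
        if (m - 1.0) * n ** 0.5 > z:
            hits += 1
    return hits / trials

# For n = 5 the skewness of the exponential still distorts the tail;
# for n = 100 the estimate is already close to the normal value 0.05
print(tail_prob_of_mean(5), tail_prob_of_mean(100))
```

A run like this is exactly how a sample size is assigned to one of the two classes mentioned above: if the simulated tail probability is materially far from 0.05, the limit theorem should not yet be trusted at that n.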
-
PROBABILITY MODELS FOR OBTAINING NON-NUMERICAL DATA
01.00.00 Physical-mathematical sciences
Description: The statistics of objects of non-numerical nature (statistics of non-numerical objects, non-numerical data statistics, non-numeric statistics) is the area of mathematical statistics devoted to methods of analyzing non-numerical data. The basis for applying the results of mathematical statistics is probabilistic-statistical models of real phenomena and processes, the most important (and often the only) of which are models for obtaining data. The simplest example of such a model is the model of a sample as a set of independent identically distributed random variables. In this article we consider the basic probabilistic models for obtaining non-numerical data, namely models of dichotomous data, results of paired comparisons, binary relations, ranks, and objects of a general nature. We discuss various versions of these probabilistic models and their practical use. For example, the basic probabilistic model of dichotomous data is the Bernoulli vector (Lucian), i.e., a finite sequence of independent Bernoulli trials for which the probabilities of success may differ.
The mathematical tools for solving various statistical problems associated with Bernoulli vectors are useful for the analysis of random tolerances; of random sets with independent elements; in processing the results of independent pairwise comparisons; in statistical methods for analyzing the accuracy and stability of technological processes; in the analysis and design of statistical quality-control plans (for dichotomous characteristics); in processing marketing and sociological questionnaires (with closed questions of the "yes"/"no" type); and in processing socio-psychological and medical data, in particular responses to psychological tests such as MMPI (used, among other things, in human-resource management problems) and the analysis of topographic maps (used for the analysis and prediction of areas affected by technological disasters, the spread of corrosion, the propagation of environmentally harmful pollutants, and various diseases, including myocardial infarction, among other situations).
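A minimal simulation sketch of the Bernoulli vector (Lucian) model described above, with assumed success probabilities: each coordinate is an independent Bernoulli trial with its own probability, and the empirical frequencies recover those probabilities.

```python
import random

def sample_lucian(probs, rng):
    """One realisation of a Bernoulli vector (Lucian): independent trials,
    where trial i succeeds with its own probability probs[i]."""
    return [1 if rng.random() < p else 0 for p in probs]

# Assumed success probabilities for a three-trial Lucian
probs = [0.9, 0.5, 0.2]
rng = random.Random(0)
draws = [sample_lucian(probs, rng) for _ in range(10000)]
# The empirical success frequencies approximate the individual probabilities
freqs = [sum(d[i] for d in draws) / len(draws) for i in range(len(probs))]
print(freqs)
```

The key departure from the classical binomial model is visible in the code: there is no single success probability, which is what makes the Lucian suitable for the heterogeneous dichotomous data listed above.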
-
PROBABILISTIC-STATISTICAL MODELING THE INTERFERENCES FROM ELECTRIC LOCOMOTIVES
01.00.00 Physical-mathematical sciences
Description: The movement of electric locomotives creates interference affecting wired communication links. The creation of technically effective and at the same time cost-effective means of protection against wire-line interference generated by traction networks presupposes, as a preparatory phase, the development of mathematical models of the interference caused by electric locomotives. We have developed such a probabilistic-statistical model. The asymptotic distribution of the total interference is the distribution of the length of a two-dimensional random vector whose coordinates are independent normally distributed random variables with mean 0 and variance 1. A limit theorem is proved for the expectation of the total amplitude of the interference. The Monte Carlo method is used to study the rate of convergence of the expectation of the total amplitude to its limiting value; we used the mixing algorithm developed by MacLaren and Marsaglia (M-algorithm). Five sets of amplitudes were analyzed, selected in accordance with the recommendations of experts in the field of AC traction networks. The most rapid convergence to the limit takes place in the case of equal amplitudes. It was found that the maximum possible average value of the amplitude of the random noise is 7.4% less than the previously used value, which promises a significant economic effect
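The limiting quantity described above, the expected length of a two-dimensional vector with independent N(0, 1) coordinates, is the mean of a Rayleigh distribution, sqrt(pi/2) ~ 1.2533. A Monte Carlo sketch (using Python's plain generator, not the MacLaren-Marsaglia algorithm the article used):

```python
import math
import random

def mean_total_amplitude(trials=200000, seed=2):
    """Monte Carlo estimate of E||(Z1, Z2)|| for independent N(0, 1)
    coordinates Z1, Z2: the limiting mean of the total interference
    amplitude. The exact value is the Rayleigh mean sqrt(pi / 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += math.hypot(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
    return total / trials

print(mean_total_amplitude(), math.sqrt(math.pi / 2.0))
```

Comparing the estimate against the exact Rayleigh mean is the simplest version of the convergence study the abstract describes for finite sets of amplitudes.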
-
PROBABILITY-STATISTICAL MODELS OF CORRELATION AND REGRESSION
08.00.13 Mathematical and instrumental methods of Economics
Description: The correlation and determination coefficients are widely used in statistical data analysis. According to measurement theory, Pearson's linear paired correlation coefficient is applicable to variables measured on an interval scale; it cannot be used in the analysis of ordinal data. The nonparametric Spearman and Kendall rank coefficients estimate the relationship between ordinal variables. The critical value when testing whether the correlation coefficient differs significantly from 0 depends on the sample size; therefore, using the Chaddock scale is incorrect. In a passive experiment, correlation coefficients are reasonably used for prediction but not for control; to obtain probabilistic-statistical models intended for control, an active experiment is required. The effect of outliers on the Pearson correlation coefficient is very large. With an increase in the number of analyzed sets of predictors, the maximum of the corresponding correlation coefficients (indicators of approximation quality) noticeably increases: the "inflation" effect of the correlation coefficient. Four main regression analysis models are considered. Models of the least squares method with a deterministic independent variable are distinguished: the distribution of the deviations is arbitrary, but to obtain the limit distributions of the parameter estimates and regression dependences we assume that the conditions of the central limit theorem are satisfied. The second type of model is based on a sample of random vectors; the dependence is nonparametric, and the distribution of the two-dimensional vector is arbitrary. The variance of the independent variable, as well as the determination coefficient as a quality criterion of the model, can be discussed only in the model based on a sample of random vectors. Time series smoothing is discussed, and methods of restoring dependences in spaces of a general nature are considered.
It is shown that the limiting distribution of the natural estimate of the model's dimensionality is geometric, and that the construction of an informative subset of features encounters the effect of "inflation" of the correlation coefficient. Various approaches to the regression analysis of interval data are discussed. Analysis of the variety of regression analysis models leads to the conclusion that there is no single "standard model"
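The "inflation" effect of the correlation coefficient can be demonstrated with a small simulation (all data synthetic, sample size and seed assumed): even when every predictor is pure noise, the maximum absolute correlation with the response grows as more predictors are screened.

```python
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(3)
n = 30  # assumed sample size
y = [rng.gauss(0.0, 1.0) for _ in range(n)]
# 100 predictors of pure noise: none is truly related to y
preds = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(100)]
max_abs_corr = {k: max(abs(pearson(p, y)) for p in preds[:k]) for k in (5, 100)}
print(max_abs_corr)  # the maximum grows with the number of screened predictors
```

This is why the best correlation found over many candidate predictor sets overstates approximation quality, and why an informative-subset search must account for the effect.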