#### Name

Orlov Alexander Ivanovich

Professor

#### Research interests

Statistical methods; organizational-economic modeling. Developed a new area of applied statistics: the statistics of objects of non-numerical nature.


## Articles count: 150

• Description
The movement of electric locomotives creates interference that affects wired communication links. Creating protection against the wireline interference generated by traction networks that is both technically effective and cost-effective requires, as a preparatory phase, mathematical models of the interference caused by electric locomotives. We have developed a probabilistic-statistical model of such interference. The asymptotic distribution of the total interference is the distribution of the length of a two-dimensional random vector whose coordinates are independent normally distributed random variables with mean 0 and variance 1 (i.e., the Rayleigh distribution). A limit theorem is proved for the expectation of the total interference amplitude. The Monte Carlo method is used to study the rate of convergence of this expectation to its limiting value; for pseudo-random number generation we used the MacLaren-Marsaglia mixing algorithm (M-algorithm). Five sets of amplitudes were analyzed, selected in accordance with the recommendations of experts in AC traction networks. Convergence to the limit is most rapid when the amplitudes are equal. We found that the maximum possible average amplitude of the random noise is 7.4% less than the previously used value, which promises a significant economic impact.
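The limiting Rayleigh distribution mentioned in the abstract has mean sqrt(pi/2) ≈ 1.2533. A minimal Monte Carlo sketch of the convergence (using Python's standard generator rather than the M-algorithm; the amplitude set and trial count are illustrative, not the experts' values from the paper):

```python
import math
import random

def total_amplitude(amplitudes, rng):
    """One realization of the summed interference amplitude: each source
    contributes its amplitude at an independent uniformly random phase."""
    x = y = 0.0
    for a in amplitudes:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += a * math.cos(phi)
        y += a * math.sin(phi)
    return math.hypot(x, y)

def mean_normalized_amplitude(amplitudes, n_trials=20000, seed=1):
    """Monte Carlo estimate of E[R], with R normalized so that the
    two-dimensional CLT gives standard normal coordinates in the limit."""
    rng = random.Random(seed)
    scale = math.sqrt(sum(a * a for a in amplitudes) / 2.0)
    total = sum(total_amplitude(amplitudes, rng) for _ in range(n_trials))
    return total / (n_trials * scale)

# equal amplitudes: the case with the fastest convergence to the
# Rayleigh mean sqrt(pi/2) ~ 1.2533
print(mean_normalized_amplitude([1.0] * 50))
```

For equal amplitudes the estimate is already close to the Rayleigh mean at 50 sources; repeating with unequal amplitude sets shows slower convergence.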
• Description
The statistics of objects of non-numerical nature (statistics of non-numerical objects, non-numerical data statistics, non-numeric statistics) is the area of mathematical statistics devoted to methods for analyzing non-numeric data. The basis for applying the results of mathematical statistics is probabilistic-statistical models of real phenomena and processes, the most important (and often the only) of which are models of data generation. The simplest example is the model of a sample as a set of independent identically distributed random variables. In this article we consider the basic probabilistic models for obtaining non-numeric data: models of dichotomous data, results of paired comparisons, binary relations, ranks, and objects of general nature. We discuss various versions of these probabilistic models and their practical use. For example, the basic probabilistic model of dichotomous data is the Bernoulli vector (Lucian), i.e. a finite sequence of independent Bernoulli trials for which the probabilities of success may differ.
The mathematical tools for solving statistical problems associated with Bernoulli vectors are useful for the analysis of random tolerances; random sets with independent elements; processing the results of independent pairwise comparisons; statistical methods for analyzing the accuracy and stability of technological processes; the analysis and synthesis of statistical quality-control plans (for dichotomous characteristics); the processing of marketing and sociological questionnaires (with closed "yes"/"no" questions); and the processing of socio-psychological and medical data, in particular responses to psychological tests such as the MMPI (used, among other things, in problems of human resource management) and the analysis of topographic maps (used for the analysis and prediction of areas affected by technological disasters, the spread of corrosion, the propagation of environmentally harmful pollutants, and various diseases, including myocardial infarction).
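A Bernoulli vector (Lucian) can be sketched directly: independent Bernoulli trials with coordinate-wise success probabilities, estimated from repeated observations by the per-coordinate frequencies. The probability values below are hypothetical, chosen only to illustrate that the probabilities may differ across coordinates:

```python
import random

def sample_lucian(probs, rng):
    """One realization of a Bernoulli vector (Lucian): independent
    Bernoulli trials whose success probabilities may differ."""
    return [1 if rng.random() < p else 0 for p in probs]

def estimate_probs(samples):
    """Coordinate-wise success frequencies: the natural estimate of the
    Lucian's parameter vector from repeated independent observations."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

rng = random.Random(42)
probs = [0.1, 0.5, 0.9]  # hypothetical per-question "yes" probabilities
data = [sample_lucian(probs, rng) for _ in range(5000)]
print(estimate_probs(data))
```

The same sampling scheme covers the closed "yes"/"no" questionnaire setting described above: each coordinate is one question, each sample one respondent.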
• Description
The purpose of mathematical statistics is to develop data-analysis methods for solving applied problems. Approaches to developing such methods have changed over time. A hundred years ago it was assumed that the data follow a distribution of a certain type, for example a normal distribution, and statistical theory was developed under that assumption. At the next stage, limit theorems came to the forefront of theoretical studies. By a "small sample" we mean a sample to which conclusions based on limit theorems cannot be applied. In each statistical problem, the finite sample sizes must be divided into two classes: those for which the limit theorems may be applied, and those for which they may not, because of the risk of incorrect conclusions. To solve this problem we have often used the Monte Carlo method. More complex problems arise when studying how various deviations from the original assumptions affect the properties of statistical data-analysis procedures; to study such effects we have also often used the Monte Carlo method. The basic (and not generally solved) problem in studying the stability of conclusions under deviations from parametric families of distributions is the choice of distributions to use in the modeling. We consider several examples of the application of the Monte Carlo method drawn from the work of our research team, and formulate the basic unsolved problems.
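The "two classes of sample sizes" question can be probed by Monte Carlo directly. A minimal sketch (the exponential population and the sample sizes are illustrative assumptions, not the cases studied in the paper): estimate the actual coverage of the nominal 95% normal-approximation confidence interval for the mean, and see at which n it approaches the nominal level.

```python
import math
import random
import statistics

def z_interval_coverage(n, n_rep=4000, seed=7):
    """Monte Carlo estimate of the actual coverage of the nominal 95%
    normal-approximation interval for the mean, for Exp(1) samples of
    size n (true mean 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rep):
        x = [rng.expovariate(1.0) for _ in range(n)]
        m = statistics.fmean(x)
        half = 1.96 * statistics.stdev(x) / math.sqrt(n)
        if m - half <= 1.0 <= m + half:
            hits += 1
    return hits / n_rep

for n in (10, 30, 100):
    print(n, z_interval_coverage(n))
```

For the skewed exponential population the coverage at n = 10 falls noticeably short of 95%, which is exactly the "small sample" regime where limit-theorem conclusions should not be trusted.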
• Description
We consider the nonparametric problem of recovering a dependence described by the sum of a linear trend and a periodic function with a known period. We obtain the asymptotic distribution of the estimates of the parameters and of the trend component, and propose methods for estimating the periodic component and constructing an interval forecast. The model of the observation points, natural for applications, is justified by the conditions of use. In particular, we prove that the estimate of the coefficient of the linear term is asymptotically unbiased.
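One simple way to separate the two components when the period is known (a sketch of the idea, not the paper's estimator): seasonal differences y[t+T] − y[t] cancel the periodic part and equal a·T plus noise, giving the slope; phase-wise means of the detrended series then give the periodic component.

```python
import statistics

def fit_trend_periodic(y, period):
    """Estimate y_t = a*t + s(t mod period) + noise, period known.
    Slope: seasonal differences y[t+period] - y[t] equal a*period plus
    noise, so a is their mean divided by the period.  Periodic part:
    phase-wise means of the detrended series."""
    n = len(y)
    diffs = [y[t + period] - y[t] for t in range(n - period)]
    a = statistics.fmean(diffs) / period
    phases = [[] for _ in range(period)]
    for t, v in enumerate(y):
        phases[t % period].append(v - a * t)
    s = [statistics.fmean(p) for p in phases]
    return a, s

# hypothetical noiseless example: slope 0.5, period-4 component
y = [0.5 * t + [1.0, -1.0, 2.0, 0.0][t % 4] for t in range(40)]
a, s = fit_trend_periodic(y, 4)
print(round(a, 3), [round(v, 3) for v in s])
```

On noiseless data both components are recovered exactly; with noise the slope estimate remains unbiased since the noise enters the seasonal differences with mean zero.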
• Description
Aristotle is the founder of economic theory. The so-called "market economy" is a perversion of Aristotle's views, and these distortions must be eliminated. What can replace the "market economy"? We are developing a new organizational-economic theory, the solidary information economy, based on the views of Aristotle. The name of this theory has changed over time: initially we used the term "non-formal information economy of the future", then "solidary information economy"; in connection with Biocosmology and neo-Aristotelism, the more adequate term is "functionalist organic information economy". This article describes the main provisions of the solidary information economy, intended to replace the market economy as a management tool. We discuss the main problems to whose solution research on this basic organizational-economic theory is devoted, and the positions of Aristotle on which economic theory, in particular the solidary information economy, is based. We argue that the market economy remained in the XIX century, and that the mainstream of modern economic science is the justification of the insolvency of the market economy and of the need to move to a planned system of economic management. We examine the impact of ICT on economic activity and develop approaches to decision-making in the solidary information economy. On the basis of modern decision theory (especially expert procedures) and information-communication technologies, people can free themselves from chrematistics and understand the term "economy" according to Aristotle.
• Description
The higher the level of quality achieved, the larger the required control size: this is the paradox of the classical theory of statistical quality control. A possible way out is to move to a technical policy based on economic characteristics. Shifting control to the consumer may be economically profitable. We consider two variants of such a technical policy: increasing the lot size, and replacing defective product units at the consumer's site.
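The paradox is easy to quantify. For a zero-acceptance single sampling plan, the smallest sample size at which a lot with defect fraction p yields at least one defective unit with probability 0.95 satisfies 1 − (1 − p)^n ≥ 0.95, so n grows roughly as 3/p as quality improves (a standard textbook calculation, not the specific plans analyzed in the paper):

```python
import math

def control_size(p_defect, confidence=0.95):
    """Smallest sample size at which a lot with defect fraction p_defect
    yields at least one defective unit with the given probability
    (zero-acceptance single sampling plan): 1 - (1-p)^n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_defect))

for p in (0.1, 0.01, 0.001):
    print(p, control_size(p))  # 29, 299, 2995: control grows as quality improves
```

A tenfold improvement in quality multiplies the required control size roughly tenfold, which is the economic motivation for the alternative technical policies discussed above.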
• Description
Control charts are proposed as a tool for detecting deviations in the system under control. This proposal is considered for the monitoring of flight safety. We discuss the possibility of using, in airline practice, a new indicator of the flight safety level and a new method for monitoring it: the ERC of the ARMS group is offered as the indicator, and the method of cumulative sums for monthly and weekly monitoring.
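A one-sided cumulative-sum (CUSUM) chart of the kind proposed can be sketched in a few lines. The readings, target, slack, and threshold below are hypothetical illustration values, not airline data or the tuning from the paper:

```python
def cusum_upper(values, target, slack, threshold):
    """One-sided upper CUSUM chart: accumulate exceedances of
    target + slack and signal when the cumulative sum crosses the
    threshold, restarting after each signal."""
    s, alarms = 0.0, []
    for i, x in enumerate(values):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            alarms.append(i)
            s = 0.0  # restart after a signal
    return alarms

# hypothetical monthly risk-index readings; the level shifts up at index 6
readings = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 2.0, 2.2, 1.9, 2.1]
print(cusum_upper(readings, target=1.0, slack=0.2, threshold=1.5))
```

The chart ignores in-control fluctuation around the target but accumulates a sustained upward shift quickly, which is why CUSUM suits the monitoring of slowly degrading safety indicators.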
• Description
It was established that the two-sample Wilcoxon test (Mann-Whitney test) was designed to test the hypothesis H0: P(X
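The Mann-Whitney form of the two-sample Wilcoxon statistic counts the pairs (x_i, y_j) with x_i < y_j (half for ties), so U divided by the number of pairs estimates P(X < Y). A minimal sketch with illustrative data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: number of pairs (x_i, y_j) with
    x_i < y_j, counting ties as one half.  U / (len(x) * len(y))
    estimates P(X < Y)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

x = [1.1, 2.3, 3.0, 4.2]
y = [2.0, 3.5, 5.1]
u = mann_whitney_u(x, y)
print(u, u / (len(x) * len(y)))  # 8.0 and the estimate of P(X < Y)
```

This pairwise-count form makes explicit that the test concerns the probability P(X < Y) rather than, say, equality of medians.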
• Description
In various applications it is necessary to analyze several expert orderings, i.e. clustered rankings of the objects of examination. Such areas include technical studies, ecology, management, economics, sociology, and forecasting. The objects may be product samples, technologies, mathematical models, projects, job applicants, and others. In constructing the final opinion of a commission of experts, it is important to find a clustered ranking that averages the experts' responses. This article describes a number of methods for averaging clustered rankings, among them the calculation of the Kemeny median, based on the Kemeny distance. The article focuses on the computational side of the problem of finding a final ranking among the expert opinions by calculating the Kemeny median. There are currently no exact algorithms, other than exhaustive search, for finding the set of all Kemeny medians for a given set of permutations (rankings without ties); however, there are various approaches to finding some or all of the medians, which are analyzed in this study. Zhikharev's heuristic algorithms are a good tool for studying the set of all Kemeny medians, identifying relations between the mutual locations of the medians and the set of aggregated expert opinions (a set of expert-answer permutations). Litvak offers one exact and one heuristic approach to calculating the median over all possible sets of solutions. The article introduces the necessary concepts, analyzes the advantages of the Kemeny median over other ways of aggregating expert orderings, and identifies the comparative strengths and weaknesses of the computational methods examined.
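For rankings without ties, the Kemeny distance reduces to the number of discordant object pairs, and the exhaustive search mentioned above is straightforward for a small number of objects. A minimal sketch (the three expert rankings are illustrative):

```python
from itertools import combinations, permutations

def kemeny_distance(r1, r2):
    """Number of object pairs ordered differently by the two rankings.
    A ranking is a tuple listing the objects, best first."""
    pos1 = {obj: i for i, obj in enumerate(r1)}
    pos2 = {obj: i for i, obj in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

def kemeny_medians(rankings):
    """Exhaustive search for all Kemeny medians: the rankings minimizing
    the total Kemeny distance to the expert rankings.  Feasible only for
    a small number of objects (m! candidates)."""
    objects = rankings[0]
    best, medians = None, []
    for cand in permutations(objects):
        total = sum(kemeny_distance(cand, r) for r in rankings)
        if best is None or total < best:
            best, medians = total, [cand]
        elif total == best:
            medians.append(cand)
    return best, medians

experts = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(kemeny_medians(experts))  # minimal total distance and all medians
```

Because the search returns the whole set of minimizers, it also exposes the non-uniqueness of the Kemeny median that the heuristic approaches discussed above must contend with.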
• Description
This article briefly reviews the classical concept of functional dependence in mathematics, determines the limitations of this concept for adequate modeling of reality, and formulates the problem of finding a generalization of the concept of function better suited to the adequate reflection of causal relationships in the real domain. It also discusses a theoretical and practical solution of this problem: (a) we suggest a universal, subject-area-independent method of calculating the amount of information contained in the value of an argument about the value of the function, i.e. cognitive functions; (b) we offer a software tool, the Eidos intelligent system, which makes it possible to carry out these calculations in practice, i.e. to build cognitive functions from fragmented, noisy empirical data of high dimension. We also introduce the concepts of non-reduced, partially reduced, and completely reduced direct and inverse, positive and negative cognitive functions, and a method for forming reduced cognitive functions that generalizes the well-known weighted least-squares method by weighting observations with the amount of information in the values of the argument about the values of the function.