University of Texas at El Paso
Computer Science Department
Abstracts of 2015 Reports


Technical Report UTEP-CS-15-99, December 2015
Updated version UTEP-CS-15-99a, February 2016
Final version UTEP-CS-15-99b, May 2016
Why Min-Based Conditioning
Salem Benferhat and Vladik Kreinovich

Published in Proceedings of the 7th International Workshop on Reliable Engineering Computing REC'2016, Bochum, Germany, June 15-17, 2016, pp. 327-334.

In many practical situations, we do not have full information about which alternatives are possible and which are not. In such situations, an expert can estimate, for each alternative, the degree to which this alternative is possible. Sometimes, experts can produce numerical estimates of these degrees, but often, they can only provide us with qualitative estimates: they tell us which degrees are higher, but do not provide numerical values for these degrees. After we obtain these degrees from the experts, we often gain additional information that excludes some alternatives which were previously considered possible. To take this new information into account, we need to appropriately update the corresponding possibility degrees. In this paper, we prove that under several reasonable requirements on such an update procedure, there is only one procedure that satisfies all these requirements -- namely, min-based conditioning.
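
As an illustration (our own, not from the paper, which derives the rule axiomatically rather than through code), here is a minimal Python sketch of min-based conditioning as it is usually stated in possibility theory: after learning that the true alternative lies in a set A, excluded alternatives get degree 0, the most possible alternatives within A are promoted to 1, and the remaining degrees are kept unchanged.

    def min_based_conditioning(pi, A):
        """Min-based (qualitative) conditioning of a possibility distribution.

        pi: dict mapping each alternative to its possibility degree in [0, 1]
        A:  set of alternatives that remain possible after the new information
        """
        max_in_A = max(pi[w] for w in A)   # possibility of the conditioning event
        updated = {}
        for w, degree in pi.items():
            if w not in A:
                updated[w] = 0.0           # excluded alternatives become impossible
            elif degree == max_in_A:
                updated[w] = 1.0           # most possible surviving alternatives
            else:
                updated[w] = degree        # other degrees are kept as they are
        return updated

    # Example: alternatives a, b, c with degrees 0.9, 0.6, 0.3; we learn "not a".
    print(min_based_conditioning({"a": 0.9, "b": 0.6, "c": 0.3}, {"b", "c"}))
    # -> {'a': 0.0, 'b': 1.0, 'c': 0.3}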

Original file UTEP-CS-15-99 in pdf
Updated version UTEP-CS-15-99a in pdf
Final version UTEP-CS-15-99b in pdf


Technical Report UTEP-CS-15-98, December 2015
Toward Unification of Explicit and Implicit Invocation-Style Programming
Yoonsik Cheon

Subprograms like procedures and methods can be invoked explicitly or implicitly; in implicit invocation, an event implicitly causes the invocation of the subprograms that have registered an interest in that event. Mixing these two styles is common in programming and often unavoidable when developing software such as GUI applications and event-based control systems. However, it is also not uncommon for the mixed use to complicate programming logic and thus produce unclean code, code that is hard to read and understand. We show, through a small but realistic example, that the problem lies not so much in mixing the two styles as in using them in an unconstrained manner. We propose a few principles, or guidelines, for unifying the two styles harmoniously, and we describe a simple proof-of-concept framework for converting one style to the other for the unification. Our work enables one to blend the two invocation styles harmoniously, in a properly constrained fashion, to produce clean code.
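
For readers unfamiliar with the terminology, the following small Python sketch (ours, not the paper's framework) contrasts the two styles: an explicit call names the subprogram directly, while in implicit invocation the caller only announces an event, and whichever subprograms registered an interest in that event get invoked.

    # Explicit invocation: the caller names the subprogram directly.
    def log_saved(doc):
        print("saved:", doc)

    def save_explicit(doc):
        # ... write doc to disk ...
        log_saved(doc)            # explicit call

    # Implicit invocation: the caller only announces an event.
    handlers = {}                 # event name -> list of registered subprograms

    def register(event, handler):
        handlers.setdefault(event, []).append(handler)

    def announce(event, *args):
        for h in handlers.get(event, []):
            h(*args)              # implicit call of every registered handler

    register("saved", log_saved)

    def save_implicit(doc):
        # ... write doc to disk ...
        announce("saved", doc)    # the caller does not know who reacts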

File in pdf


Technical Report UTEP-CS-15-97, December 2015
Updated version UTEP-CS-15-97a, April 2016
Fuzzy Techniques Provide a Theoretical Explanation for the Heuristic l^p-Regularization of Signals and Images
Fernando Cervantes, Bryan Usevitch, Leobardo Valera, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the 2016 IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2016, Vancouver, Canada, July 24-29, 2016.

One of the main techniques used to de-noise and de-blur signals and images is regularization, which is based on the fact that signals and images are usually smoother than noise. Traditional Tikhonov regularization assumes that signals and images are differentiable, but, as Mandelbrot has shown in his fractal theory, many signals and images are not differentiable. To de-noise and de-blur such images, researchers have designed a heuristic method of l^p-regularization.

l^p-regularization leads to good results, but it is not used as widely as it should be, because it lacks a convincing theoretical explanation -- and thus, practitioners are often reluctant to use it, especially in critical situations. In this paper, we show that fuzzy techniques provide a theoretical explanation for l^p-regularization.

Fuzzy techniques also enable us to come up with natural next approximations to be used when the accuracy of the l^p-based de-noising and de-blurring is not sufficient.
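
To make the terminology concrete, one typical discrete form of such regularized de-noising (our notation, not necessarily the paper's) is

    \min_x \; \sum_i (x_i - y_i)^2 \;+\; \lambda \sum_i |x_{i+1} - x_i|^p,

where y is the observed noisy signal: p = 2 corresponds to classical Tikhonov-type smoothing, while the heuristic l^p-regularization takes p < 2 (often p <= 1), which tolerates the sharp jumps found in non-differentiable (fractal-like) signals.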

Original file UTEP-CS-15-97 in pdf
Updated version UTEP-CS-15-97a in pdf


Technical Report UTEP-CS-15-96, December 2015
Updated version UTEP-CS-15-96a, April 2016
Membership Functions Representing a Number vs. Representing a Set: Proof of Unique Reconstruction
Hung T. Nguyen, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the 2016 IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2016, Vancouver, Canada, July 24-29, 2016.

In some cases, a membership function m(x) represents an unknown number, but in many other cases, it represents an unknown crisp set. In the latter case, for each crisp set S, we can estimate the degree m(S) to which this set S is the desired one. A natural question is: once we know the values m(S) corresponding to all possible crisp sets S, can we reconstruct the original membership function? In this paper, we show that the original membership function m(x) can indeed be uniquely reconstructed from the values m(S).

Original file UTEP-CS-15-96 in pdf
Updated version UTEP-CS-15-96a in pdf


Technical Report UTEP-CS-15-95, December 2015
From Tertullian's Credo Quia Absurdum to Bohr's Crazy Theories: A Rational Explanation of a Seemingly Irrational Idea
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 122-124.

At first glance, Tertullian's idea -- that absurdity of a statement makes it more believable -- sounds irrational, maybe appropriate for theology but definitely not for science. However, somewhat surprisingly, a similar idea was successfully used by the Nobelist Niels Bohr in theoretical physics -- an epitome of rationality in science. In this paper, we show that this Tertullian-Bohr idea actually has a simple rational explanation. Specifically, if previous attempts to construct a theory which is consistent with what is perceived as common sense were unsuccessful, this implies that a true theory must contradict common sense -- and thus, the fact that a given theory contradicts common sense increases the probability of its correctness.

File in pdf


Technical Report UTEP-CS-15-94, December 2015
How to Predict Nesting Sites?
Stephen Escarzaga, Craig Tweedie, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 119-121.

How can we predict nesting sites? Usually, all we know are the past nesting sites and the fact that the birds select a site which is optimal for them (in some reasonable sense), but we do not know the exact objective function describing this optimality. In this paper, we propose a way to make predictions in such a situation.

File in pdf


Technical Report UTEP-CS-15-93, December 2015
Towards Selecting the Best Abstraction for a Patrolling Game
Anjon Basak, Christopher Kiekintveld, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 104-118.

When the number of possible strategies is large, it is not computationally feasible to compute the optimal strategy for the original game. Instead, we select our strategy based on an approximate description of the original game. The quality of the resulting strategy depends on which approximation we select. In this paper, using the example of a simple game, we show how to find the optimal approximation, i.e., the approximation whose use results in the best strategy.

File in pdf


Technical Report UTEP-CS-15-92, December 2015
Simplest Innovations Are, Empirically, the Most Promising: An Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 38, pp. 132-134.

Many examples show that the simplest innovations are the most promising. In this paper, we provide a theoretical explanation for this empirical observation.

File in pdf


Technical Report UTEP-CS-15-91, December 2015
Why It Is Healthy to Regularly Challenge Authority: An Algorithmic Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 38, pp. 129-131.

One way to make group decisions is to select the best decision maker(s) in the group as the authority, and to follow his or her decisions. At first glance, it seems that if the selected authority is indeed the best decision maker, it is beneficial for everyone to obey his or her authority. However, history shows that in many cases, challenges to the authority (even to the authority of the best decision maker) were beneficial to the group. In this paper, we provide an algorithmic explanation for this phenomenon. The main idea behind this explanation is that most practical general problems are NP-hard and thus, no feasible algorithm can solve all instances of such a problem. Thus, even for the best decision maker, who uses the best of the currently available feasible algorithms, there inevitably are cases when the resulting decision is not the best, and can thus be improved.

File in pdf


Technical Report UTEP-CS-15-90, December 2015
A Systematic Derivation of Loop Specifications Using Patterns
Aditi Barua and Yoonsik Cheon

Any non-trivial program contains loop control structures such as while, for and do statements. A formal correctness proof of code containing loop control structures is typically performed using an induction-based technique, and oftentimes the most challenging step of an inductive proof is formulating a correct induction hypothesis. An incorrectly-formulated induction hypothesis will surely lead to a failure of the proof. In this paper we propose a systematic approach for formulating and deriving specifications of loop control structures for formal analysis and verification of programs. We explain our approach using while loops and a functional program verification technique in which a program is viewed as a mathematical function from one program state to another. The most common use of loop control structures is to iterate over a certain sequence of values and manipulate it, one value at a time. Many loops exhibit certain common flavors or patterns, and similarly-structured loops have similarly-structured specifications. Our approach is to categorize and document the common flavors or usage patterns of loop control structures as reusable specification patterns. One key idea of our pattern specification is to promote manipulation of individual values to the whole sequence iterated over by a loop. Our patterns are compositional and can be organized into a pattern hierarchy. A catalog of loop specification patterns can be a good resource for systematically formulating and deriving specifications of loops. Indeed, our case study indicates that our patterns are applicable to a wide range of programs from systems programming to scientific and business applications.
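
As a small illustration of promoting per-element manipulation to the whole iterated sequence (our Python rendering, not the paper's notation), consider a summation loop whose intended function is stated over the entire remaining sequence rather than over a single step:

    def sum_from(xs, i, total):
        """Intended function (specification), stated over the whole sequence:
        the loop returns total + sum(xs[i:]) -- not a claim about one iteration."""
        while i < len(xs):
            total += xs[i]   # per-step manipulation of a single value
            i += 1
        return total

    # The specification "total + sum(xs[i:])" is exactly the induction hypothesis
    # needed for an inductive correctness proof; similarly-structured loops
    # (products, maxima, counts, ...) get similarly-structured specifications.
    assert sum_from([3, 1, 4, 1, 5], 2, 10) == 10 + sum([4, 1, 5])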

File in pdf


Technical Report UTEP-CS-15-89, December 2015
Updated version UTEP-CS-15-89a, May 2016
Interval Methods for Data Fitting under Uncertainty: A Probabilistic Treatment
Vladik Kreinovich and Sergey Shary

Published in Reliable Computing, 2016, Vol. 23, pp. 105-141.

How to estimate parameters from observations subject to errors and uncertainty? Very often, the measurement errors are random quantities that can be adequately described by the probability theory. When we know that the measurement errors are normally distributed with zero mean, then the (asymptotically optimal) Maximum Likelihood Method leads to the popular least squares estimates. In many situations, however, we do not know the shape of the error distribution, we only know that the measurement errors are located on a certain interval. Then the maximum entropy approach leads to a uniform distribution on this interval, and the Maximum Likelihood Method results in the so-called minimax estimates. We analyse specificity and drawbacks of the minimax estimation under essential interval uncertainty in data and discuss possible ways to solve the difficulties. Finally, we show that, for the linear functional dependency, the minimax estimates motivated by the Maximum Likelihood Method coincide with those produced by the Maximum Consistency Method that originate from interval analysis.
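
A sketch of one standard argument behind the last point (in our notation): if the measurement errors are independent and uniformly distributed on [-Delta, Delta], the likelihood of the parameters theta is

    L(\theta, \Delta) \;=\; \prod_{i=1}^n \frac{1}{2\Delta}\,\mathbf{1}\{|y_i - f(x_i, \theta)| \le \Delta\}
                      \;=\; (2\Delta)^{-n} \quad \text{whenever} \quad \max_i |y_i - f(x_i, \theta)| \le \Delta,

so the likelihood is larger for smaller admissible Delta, and maximizing it over both theta and Delta singles out the minimax (Chebyshev-type) estimate \theta = \arg\min_\theta \max_i |y_i - f(x_i, \theta)| discussed in the paper.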

Original file UTEP-CS-15-89 in pdf
Updated version UTEP-CS-15-89a in pdf


Technical Report UTEP-CS-15-88, November 2015
Science Is Helpful for Engineering Applications: A Theoretical Explanation of an Empirical Observation
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 45-50.

Empirical evidence shows that when an engineering design uses scientific analysis, we usually get much better performance than for a system designed using a trial-and-error engineering approach. In this paper, we provide a quantitative explanation for this empirical observation.

File in pdf


Technical Report UTEP-CS-15-87, November 2015
Why the Range of a Robust Statistic Under Interval Uncertainty Is Often Easier to Compute
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 37-43.

In statistical analysis, we usually use the observed sample values x1, ..., xn to compute the values of several statistics v(x1, ..., xn) -- such as sample mean, sample variance, etc. The usual formulas for these statistics implicitly assume that we know the exact values x1, ..., xn. In practice, the sample values X1, ..., Xn come from measurements and are, thus, only approximations to the actual (unknown) values x1, ..., xn of the corresponding quantity. Often, the only information that we have about each measurement error Δ xi = Xi − xi is the upper bound Δi on the measurement error: |Δ xi| ≤ Δi. In this case, the only information about each actual value xi is that it belongs to the interval [Xi − Δi, Xi + Δi]. It is therefore desirable to compute the range of each given statistic v(x1, ..., xn) over these intervals. It is known that often, estimating the range of a robust statistic (e.g., median) is computationally easier than estimating the range of its traditional equivalent (e.g., mean). In this paper, we provide a qualitative explanation for this phenomenon.
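
For statistics that are non-decreasing in each sample value, the interval range is attained at the interval endpoints; the following minimal Python sketch (ours, for illustration only) computes the exact ranges of the sample mean and the sample median from the bounds Xi - Δi and Xi + Δi:

    import numpy as np

    def stat_range(X, Delta, statistic):
        """Range of a statistic that is non-decreasing in each sample value
        (e.g., mean or median) when each x_i lies in [X_i - Delta_i, X_i + Delta_i]."""
        lower = np.array(X) - np.array(Delta)
        upper = np.array(X) + np.array(Delta)
        return statistic(lower), statistic(upper)

    X, Delta = [1.0, 2.0, 3.5, 2.5], [0.1, 0.2, 0.1, 0.3]
    print(stat_range(X, Delta, np.mean))    # range of the sample mean
    print(stat_range(X, Delta, np.median))  # range of the sample median
    # For non-monotonic statistics such as the sample variance, no such endpoint
    # shortcut exists, and computing the exact range can become NP-hard.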

File in pdf


Technical Report UTEP-CS-15-86, November 2015
Waning Influence of History: Why?
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 38, pp. 126-128.

In the past, history played an important role in education: students learned history of science, history of mathematics, etc. In the last decades, the influence of history has waned. In this paper, we provide a natural explanation for this waning.

File in pdf


Technical Report UTEP-CS-15-85, November 2015
Positive Consequences of Negative Attitude: Game-Theoretic Analysis
Mahdokht Afravi and Vladik Kreinovich

Published in International Journal of Contemporary Mathematical Sciences, 2016, Vol. 11, No. 3, pp. 113-118.

At first glance, the world would be a better place if all the people had positive attitude towards each other. It is known that this is not always the case: sometimes, the excess of positive attitude can lead to negative consequences. In this paper, we show that, vice versa, a reasonable amount of negative attitude can make life better for everyone. What is therefore needed is not the exclusive appearance of positive attitude, but rather a balance -- tilted towards a moderately positive attitude.

File in pdf


Technical Report UTEP-CS-15-84, November 2015
Maximum Entropy Approach Is Not As Arbitrary As It May Seem at First Glance
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 1-7.

When we only have partial information about the probability distribution, i.e., when several different probability distributions are consistent with our knowledge, then it makes sense to select a distribution with the largest entropy. In particular, when we only know that the quantity is located within a certain interval -- and we have no information about the probability of different values within this interval -- then it is reasonable to assume that all these values are equally probable, i.e., that we have a uniform distribution on this interval. The problem with this idea is that if we apply it to the same quantity after a non-linear rescaling, we get a different (non-uniform) distribution in the original scale. In other words, it seems that the results of applying the Maximum Entropy approach are rather arbitrary: they depend on the exact scale to which we apply them. In this paper, we show how to overcome this subjectivity: namely, we propose to take into account that, due to measurement inaccuracy, we always have finitely many possible measurement results, and this finiteness makes the results of applying the Maximum Entropy approach uniquely determined.
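
A short worked example of the rescaling effect described above (ours): if X is uniform on [0, 1] and we rescale to Y = X^2, then by the change-of-variables formula

    f_Y(y) \;=\; f_X(\sqrt{y}) \cdot \frac{d\sqrt{y}}{dy} \;=\; \frac{1}{2\sqrt{y}}, \qquad 0 < y \le 1,

which is not uniform -- whereas applying the Maximum Entropy approach directly in the Y-scale would give the uniform density f_Y(y) = 1 on [0, 1], so the two scales lead to different answers.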

File in pdf


Technical Report UTEP-CS-15-83, November 2015
Explaining Boris Pasternak's Observation that Complex Ideas Are Sometimes Easier to Understand
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 9-12.

Probably the most cited lines from the poetry of the Nobel-prize winning Russian writer Boris Pasternak contain the observation that complex ideas are sometimes easier to understand than simpler ones. This is not just a paradoxical poetic statement: many teachers have observed the same seemingly counter-intuitive phenomenon. In this paper, we provide a possible explanation for this phenomenon, by showing that indeed, many easier-to-describe mathematical models lead to more-difficult-to-solve mathematical problems.

File in pdf


Technical Report UTEP-CS-15-82, November 2015
From Equations to Tri-quations and Multi-quations
Mourat Tchoshanov, Olga Kosheleva, and Vladik Kreinovich

Published in International Journal of Contemporary Mathematical Sciences, 2016, Vol. 11, No. 3, pp. 105-111.

In general, an equation A(x1, ..., xn) = B(x1, ..., xn) corresponds to the situation when we have two quantities A(x1, ..., xn) and B(x1, ..., xn) which are known to be equal, we know how each of these quantities depends on the unknown parameters x1, ..., xn, and we want to find the values of the unknowns xi from this equality -- and from other similar equalities. In some practical situations, instead of two equal values, we have three (or more) quantities which are all equal to each other. In this paper, we explain how to find the unknowns xi in such situations, i.e., how to solve the corresponding "tri-quations" and "multi-quations".
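
For instance (a toy example of ours, not from the paper), a tri-quation in two unknowns can be handled by splitting it into a system of ordinary equations:

    x + y \;=\; 2x - y \;=\; 3 \quad \Longleftrightarrow \quad \{\, x + y = 3, \;\; 2x - y = 3 \,\} \quad \Longrightarrow \quad x = 2,\; y = 1.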

File in pdf


Technical Report UTEP-CS-15-81, November 2015
Why the Graph Isomorphism Problem Is Easier Than Propositional Satisfiability: A Possible Qualitative Explanation
Vladik Kreinovich and Olga Kosheleva

Published in International Journal of Contemporary Mathematical Sciences, 2016, Vol. 11, No. 3, pp. 97-103.

A recent result has shown that the graph isomorphism problem can be solved in quasi-polynomial time, while the general belief is that only exponential time algorithms are possible for propositional satisfiability. This is somewhat counter-intuitive, since for propositional satisfiability, we need to look for one of 2^n options, while in graph isomorphism, we need to look for one of n! options, and n! is much larger than 2^n. Our qualitative explanation for this counter-intuitive fact comes from the fact that, in general, a graph isomorphism problem has a unique solution -- in contrast to propositional satisfiability which, in general, has many solutions -- and it is known that problems with unique solutions are often easier to solve.

File in pdf


Technical Report UTEP-CS-15-80, November 2015
Updated version UTEP-CS-15-80a, November 2015
Decision Making under Interval (and More General) Uncertainty: Monetary vs. Utility Approaches
Vladik Kreinovich

Published in Journal of Computational Technologies, 2017, Vol. 22, No. 2, pp. 37-49.

In many situations, e.g., in financial and economic decision making, the decision results either in a money gain (or loss) and/or in the gain of goods that can be exchanged for money or for other goods. In such situations, interval uncertainty means that we do not know the exact amount of money that we will get for each possible decision, we only know lower and upper bounds on this amount. In this case, a natural idea is to assign a fair price to different alternatives, and then to use these fair prices to select the best alternative. In the talk, we show how to assign a fair price under interval uncertainty. We also explain how to assign a fair price in the case of more general types of uncertainty such as p-boxes (bounds on cumulative distribution function), twin intervals (when we only know approximate bounds), fuzzy values (when we have imprecise expert estimates of the gains), etc.

In other situations, e.g., when buying a house to live in or selecting a movie to watch, the result of the decision is the decision maker's own satisfaction. In such situations, a more adequate approach is to use utilities -- a quantitative way of describing a user's preferences. In this talk, after a brief introduction describing what utilities are, how to evaluate them, and how to make decisions based on utilities, we explain how to make decisions in situations with user uncertainty -- a realistic situation when a decision maker cannot always decide which alternative is better for him or her.
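
In the authors' related work on fair prices, the interval case typically reduces to a Hurwicz-type combination of the two bounds; the Python sketch below assumes that formula (with a hypothetical optimism parameter alpha_h) purely for illustration -- see the report itself for the actual derivation:

    def fair_price_interval(lower, upper, alpha_h):
        """Hurwicz-type fair price of an uncertain gain known only to lie in
        [lower, upper]; alpha_h in [0, 1] describes the decision maker's optimism.
        (Assumed form, for illustration only.)"""
        return alpha_h * upper + (1.0 - alpha_h) * lower

    print(fair_price_interval(100.0, 160.0, 0.3))  # -> 118.0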

Original file UTEP-CS-15-80 in pdf
Updated version UTEP-CS-15-80a in pdf


Technical Report UTEP-CS-15-79, November 2015
On the Importance of Duality and Multi-Ality In Mathematics Education
Mourat Tchoshanov, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 5th International Conference "Mathematics Education: Theory and Practice" MATHEDU'2015, Kazan, Russia, November 27-28, 2015, pp. 8-13.

For each mathematical object, there are usually several different equivalent representations: for example, a spatial object can be represented either in geometric terms, or by a function that describes its shape. The need for several representations comes from the fact that each of these representations is useful in solving some problems for which the use of other representations is less helpful. Thus, the more representations a student knows, the more capable this student is of solving mathematical problems. In this paper, we propose a general formal description of the corresponding notion of duality (and, more generally, "multi-ality"), and we explain the importance of duality and multi-ality in mathematics education.

File in pdf


Technical Report UTEP-CS-15-78, October 2015
Towards Making Theory of Computation Course More Understandable and Relevant: Recursive Functions, For-Loops, and While-Loops
Vladik Kreinovich and Olga Kosheleva

Published in Proceedings of the 5th International Conference "Mathematics Education: Theory and Practice" MATHEDU'2015, Kazan, Russia, November 27-28, 2015, pp. 17-19.

In this paper, we show how we can make a theory of computation course more understandable and more relevant: namely, we show that a seemingly abstract notion of primitive recursion is a direct counterpart to for-loops, while the mu-recursion is an analog of while-loops.
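
The correspondence described above can be made concrete in a few lines of Python (our illustration): the primitive recursion scheme h(x, 0) = f(x), h(x, n+1) = g(x, n, h(x, n)) is exactly a bounded for-loop, while the mu-operator "smallest n with p(n) = 0" is exactly an unbounded while-loop.

    def prim_rec(f, g, x, n):
        # h(x, 0) = f(x);  h(x, k+1) = g(x, k, h(x, k))  -- a bounded for-loop
        result = f(x)
        for k in range(n):
            result = g(x, k, result)
        return result

    def mu(p):
        # smallest n with p(n) == 0 -- an unbounded while-loop (may not terminate)
        n = 0
        while p(n) != 0:
            n += 1
        return n

    # Example: addition add(x, n) = x + n defined by primitive recursion.
    print(prim_rec(lambda x: x, lambda x, k, r: r + 1, 5, 3))  # -> 8
    print(mu(lambda n: 10 - n * n if n * n < 10 else 0))       # -> 4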

File in pdf


Technical Report UTEP-CS-15-77, October 2015
How to Explain The Empirical Success of Generalized Trigonometric Functions in Processing Discontinuous Signals
Pedro Barragan Olague and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 37, pp. 25-29.

Trigonometric functions form the basis of Fourier analysis - one of the main signal processing tools. However, while they are very efficient in describing smooth signals, they do not work well for signals that contain discontinuities - such as signals describing phase transitions, earthquakes, etc. It turns out that empirically, one of the most efficient ways of describing and processing such signals is to use a certain generalization of trigonometric functions. In this paper, we provide a theoretical explanation of why this particular generalization is the most empirically efficient one.

File in pdf


Technical Report UTEP-CS-15-76, October 2015
How to Compute von Neumann-Morgenstern Solutions
Martha Osegueda Escobar and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 39, pp. 68-73.

File in pdf


Technical Report UTEP-CS-15-75, October 2015
How to Make Sure That Everyone Works Towards a Common Goal: Towards Optimal Incentives
Christian Servin and Vladik Kreinovich

Published in Proceedings of the 3rd International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 12, 2015, pp. 78-80.

File in pdf


Technical Report UTEP-CS-15-74, October 2015
How to Modify Data Processing Algorithms So That They Detect Only Dependencies Which Make Sense to Domain Experts
Geovany Ramirez, Craig Tweedie, Jason Carlsson, and Vladik Kreinovich

Published in Proceedings of the 3rd International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 12, 2015, pp. 137-139.

File in pdf


Technical Report UTEP-CS-15-73, October 2015
A Possible Utility-Based Explanation of Deaton's Paradox (and Habits of Mind)
Hung T. Nguyen, Songsak Sriboonchitta, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 3rd International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 12, 2015, pp. 15-17.

File in pdf


Technical Report UTEP-CS-15-72, October 2015
How to Divide a Territory: an Argument in Favor of Private Property
Mahdokht Afravi and Vladik Kreinovich

Published in Proceedings of the 3rd International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 12, 2015, pp. 20-22.

File in pdf


Technical Report UTEP-CS-15-71, October 2015
How to Take Into Account Student's Degree of Confidence When Grading Exams
Olga Kosheleva, Joe Lorkowski, Viannette Felix, and Vladik Kreinovich

Published in Proceedings of the 5th International Conference "Mathematics Education: Theory and Practice" MATHEDU'2015, Kazan, Russia, November 27-28, 2015, pp. 29-30.

When grading exams, it is important to take into account how confident the student is in the answer. If the answer is correct, then it is better -- and thus deserves a better grade -- if the student is absolutely confident in this correct answer. On the other hand, if the answer is wrong, then the more confident the student is in this wrong answer, the worse. The grading scheme should be designed so that it provides an incentive for the students to report their true degree of confidence. In this paper, we explain how to design such a grading scheme.
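
One standard way to obtain such an incentive (not necessarily the scheme derived in the paper) is a proper scoring rule, e.g., the quadratic (Brier-type) rule sketched below in Python, whose expected value is maximized exactly when the reported confidence equals the student's true probability of being correct:

    def quadratic_score(reported_confidence, answer_is_correct):
        """Brier-type score for an answer reported with confidence c in [0, 1].
        If the student's true probability of being correct is p, the expected
        score p*(1-(c-1)**2) + (1-p)*(1-c**2) is maximized at c = p, so honest
        reporting is the student's best strategy."""
        c = reported_confidence
        outcome = 1.0 if answer_is_correct else 0.0
        return 1.0 - (c - outcome) ** 2

    print(quadratic_score(0.9, True))   # confident and right: score close to 1
    print(quadratic_score(0.9, False))  # confident and wrong: heavily penalized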

File in pdf


Technical Report UTEP-CS-15-70, October 2015
Oscillating Exam Averages and Their Control-Theory Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the 5th International Conference "Mathematics Education: Theory and Practice" MATHEDU'2015, Kazan, Russia, November 27-28, 2015, pp. 14-16.

When a student misses one of the exams, his or her overall grade for the class is often interpolated based on the available grades. This would be a fair procedure if the grades for different tests were identically distributed. In practice, however, the average grades for different tests often oscillate. As a result, the usual interpolation techniques may inadvertently bias the student's grade for the class. In this paper, we explain this oscillation, and analyze how to avoid the corresponding bias.

File in pdf


Technical Report UTEP-CS-15-69, September 2015
Conditional Dimension in Metric Spaces: A Natural Metric-Space Counterpart of Kolmogorov-Complexity-Based Mutual Dimension
Vladik Kreinovich, Luc Longpre, and Olga Kosheleva

Published in Mathematical Structures and Modeling, 2016, Vol. 37, pp. 18-24.

It is known that dimension of a set in a metric space can be characterized in information-related terms -- in particular, in terms of Kolmogorov complexity of different points from this set. The notion of Kolmogorov complexity K(x) -- the shortest length of a program that generates a sequence x -- can be naturally generalized to conditional Kolmogorov complexity K(x:y) -- the shortest length of a program that generates x by using y as an input. It is therefore reasonable to use conditional Kolmogorov complexity to formulate a conditional analogue of dimension. Such a generalization has indeed been proposed, under the name of mutual dimension. However, somewhat surprisingly, this notion was formulated in pure Kolmogorov-complexity terms, without any analysis of possible metric-space meaning. In this paper, we describe the corresponding metric-space notion of conditional dimension -- a natural metric-space counterpart of the Kolmogorov-complexity-based mutual dimension.

File in pdf


Technical Report UTEP-CS-15-68, September 2015
Constructive Mathematics Is Seemingly Simple but There Are Still Open Problems: Kreisel's Observation Explained
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2015, Vol. 2, No. 1, pp. 51-56.

In his correspondence with Grigory Mints, the famous logician Georg Kreisel noticed that many results of constructive mathematics seem easier-to-prove than the corresponding classical (non-constructive) results -- although he noted that these results are still far from being simple and the corresponding open problems are challenging. In this paper, we provide a possible explanation for this empirical observation.

File in pdf


Technical Report UTEP-CS-15-67, September 2015
Occam's Razor Explains Matthew Effect
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2015, Vol. 2, No. 1, pp. 47-50.

Sociologists of science noticed that the results of many collaborative projects and discoveries are often attributed only to their most famous collaborators, even when the contributions of these famous collaborators were minimal. This phenomenon is known as the Matthew effect, after a famous citation from the Gospel of Matthew. In this article, we show that Occam's razor provides a possible explanation for the Matthew effect.

File in pdf


Technical Report UTEP-CS-15-66, September 2015
Updated version UTEP-CS-15-66a, December 2015
Combining Interval and Probabilistic Uncertainty: What Is Computable?
Vladik Kreinovich, Andrzej Pownuk, and Olga Kosheleva

Published in: Panos Pardalos, Anatoly Zhigljavsky, and Julius Zilinskas (eds.), Advances in Stochastic and Deterministic Global Optimization, Springer Verlag, Switzerland, 2016, pp. 13-32.

In many practical problems, we need to process measurement results. For example, we need such data processing to predict future values of physical quantities. In these computations, it is important to take into account that measurement results are never absolutely exact, that there is always measurement uncertainty, because of which the measurement results are, in general, somewhat different from the actual (unknown) values of the corresponding quantities. In some cases, all we know about measurement uncertainty is an upper bound; in this case, we have an interval uncertainty, meaning that all we know about the actual value is that it belongs to a certain interval. In other cases, we have some information -- usually partial -- about the corresponding probability distribution. New data processing challenges appear all the time; in many of these cases, it is important to come up with appropriate algorithms for taking uncertainty into account.

Before we concentrate our efforts on designing such algorithms, it is important to make sure that such an algorithm is possible in the first place, i.e., that the corresponding problem is algorithmically computable. In this paper, we analyze the computability of such uncertainty-related problems. It turns out that in a naive (straightforward) formulation, many such problems are not computable, but they become computable if we reformulate them in appropriate practice-related terms.

Original file UTEP-CS-15-66 in pdf
Updated version UTEP-CS-15-66a in pdf


Technical Report UTEP-CS-15-65, September 2015
Paradox of Choice: A Possible Explanation
Vladik Kreinovich and Olga Kosheleva

Published in Mathematical Structures and Modeling, 2015, Vol. 36, pp. 49-52.

At first glance, we would expect that the more choices we have, the happier we will be. Experiments show, however, that when the number of choices increases, customers become less happy. In this paper, we provide a possible explanation for this paradox.

File in pdf


Technical Report UTEP-CS-15-64, August 2015
Updated version UTEP-CS-15-64a, December 2015
Solving Equations (and Systems of Equations) Under Uncertainty: How Different Practical Problems Lead to Different Mathematical and Computational Formulations
Vladik Kreinovich

Published in Granular Computing, 2016, Vol. 1, No. 3, pp. 171-179.

Many practical problems are naturally reduced to solving systems of equations. There are many efficient techniques for solving well-defined systems of equations, i.e., systems in which we know the exact values of all the parameters and coefficients. In practice, we usually know these parameters and coefficients with some uncertainty -- uncertainty usually described by an appropriate granule: interval, fuzzy set, rough set, etc. Many techniques have been developed for solving systems of equations under such granular uncertainty. Sometimes, however, practitioners use previously successful techniques and get inadequate results. In this -- mostly pedagogical -- paper, we explain that to obtain an adequate solution, we need to take into account not only the system of equations and the granules describing uncertainty: we also need to take into account the original practical problem -- and for different practical problems, we get different solutions to the same system of equations with the same granules. This need is illustrated mainly on the example of interval uncertainty, the simplest type of uncertainty.
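
A classical illustration of this point (standard in interval analysis, added here for concreteness) is an interval linear system Ax = b with an interval matrix [A] and an interval right-hand side [b]: the same data admit several practically motivated solution sets, e.g.

    \Xi_{\rm uni} = \{x : \exists A \in [A]\; \exists b \in [b]\; (Ax = b)\}, \qquad
    \Xi_{\rm tol} = \{x : \forall A \in [A]\; \exists b \in [b]\; (Ax = b)\},

the united solution set (some admissible data are matched) and the tolerable solution set (the output stays admissible for all admissible data); which of them is appropriate depends on the underlying practical problem.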

Original file UTEP-CS-15-64 in pdf
Updated version UTEP-CS-15-64a in pdf


Technical Report UTEP-CS-15-63, August 2015
Invariance Explains Multiplicative and Exponential Skedactic Functions
Vladik Kreinovich, Olga Kosheleva, Hung T. Nguyen, and Songsak Sriboonchitta

Published in: Van Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Causal Inference in Econometrics, Springer Verlag, Cham, Switzerland, 2016, pp. 119-131.

In many situations, we have an (approximately) linear dependence between several quantities y = f(x1, ..., xn). The variance v of the corresponding approximation error often depends on the values of the quantities x1, ..., xn: v = v(x1, ..., xn); the function describing this dependence is known as the skedactic function. Empirically, two classes of skedactic functions are the most successful: multiplicative functions v = c * |x1|^{γ1} * ... * |xn|^{γn} and exponential functions v = exp(α + γ1 * x1 + ... + γn * xn). In this paper, we use natural invariance ideas to provide a possible theoretical explanation for this empirical success; we explain why in some situations multiplicative skedactic functions work better, and in others, exponential ones. We also come up with a general class of invariant skedactic functions that includes both multiplicative and exponential functions as particular cases.
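
As a practical aside (ours, not from the paper), the exponents of a multiplicative skedactic function can be estimated by ordinary linear regression after taking logarithms, since log v = log c + γ1 * log|x1| + ... + γn * log|xn|; a minimal Python sketch:

    import numpy as np

    def fit_multiplicative_skedastic(X, residuals):
        """Estimate the exponents gamma_i of v = c * |x1|^g1 * ... * |xn|^gn
        by regressing log(residual^2) on log|x_i|.  (The intercept estimates
        log c only up to an additive constant coming from E[log e^2].)"""
        logs = np.log(np.abs(X))                        # n_samples x n_features
        design = np.column_stack([np.ones(len(X)), logs])
        target = np.log(residuals ** 2)
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return coef[0], coef[1:]                        # (intercept, gammas)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.5, 3.0, size=(500, 2))
    true_sigma = np.sqrt(0.5 * np.abs(X[:, 0]) ** 2.0 * np.abs(X[:, 1]) ** 1.0)
    residuals = rng.normal(0.0, true_sigma)
    print(fit_multiplicative_skedastic(X, residuals))   # gammas roughly (2, 1)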

File in pdf


Technical Report UTEP-CS-15-62, July 2015
Gazelle Companies: What Is So Special About the 20% Threshold?
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2015, Vol. 2, No. 1, pp. 43-46.

In business analysis, a special emphasis is placed on "gazelles", companies that grow by at least 20% per year for several years (usually four). While this 20% threshold is somewhat supported by empirical research, from the theoretical viewpoint, it is not clear what is so special about this value. In this paper, we provide a possible explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-15-61, July 2015
Why Political Scientists Are Wrong 15% of the Time
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2015, Vol. 2, No. 1, pp. 37-42.

An experimental study has shown that among situations when political scientists claimed that a political outcome was impossible, this outcome actually occurred in 15% of the cases. In this paper, we provide a possible explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-15-60, July 2015
Al-Sijistani's and Maimonides's Double Negation Theology Explained by Constructive Logic
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2015, Vol. 10, No. 12, pp. 587-593.

Famous medieval philosophers Al-Sijistani and Maimonides argued that the use of double negation helps us to better understand issues related to theology. To a modern reader, however, their arguments are somewhat obscure and unclear. We show that these arguments can be drastically clarified if we take into account the 20th-century use of double negation in constructive logic.

File in pdf


Technical Report UTEP-CS-15-59, July 2015
Why Linear (and Piecewise Linear) Models Often Successfully Describe Complex Non-Linear Economic and Financial Phenomena: A Fuzzy-Based Explanation
H. T. Nguyen, V. Kreinovich, O. Kosheleva, and S. Sriboonchitta

Economic and financial phenomena are highly complex and non-linear. However, surprisingly, in many cases, these phenomena are accurately described by linear models -- or, sometimes, by piecewise linear ones. In this paper, we show that fuzzy techniques can explain the unexpected efficiency of linear and piecewise linear models: namely, we show that a natural fuzzy-based precisiation of imprecise ("fuzzy") expert knowledge often leads to linear and piecewise linear models.

We also discuss which expert-motivated nonlinear models should be used to get a more accurate description of economic and financial phenomena.

File in pdf


Technical Report UTEP-CS-15-58, June 2015
Fuzzy Xor Classes from Quantum Computing
Anderson Avila, Murilo Schmalfuss, Renata Reiser, and Vladik Kreinovich

Published in Proceedings of the 14th International Conference on Artificial Intelligence and Soft Computing ICAISC'2015, Zakopane, Poland, June 14-18, 2015, Springer Lecture Notes in Artificial Intelligence, Vol. 9120, pp. 305-371.

By making use of quantum parallelism, quantum processes provide parallel modelling of fuzzy connectives, and the corresponding computations over quantum states can be performed simultaneously, based on the superposition of the membership degrees of an element with respect to different fuzzy sets. This description and modelling is mainly focussed on representable fuzzy Xor connectives and their dual constructions. Thus, via quantum computing, not only the interpretation based on the traditional quantum circuit model is considered, but also the notion of a quantum process in the qGM model is applied, providing an evaluation of the corresponding simulation via the graphical interfaces of the VPE-qGM programming environment. The quantum interpretations come from measurement operations performed on the corresponding quantum states.

File in pdf


Technical Report UTEP-CS-15-57, June 2015
Student Autonomy Improves Learning: A Theoretical Justification of the Empirical Results
Octavio Lerma and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 34-38.

In many pedagogical situations, it is advantageous to give students some autonomy: for example, instead of assigning the same homework problem to all the students, to give students a choice between several similar problems, so that each student can choose a problem whose context best fits his or her experiences. A recent experimental study shows that there is a 45% correlation between degree of autonomy and student success. In this paper, we provide a theoretical explanation for this correlation value.

File in pdf


Technical Report UTEP-CS-15-56, June 2015
Comparisons of Measurement Results as Constraints on Accuracies of Measuring Instruments: When Can We Determine the Accuracies from These Constraints?
Christian Servin and Vladik Kreinovich

Published in Proceedings of the Eighth International Workshop on Constraint Programming and Decision Making CoProd'2015, El Paso, Texas, November 6, 2015; detailed version published in Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 113-122.

For a measuring instrument, the usual way to find the probability distribution of its measurement errors is to compare its results with the results of measuring the same quantity with a much more accurate instrument. But what if we are interested in estimating the measurement accuracy of a state-of-the-art measuring instrument, for which no more accurate instrument is possible? In this paper, we show that while such estimation is not possible in general, we can uniquely determine the corresponding probability distributions if we have several state-of-the-art measuring instruments, and for one of them, the corresponding probability distribution is symmetric.

File in pdf


Technical Report UTEP-CS-15-55, June 2015
Why Deep Neural Networks: A Possible Theoretical Explanation
Chitta Baral, Olac Fuentes, and Vladik Kreinovich

Published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 1-6.

In the past, the most widely used neural networks were 3-layer ones. These networks were preferred, since one of the main advantages of the biological neural networks -- which motivated the use of neural networks in computing -- is their parallelism, and 3-layer networks provide the largest degree of parallelism. Recently, however, it was empirically shown that, in spite of this argument, multi-layer ("deep") neural networks lead to much more efficient machine learning. In this paper, we provide a possible theoretical explanation for the somewhat surprising empirical success of deep networks.

File in pdf


Technical Report UTEP-CS-15-54, June 2015
Dow Theory's Peak-and-Trough Analysis Justified
Chrysostomos D. Stylios and Vladik Kreinovich

Published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 123-128.

In the analysis of dynamic financial quantities such as stock prices, equity prices, etc., reasonable results are often obtained if we only consider local maxima ("peaks") and local minima ("troughs") and ignore all the other values. The empirical success of this strategy remains a mystery. In this paper, we provide a possible explanation for this success.

File in pdf


Technical Report UTEP-CS-15-53, June 2015
How to Gauge Disruptions Caused by Garbage Collection: Towards an Efficient Algorithm
Gabriel Arellano, Edward Hudgins, David Pruitt, Adrian Veliz, Eric Freudenthal, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 4-9.

Comprehensive garbage collection is employed on a variety of computing devices, including intelligent cell phones. Garbage collection can cause prolonged user-interface pauses. In order to evaluate and compare the disruptiveness of various garbage collection strategies, it is necessary to gauge disruptions caused by garbage collection. In this paper, we describe efficient algorithms for computing metrics useful for this purpose.

File in pdf


Technical Report UTEP-CS-15-52, June 2015
When Should We Switch from Interval-Valued Fuzzy to Full Type-2 Fuzzy (e.g., Gaussian)?
Vladik Kreinovich and Chrysostomos D. Stylios

Published in Critical Reviews, 2015, Vol. XI, pp. 57-66.

Full type-2 fuzzy techniques provide a more adequate representation of expert knowledge. However, such techniques also require additional computational efforts, so we should only use them if we expect a reasonable improvement in the result of the corresponding data processing. It is therefore important to come up with a practically useful criterion for deciding when we should stay with interval-valued fuzzy and when we should use full type-2 fuzzy techniques. Such a criterion is proposed in this paper. We also analyze how many experts we need to ask to come up with a reasonable description of expert uncertainty.

File in pdf


Technical Report UTEP-CS-15-51, June 2015
Towards A Physics-Motivated Small-Velocities Approximation to General Relativity
Vladik Kreinovich and Olga Kosheleva

Published in Mathematical Structures and Modeling, 2015, Vol. 34, pp. 29-38.

In the general case, the complex non-linear partial differential equations of General Relativity are very hard to solve. Thus, to solve the corresponding physical problems, appropriate approximations are usually used. The first approximation to General Relativity is, of course, Newton's theory of gravitation. Newton's theory is applicable when the gravitational field is weak and when all velocities are much smaller than the speed of light. Most existing approximations allow higher velocities, but still limit us to weak gravitational fields. In this paper, we consider the possibility of a different approximation, in which strong fields are allowed but velocities are required to be small. We derive the corresponding equations and speculate on their possible physical consequences.

File in pdf


Technical Report UTEP-CS-15-50, June 2015
Why We Need Extra Physical Dimensions: A Simple Geometric Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2015, Vol. 34, pp. 24-28.

It is known that a consistent description of point-wise particles requires that we add extra physical dimensions to the usual four dimensions of space-time. The need for such dimensions is based on not-very-intuitive complex mathematics. It is therefore desirable to try to come up with a simpler geometric explanation for this phenomenon. In this paper, we provide a simple geometric explanation of why extra physical dimensions are needed.

File in pdf


Technical Report UTEP-CS-15-49, June 2015
Updated version UTEP-CS-15-49a, September 2015
What Is Computable? What Is Feasibly Computable? A Physicist's Viewpoint
Vladik Kreinovich and Olga Kosheleva

Published in: Andrew Adamatzky (ed.), Advances in Unconventional Computing, Springer Verlag, 2017, pp. 31-58.

In this paper, we show how the questions of what is computable and what is feasibly computable can be viewed from the viewpoint of physics: what is computable within the current physics? what is computable if we assume -- as many physicists do -- that no final physical theory is possible? what is computable if we consider data processing, i.e., computations based on physical inputs? Our physics-based analysis of these questions leads to some unexpected answers, both positive and negative. For example, we show that under the no-physical-theory-is-perfect assumption, almost all problems are feasibly solvable -- but not all of them.

Original file UTEP-CS-15-49 in pdf
Updated version UTEP-CS-15-49a in pdf


Technical Report UTEP-CS-15-48, June 2015
Why Awe Makes People More Generous: Utility Theory Can Explain Recent Experiments
Joe Lorkowski and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 53-56.

Recent psychological experiments show that the feeling of awe increases people's generosity. In this paper, we show that a usual utility-based approach to decision making explains this increase.

File in pdf


Technical Report UTEP-CS-15-47, June 2015
Across-the-Board Spending Cuts Are Very Inefficient: A Proof
Vladik Kreinovich, Olga Kosheleva, Hung T. Nguyen, and Songsak Sriboonchitta

Published in: Van Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Causal Inference in Econometrics, Springer Verlag, Cham, Switzerland, 2016, pp. 109-118.

In many real-life situations, when there is a need for a spending cut, this cut is performed in an across-the-board way, so that each budget item is decreased by the same percentage. Such cuts are ubiquitous, they happen on all levels, from the US budget to the university budget cuts on the college and departmental levels. The main reason for the ubiquity of such cuts is that they are perceived as fair and, at the same time, economically reasonable. In this paper, we perform a quantitative analysis of this problem and show that, contrary to the widespread positive opinion about across-the-board cuts, these cuts are, on average, very inefficient.

File in pdf


Technical Report UTEP-CS-15-46, June 2015
Why Fuzzy Cognitive Maps Are Efficient
Vladik Kreinovich and Chrysostomos D. Stylios

Published in International Journal of Computers, Communications, & Control, 2015, Vol. 10, No. 6, pp. 825-833.

In many practical situations, the relation between experts' degrees of confidence in different related statements is well described by Fuzzy Cognitive Maps (FCM). This empirical success is somewhat puzzling, since from the mathematical viewpoint, each FCM relation corresponds to a simplified one-neuron neural network, and it is well known that to adequately describe relations, we need multiple neurons. In this paper, we show that the empirical success of FCM can be explained if we take into account that humans' subjective opinions follow Miller's "seven plus or minus two" law.
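
For concreteness (a typical textbook formulation, not necessarily the exact one used in the paper), a Fuzzy Cognitive Map updates each concept's activation by passing a weighted sum of the other concepts' activations through a squashing function -- so each relation is indeed a one-neuron unit:

    import numpy as np

    def fcm_step(activations, weights, lam=1.0):
        """One FCM update: A_i(t+1) = sigmoid(sum_j w[i, j] * A_j(t)),
        where weights[i, j] is the causal influence of concept j on concept i."""
        return 1.0 / (1.0 + np.exp(-lam * weights @ activations))

    W = np.array([[0.0, 0.7, -0.4],
                  [0.5, 0.0,  0.3],
                  [0.0, 0.6,  0.0]])
    A = np.array([0.5, 0.2, 0.8])
    for _ in range(20):          # iterate until the map (typically) settles
        A = fcm_step(A, W)
    print(A)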

File in pdf


Technical Report UTEP-CS-15-45, June 2015
Updated version UTEP-CS-15-45a, September 2015
In Engineering Classes, How to Assign Partial Credit: From Current Subjective Practice to Exact Formulas (Based on Computational Intelligence Ideas)
Joe Lorkowski, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, December 7-10, 2015, pp. 1621-1626.

When a student performed only some of the steps needed to solve a problem, this student gets partial credit. This partial credit is usually proportional to the number of stages that the student performed. This may sound reasonable, but in engineering education, this leads to undesired consequences: for example, a student who did not solve any of the 10 problems on the test, but who successfully performed 9 out of 10 stages needed to solve each problem, will still get the grade of A ("excellent"). This may be a good evaluation of the student's intellectual ability, but for an engineering company that hires this A-level student, this will be an unexpected disaster. In this paper, we use computational intelligence ideas to analyze this problem from the viewpoint of potential loss to a company, and we show how to assign partial credit based on such loss estimates. Our conclusion is that this loss (and thus, the resulting grade) depends on the size of the engineering company. Thus, to better understand the student's strengths, it is desirable, instead of a single overall grade, to provide several grades corresponding to different company sizes.

Original file UTEP-CS-15-45 in pdf
Updated version UTEP-CS-15-45a in pdf


Technical Report UTEP-CS-15-44, June 2015
Updated version UTEP-CS-15-44a, September 2015
What is the Right Context for an Engineering Problem: Finding Such a Context is NP-Hard
Martine Ceberio, Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, and Rujira Ouncharoen

Published in Proceedings of the IEEE Symposium Series on Computational Intelligence, Cape Town, South Africa, December 7-10, 2015, pp. 1615-1620.

In the general case, most computational engineering problems are NP-hard. So, to make the problem feasible, it is important to restrict this problem. Ideally, we should use the most general context in which the problem is still feasible. In this paper, we start with a simple proof that finding such a most general context is itself an NP-hard problem. Since it is not possible to find the appropriate context by a general algorithm, it is necessary to be creative -- i.e., in effect, to use computational intelligence techniques. On three examples, we show how such techniques can help us come up with the appropriate context. These examples explain why it is beneficial to take knowledge about causality into account when processing data, why sometimes long-term predictions are easier than short-term ones, and why often for small deviations, a straightforward application of a seemingly optimal control only makes the situation worse.

Original file UTEP-CS-15-44 in pdf
Updated version UTEP-CS-15-44a in pdf


Technical Report UTEP-CS-15-43, May 2015
Modeling Extremal Events Is Not Easy: Why the Extreme Value Theorem Cannot Be As General As the Central Limit Theorem
Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, and Olga Kosheleva

Published in: V. Kreinovich (ed.), Uncertainty Modeling, Springer Verlag, Cham, Switzerland, 2017, pp. 123-134.

In many real-life situations, a random quantity is a joint result of several independent factors, i.e., a sum of many independent random variables. The description of such sums is facilitated by the Central Limit Theorem, according to which, under reasonable conditions, the distribution of such a sum tends to normal. In several other situations, a random quantity is a maximum of several independent random variables. For such situations, there is also a limit theorem -- the Extreme Value Theorem. However, the Extreme Value Theorem is only valid under the assumption that all the components are identically distributed -- while no such assumption is needed for the Central Limit Theorem. Since in practice, the component distributions may be different, a natural question is: can we generalize the Extreme Value Theorem to a similar general case of possible different component distributions? In this paper, we use simple symmetries to prove that such a generalization is not possible. In other words, the task of modeling extremal events is provably more difficult than the task of modeling of joint effects of many factors.
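
For reference (the classical statement, added here for context): for independent, identically distributed X1, ..., Xn, the Extreme Value Theorem says that if the normalized maximum (max_i Xi - b_n) / a_n has a non-degenerate limit distribution, then this limit must belong to the generalized extreme value family

    G_\xi(x) \;=\; \exp\bigl(-(1 + \xi x)^{-1/\xi}\bigr), \qquad 1 + \xi x > 0

(with the Gumbel case exp(-e^{-x}) for ξ = 0), covering the Gumbel, Fréchet, and Weibull types; the paper shows that the identical-distribution assumption cannot simply be dropped.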

File in pdf


Technical Report UTEP-CS-15-42, May 2015
Why Is Linear Quantile Regression Empirically Successful: A Possible Explanation
Hung T. Nguyen, Vladik Kreinovich, Olga Kosheleva, and Songsak Sriboonchitta

Published in: V. Kreinovich (ed.), Uncertainty Modeling, Springer Verlag, Cham, Switzerland, 2017, pp. 159-168.

Many quantities describing the physical world are related to each other. As a result, often, when we know the values of certain quantities x1, ..., xn, we can reasonably well predict the value of some other quantity y. In many applications, in addition to the resulting estimate for y, it is also desirable to predict how accurate this approximate estimate is, i.e., what is the probability distribution of the different possible values of y. It turns out that in many cases, the quantiles of this distribution depend linearly on the values x1, ..., xn. In this paper, we provide a possible theoretical explanation for this somewhat surprising empirical success of such linear quantile regression.
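
To make "linear quantile regression" concrete (our minimal Python sketch, using a generic optimizer rather than a dedicated solver): the tau-th conditional quantile is modeled as a linear function of the inputs, fitted by minimizing the pinball (check) loss.

    import numpy as np
    from scipy.optimize import minimize

    def pinball(u, tau):
        # check loss: tau * u for u >= 0, (tau - 1) * u for u < 0
        return np.where(u >= 0, tau * u, (tau - 1) * u)

    def linear_quantile_regression(X, y, tau):
        """Fit beta so that X @ beta models the tau-th conditional quantile of y."""
        X = np.column_stack([np.ones(len(X)), X])     # add an intercept column
        loss = lambda beta: pinball(y - X @ beta, tau).sum()
        result = minimize(loss, np.zeros(X.shape[1]), method="Nelder-Mead")
        return result.x

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 200)
    y = 2.0 * x + rng.normal(0, 0.3 + 0.5 * x)        # noise grows with x
    print(linear_quantile_regression(x, y, 0.9))      # upper-quantile line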

File in pdf


Technical Report UTEP-CS-15-41, May 2015
Revised version UTEP-CS-15-41a, November 2015
Standing on the Shoulders of the Giants: Why Constructive Mathematics, Probability Theory, Interval Mathematics, and Fuzzy Mathematics Are Important
Vladik Kreinovich

Published in Reliable Computing, 2016, Vol. 23, pp. 97-104.

The recent death of Ray Moore, one of the fathers of interval mathematics, inspired these thoughts on why interval computations -- and several related areas of study -- are important, and what we can learn from the successes of these areas' founders and promoters.

Original file UTEP-CS-15-41 in pdf
Updated version UTEP-CS-15-41a in pdf


Technical Report UTEP-CS-15-40, May 2015
Revised version UTEP-CS-15-40a, July 2015
Extended version UTEP-CS-15-40b, November 2015
Why ARMAX-GARCH Linear Models Successfully Describe Complex Nonlinear Phenomena: A Possible Explanation
Hung T. Nguyen, Vladik Kreinovich, Olga Kosheleva, and Songsak Sriboonchitta

Original version UTEP-CS-15-40a published in: Van-Nam Huynh, Masahiro Inuiguchi, and Thierry Denoeux (eds.), Integrated Uncertainty in Knowledge Modeling and Decision Making, Proceedings of The Fourth International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making IUKM'2015, Nha Trang, Vietnam, October 15-17, 2015, Springer Lecture Notes in Artificial Intelligence, 2015, Vol. 9376, pp. 138-150.

Economic and financial processes are complex and highly nonlinear. However, somewhat surprisingly, linear models like ARMAX-GARCH often describe these processes reasonably well. In this paper, we provide a possible explanation for the empirical success of these models.

Original file UTEP-CS-15-40 in pdf
Revised version UTEP-CS-15-40a in pdf
Extended version UTEP-CS-15-40b in pdf


Technical Report UTEP-CS-15-39, May 2015
Revised version UTEP-CS-15-39a, July 2015
Extended version UTEP-CS-15-39b, November 2015
Why Copulas Have Been Successful in Many Practical Applications: A Theoretical Explanation Based on Computational Efficiency
Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, and Olga Kosheleva

Original version UTEP-CS-15-39a published in: Van-Nam Huynh, Masahiro Inuiguchi, and Thierry Denoeux (eds.), Integrated Uncertainty in Knowledge Modeling and Decision Making, Proceedings of The Fourth International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making IUKM'2015, Nha Trang, Vietnam, October 15-17, 2015, Springer Lecture Notes in Artificial Intelligence, 2015, Vol. 9376, pp. 112-125.

A natural way to represent a 1-D probability distribution is to store its cumulative distribution function (cdf) F(x) = Prob(X ≤ x). When several random variables X1, ..., Xn are independent, the corresponding cdfs F1(x1), ..., Fn(xn) provide a complete description of their joint distribution. In practice, there is usually some dependence between the variables, so, in addition to the marginals Fi(xi), we also need to provide additional information about the joint distribution of the given variables. It is possible to represent this joint distribution by a multi-D cdf F(x1, ..., xn) = Prob(X1 ≤ x1 & ... & Xn ≤ xn), but this will lead to duplication -- since marginals can be reconstructed from the joint cdf -- and duplication is a waste of computer space. It is therefore desirable to come up with a duplication-free representation which would still allow us to easily reconstruct F(x1, ..., xn). In this paper, we prove that among all duplication-free representations, the most computationally efficient one is a representation in which marginals are supplemented by a copula.

This result explains why copulas have been successfully used in many applications of statistics: the copula representation is, in some reasonable sense, the most computationally efficient way of representing multi-D probability distributions.
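As a minimal illustration of the copula representation (Sklar's theorem $F(x_1,x_2) = C(F_1(x_1), F_2(x_2))$), the following sketch builds a joint cdf from two marginals and a copula; the Gaussian copula and the particular marginals are arbitrary illustrative choices, not taken from the paper:

    import numpy as np
    from scipy import stats

    # Sketch: joint cdf = copula applied to the marginal cdfs (Sklar's theorem).
    rho = 0.7  # arbitrary dependence parameter
    biv_normal = stats.multivariate_normal(mean=[0.0, 0.0],
                                           cov=[[1.0, rho], [rho, 1.0]])

    def gaussian_copula(u1, u2):
        # C(u1, u2) = Phi_rho(Phi^{-1}(u1), Phi^{-1}(u2))
        return biv_normal.cdf([stats.norm.ppf(u1), stats.norm.ppf(u2)])

    F1 = stats.expon(scale=2.0).cdf   # arbitrary marginal of X1
    F2 = stats.norm(1.0, 0.5).cdf     # arbitrary marginal of X2

    def joint_cdf(x1, x2):
        # no duplication: marginals are stored once; the copula stores only the dependence
        return gaussian_copula(F1(x1), F2(x2))

    print(joint_cdf(1.5, 1.2))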

Original file UTEP-CS-15-39 in pdf
Revised version UTEP-CS-15-39a in pdf
Extended version UTEP-CS-15-39b in pdf


Technical Report UTEP-CS-15-38, May 2015
Updated version UTEP-CS-15-38a, September 2015
We Live in the Best of Possible Worlds: Leibniz's Insight Helps to Derive Equations of Modern Physics
Vladik Kreinovich and Guoqing Liu

Published in: Raffaele Pisano, Michel Fichant, Paolo Bussotti, and Agamenon R. E. Oliveira (eds.), The Dialogue between Sciences, Philosophy and Engineering. New Historical and Epistemological Insights, Homage to Gottfried W. Leibnitz 1646-1716, College Publications, London, 2017, pp. 207-226.

To reconcile the notion of a benevolent and powerful God with actual human suffering, Leibniz proposed the idea that while our world is not perfect, it is the best of possible worlds. This idea inspired important developments in physics: namely, it turned out that equations of motion and equations which describe the dynamics of physical fields can be deduced from the condition that the (appropriately defined) action functional is optimal. This idea is very helpful in physics applications, but to fully utilize this idea, we need to know the action, and there are many possible action functionals. Our idea is to apply Leibniz's insight once again and to assume that (similarly) on the set of all action functionals, there is an optimality criterion, and the actual action functional is optimal with respect to this criterion. This new idea enables us to derive the standard equations of General Relativity, Quantum Mechanics, Electrodynamics, etc. only from the fact that the corresponding expressions for action are optimal. Thus, the physical equations describing our world are indeed the best possible.

Original file UTEP-CS-15-38 in pdf
Updated version UTEP-CS-15-38a in pdf


Technical Report UTEP-CS-15-37, April 2015
Revised conference version UTEP-CS-15-37a, June 2015
Draft journal version UTEP-CS-15-37b, July 2016
Revised journal version UTEP-CS-15-37c, August 2016
How to take into account model inaccuracy when estimating the uncertainty of the result of data processing
Vladik Kreinovich, Olga Kosheleva, Andrzej Pownuk, and Rodrigo Romero

Conference version published in Proceedings of the ASME 2015 International Mechanical Engineering Congress & Exposition IMECE'2015, Houston, Texas, November 13-19, 2015; extended journal version published in ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering, 2017, Vol. 3, No. 1, Paper No. 011002.

In engineering design, it is important to guarantee that the values of certain quantities such as stress level, noise level, vibration level, etc., stay below a certain threshold in all possible situations, i.e., for all possible combinations of the corresponding internal and external parameters. Usually, the number of possible combinations is so large that it is not possible to physically test the system for all these combinations. Instead, we form a computer model of the system, and test this model. In this testing, we need to take into account that the computer models are usually approximate. In this paper, we show that the existing techniques for taking model uncertainty into account overestimate the uncertainty of the results. We also show how we can get more accurate estimates.

Original file UTEP-CS-15-37 in pdf
Conference version UTEP-CS-15-37a in pdf
Draft journal version UTEP-CS-15-37b in pdf
Revised journal version UTEP-CS-15-37c in pdf


Technical Report UTEP-CS-15-36, April 2015
Updated version UTEP-CS-15-36a, August 2015
Optimizing Cloud Use under Interval Uncertainty
Vladik Kreinovich and Esthela Gallardo

Published in: Roman Wyrzykowski, Ewa Deelman, Jack Dongarra, Konrad Karczewski, Jacek Kitowski, and Kazimierz Wiatr (eds.), Parallel Processing and Applied Mathematics, Springer Verlag, Cham, Switzerland, 2016, pp. 435-444.

One of the main advantages of cloud computing is that it helps users save money: instead of buying enough computers to cover all their computations, a user can rent computation time on the cloud to cover the rare peaks in computing needs. From this viewpoint, it is important to find the optimal division between in-house and in-the-cloud computations. In this paper, we solve this optimization problem, both in the idealized case when we know complete information about the costs and the user's needs, and in a more realistic situation, when we only know interval bounds on the corresponding quantities.

Original file UTEP-CS-15-36 in pdf
Updated version UTEP-CS-15-36a in pdf


Technical Report UTEP-CS-15-35, April 2015
A Simplified Explanation of What It Means to Assign a Finite Value to an Infinite Sum
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 15-21.

Recently, a video made the rounds that explained that it often makes sense to assign finite values to infinite sums. For example, it makes sense to claim that the sum of all natural numbers is equal to -1/12. This has picked up interest in the media. However, judged by the viewers' and readers' comments, for many viewers and readers, neither the video nor the corresponding articles seem to explain the meaning of the above equality clearly enough. One of the main stumbling blocks is the fact that the infinite sum is clearly divergent, so a natural value of the infinite sum is infinity. What is the meaning of assigning a finite value to this (clearly infinite) sum? While the explanation of the above equality is difficult to describe in simple terms, the main idea behind this equality can be, in our opinion, explained rather naturally, and this is what we do in this paper.
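For readers who want the standard formal statement behind this assignment (well-known background; the paper's contribution is the simplified explanation, not this formula): the value -1/12 is the value of the analytically continued Riemann zeta function at $s = -1$,

$$\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\mathrm{Re}\,s > 1), \qquad \zeta(-1) = -\frac{1}{12},$$

so "$1 + 2 + 3 + \ldots = -1/12$" is shorthand for this analytically continued value, not a claim that the divergent series converges in the usual sense.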

File in pdf


Technical Report UTEP-CS-15-34, April 2015
Updated version UTEP-CS-15-34a, June 2015
How to Take Into Account a Student's Degree of Certainty When Evaluating the Test Results
Joe Lorkowski, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 45th ASEE/IEEE Frontiers in Education Conference FIE'2015, El Paso, Texas, October 21-24, 2015, pp. 1568-1572.

To more adequately gauge the student's knowledge, it is desirable to take into account not only whether the student's answers on the test are correct or not, but also how confident the students are in their answers. For example, a situation when a student gives a wrong answer, but understands his/her lack of knowledge on this topic, is not as harmful as the situation when the student is absolutely confident in his/her wrong answer. In this paper, we use general decision making theory to describe the best way to take into account the student's degree of certainty when evaluating the test results.

Original file UTEP-CS-15-34 in pdf
Updated version UTEP-CS-15-34a in pdf


Technical Report UTEP-CS-15-33, April 2015
A Corpus for Investigating English-Language Learners' Dialog Behaviors
Nigel G. Ward and Paola Gallardo

We are interested in developing methods for the semi-automatic discovery of prosodic patterns in dialog and how they differ between languages and among populations. We are starting by examining how the prosody of Spanish-native learners of English differs from that of native speakers. To support this work, we have collected a new corpus of conversations among college students. This includes dialogs between a nonnative speaker of English and a native speaker, dialogs between native speakers of English, and conversations in Spanish.

File in pdf


Technical Report UTEP-CS-15-32, April 2015
Updated version UTEP-CS-15-32a, April 2015
Sometimes, It Is Beneficial to Process Different Types of Uncertainty Separately
Chrysostomos D. Stylios, Andrzej Pownuk, and Vladik Kreinovich

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

In many practical situations, we make predictions based on the measured and/or estimated values of different physical quantities. The accuracy of these predictions depends on the accuracy of the corresponding measurements and expert estimates. Often, for each quantity, there are several different sources of inaccuracy. Usually, to estimate the prediction accuracy, we first combine, for each input, inaccuracies from different sources into a single expression, and then use these expressions to estimate the prediction accuracy. In this paper, we show that it is often more computationally efficient to process different types of uncertainty separately, i.e., to estimate inaccuracies in the prediction result caused by different types of uncertainty, and only then combine these inaccuracies into a single estimate.

Original file UTEP-CS-15-32 in pdf
Updated version UTEP-CS-15-32a in pdf


Technical Report UTEP-CS-15-31, April 2015
Once We Know that a Polynomial Mapping Is Rectifiable, We Can Algorithmically Find a Rectification
Julio Urenda, David Finston, and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2015, Vol. 36, pp. 67-73.

It is known that some polynomial mappings φ: Ck --> Cn are rectifiable in the sense that there exists a polynomial mapping α: Cn --> Cn whose inverse is also polynomial and for which α(φ(z1, ...,zk)) = (z1, ...,zk, 0, ..., 0) for all z1, ...,zk. In many cases, the existence of such a rectification is proven indirectly, without an explicit construction of the mapping α.

In this paper, we use the Tarski-Seidenberg algorithm (for deciding the first order theory of real numbers) to design an algorithm that, given a polynomial mapping φ: Ck --> Cn which is known to be rectifiable, returns a polynomial mapping α: Cn --> Cn that rectifies φ.

The above general algorithm is not practical for large n, since its computation time grows faster than $2^{(2^n)}$. To make computations more practically useful, for several important cases, we have also designed a much faster alternative algorithm.

File in pdf


Technical Report UTEP-CS-15-30, April 2015
When Can We Simplify Data Processing: An Algorithmic Answer
Julio Urenda, Olga Kosheleva, Vladik Kreinovich, and Berlin Wu

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 72-80.

In many real-life situations, we are interested in the values of physical quantities x1, ..., xn which are difficult (or even impossible) to measure directly. To estimate these values, we measure easier-to-measure quantities y1, ..., ym which are related to the desired quantities by a known relation, and use these measurement results to estimate xi. The corresponding data processing algorithms are sometimes very complex and time-consuming, so a natural question is: are simpler (and, thus, faster) algorithms possible for solving this data processing problem? In this paper, we show that by using the known Tarski-Seidenberg algorithm, we can check whether such a simplification exists and, if it exists, produce this simplification.

File in pdf


Technical Report UTEP-CS-15-29, April 2015
Updated version UTEP-CS-15-29a, April 2015
Symbolic Aggregate ApproXimation (SAX) under Interval Uncertainty
Chrysostomos D. Stylios and Vladik Kreinovich

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

In many practical situations, we monitor a system by continuously measuring the corresponding quantities, to make sure that an abnormal deviation is detected as early as possible. Often, we do not have ready algorithms to detect abnormality, so we need to use machine learning techniques. For these techniques to be efficient, we first need to compress the data. One of the most successful methods of data compression is the technique of Symbolic Aggregate approXimation (SAX). While this technique is motivated by measurement uncertainty, it does not explicitly take this uncertainty into account. In this paper, we show that we can further improve upon this technique if we explicitly take measurement uncertainty into account.
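For orientation, here is a sketch of the standard SAX transformation itself (window and alphabet sizes are arbitrary; the paper's interval-aware modification is not shown): z-normalize the series, average it over segments (piecewise aggregate approximation), and map each average to a symbol using breakpoints that split the standard normal distribution into equiprobable regions.

    import numpy as np
    from scipy import stats

    # Sketch of standard SAX: z-normalize -> PAA -> symbolize with normal breakpoints.
    def sax(series, n_segments=8, alphabet_size=4):
        x = np.asarray(series, dtype=float)
        x = (x - x.mean()) / x.std()                       # z-normalization
        segments = np.array_split(x, n_segments)
        paa = np.array([seg.mean() for seg in segments])   # piecewise aggregate approximation
        # breakpoints that split the standard normal into equiprobable regions
        breakpoints = stats.norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
        symbols = np.searchsorted(breakpoints, paa)        # indices 0 .. alphabet_size - 1
        return "".join(chr(ord("a") + int(s)) for s in symbols)

    print(sax(np.sin(np.linspace(0.0, 4.0 * np.pi, 128))))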

Original version UTEP-CS-15-29 in pdf
Revised version UTEP-CS-15-29a in pdf


Technical Report UTEP-CS-15-28, April 2015
Why Some Families of Probability Distributions Are Practically Efficient: A Symmetry-Based Explanation
Vladik Kreinovich, Olga Kosheleva, Hung T. Nguyen, and Songsak Sriboonchitta

Published in: Van Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Causal Inference in Econometrics, Springer Verlag, Cham, Switzerland, 2016, pp. 133-152.

Out of many possible families of probability distributions, some families turned out to be most efficient in practical situations. Why these particular families and not others? To explain this empirical success, we formulate the general problem of selecting a distribution with the largest possible utility under appropriate constraints. We then show that if we select the utility functional and the constraints which are invariant under natural symmetries -- shift and scaling corresponding to changing the starting point and the measuring unit for describing the corresponding quantity $x$ -- then the resulting optimal families of probability distributions indeed include most of the empirically successful families. Thus, we get a symmetry-based explanation for their empirical success.

File in pdf


Technical Report UTEP-CS-15-27, April 2015
Analysis of Random Metric Spaces Explains Emergence Phenomenon and Suggests Discreteness of Physical Space
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2015, Vol. 35, pp. 42-49.

In many practical situations, systems follow the pattern set by the second law of thermodynamics: they evolve from an organized inhomogeneous state into a homogeneous structure-free state. In many other practical situations, however, we observe the opposite emergence phenomenon: in an originally homogeneous structure-free state, an inhomogeneous structure spontaneously appears. In this paper, we show that the analysis of random metric spaces provides a possible explanation for this phenomenon. We also show that a similar analysis supports space-time models in which proper space is discrete.

File in pdf


Technical Report UTEP-CS-15-26, April 2015
Why Big-O and Little-O in Algorithm Complexity: A Pedagogical Remark
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2015, Vol. 35, pp. 34-41.

In the comparative analysis of different algorithms, O- and o-notations are frequently used. While their use is productive, most textbooks do not provide a convincing student-oriented explanation of why these particular notations are useful in algorithm analysis. In this note, we provide such an explanation.
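For reference, the two notions being explained (standard textbook definitions, stated for nonnegative functions such as running times):

$$f(n) = O(g(n)) \;\Longleftrightarrow\; \exists\, C > 0\ \exists\, n_0\ \forall\, n \ge n_0:\ f(n) \le C \cdot g(n), \qquad f(n) = o(g(n)) \;\Longleftrightarrow\; \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0.$$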

File in pdf


Technical Report UTEP-CS-15-25, March 2015
Revised version UTEP-CS-15-25a, June 2015
Why It Is Important to Precisiate Goals
Olga Kosheleva, Vladik Kreinovich, and Hung T. Nguyen

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 22-30.

After Zadeh and Bellman explained how to optimize a function under fuzzy constraints, there have been many successful applications of this optimization. However, in many practical situations, it turns out to be more efficient to precisiate the objective function before performing optimization. In this paper, we provide a possible explanation for this empirical fact.

Original file UTEP-CS-15-25 in pdf
Revised version UTEP-CS-15-25a in pdf


Technical Report UTEP-CS-15-24, March 2015
Setting Up a Highly Configurable, Scalable Nimbus Cloud Test Bed Running on a MANET
Joshua McKee

File in pdf


Technical Report UTEP-CS-15-23, March 2015
Revised version UTEP-CS-15-23a, April 2015
Simple Linear Interpolation Explains All Usual Choices in Fuzzy Techniques: Membership Functions, t-Norms, t-Conorms, and Defuzzification
Vladik Kreinovich, Jonathan Quijas, Esthela Gallardo, Caio De Sa Lopes, Olga Kosheleva, and Shahnaz Shahbazova

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

Most applications of fuzzy techniques use piecewise-linear (triangular or trapezoidal) membership functions, min or product t-norms, max or algebraic sum t-conorms, and centroid defuzzification. Similarly, most applications of interval-valued fuzzy techniques use piecewise-linear lower and upper membership functions. In this paper, we show that all these choices can be explained as applications of simple linear interpolation.
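A small sketch of these usual choices (illustration only; all numeric parameters are arbitrary): a triangular membership function, min t-norm, max t-conorm, and centroid defuzzification.

    import numpy as np

    def triangular(x, a, b, c):
        # piecewise-linear membership function with support [a, c] and peak at b
        return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

    t_norm = np.minimum    # "and": min t-norm
    t_conorm = np.maximum  # "or": max t-conorm

    xs = np.linspace(0.0, 10.0, 1001)
    mu_low = triangular(xs, 0.0, 2.0, 5.0)
    mu_high = triangular(xs, 4.0, 8.0, 10.0)
    # two rules fire with (arbitrary) strengths 0.3 and 0.8: clip with min, aggregate with max
    mu_out = t_conorm(t_norm(0.3, mu_low), t_norm(0.8, mu_high))

    # centroid defuzzification: membership-weighted average of x
    centroid = np.sum(xs * mu_out) / np.sum(mu_out)
    print(f"defuzzified output: {centroid:.2f}")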

Original version UTEP-CS-15-23 in pdf
Revised version UTEP-CS-15-23a in pdf


Technical Report UTEP-CS-15-22, March 2015
Coming Up with a Good Question Is Not Easy: A Proof
Joe Lorkowski, Luc Longpre, Olga Kosheleva, and Salem Benferhat

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

Ability to ask good questions is an important part of learning skills. Coming up with a good question, a question that can really improve one's understanding of the topic, is not easy. In this paper, we prove -- on the example of probabilistic and fuzzy uncertainty -- that the problem of selecting a good question is indeed hard.

File in pdf


Technical Report UTEP-CS-15-21, March 2015
Revised version UTEP-CS-15-21a, April 2015
From 1-D to 2-D Fuzzy: A Proof that Interval-Valued and Complex-Valued Are the Only Distributive Options
Christian Servin, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

While the usual 1-D fuzzy logic has many successful applications, in some practical cases, it is desirable to come up with a more subtle way of representing expert uncertainty. A natural idea is to add additional information, i.e., to go from 1-D to 2-D (and multi-D) fuzzy logic. At present, there are two main approaches to 2-D fuzzy logic: interval-valued and complex-valued. At first glance, it may seem that many other options are potentially possible. We show, however, that, under certain reasonable conditions, interval-valued and complex-valued are the only two possible options.

Original version UTEP-CS-15-21 in pdf
Revised version UTEP-CS-15-21a in pdf


Technical Report UTEP-CS-15-20, March 2015
Updated version UTEP-CS-15-20a, May 2015
How to Speed Up Software Migration and Modernization: Successful Strategies Developed by Precisiating Expert Knowledge
Francisco Zapata, Octavio Lerma, Leobardo Valera, and Vladik Kreinovich

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

Computers are getting faster and faster; the operating systems are getting more sophisticated. Often, these improvements necessitate that we migrate the existing software to the new platform. In the ideal world, the migrated software should run perfectly well on a new platform; however, in reality, when we try that, thousands of errors appear, errors that need correcting. As a result, software migration is usually a very time-consuming process. A natural way to speed up this process is to take into account that errors naturally fall into different categories, and often, a common correction can be applied to all errors from a given category. To efficiently use this idea, it is desirable to estimate the number of errors of different types. In this paper, we show how imprecise expert knowledge about such errors can be used to produce very realistic estimates.

Original version UTEP-CS-15-20 in pdf
Revised version UTEP-CS-15-20a in pdf


Technical Report UTEP-CS-15-19, February 2015
Updated version UTEP-CS-15-19a, April 2015

How to Estimate Expected Shortfall When Probabilities Are Known with Interval or Fuzzy Uncertainty
Christian Servin, Hung T. Nguyen, and Vladik Kreinovich

Published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2015, Istanbul, Turkey, August 1-5, 2015.

To gauge the risk corresponding to a possible disaster, it is important to know both the probability of this disaster and the expected damage caused by such a potential disaster ("expected shortfall"). Both these measures of risk are easy to estimate in the ideal case, when we know the exact probabilities of different disaster strengths. In practice, however, we usually only have partial information about these probabilities: we may have an interval (or, more generally, fuzzy) uncertainty about these probabilities. In this paper, we show how to efficiently estimate the expected shortfall under such interval and/or fuzzy uncertainty.

Original file UTEP-CS-15-19 in pdf
Updated version UTEP-CS-15-19a in pdf


Technical Report UTEP-CS-15-18, February 2015
Updated version UTEP-CS-15-18a, July 2015
Creative Discussions or Memorization? Maybe Both? (on the example of teaching Computer Science)
Vladik Kreinovich and Olga Kosheleva

Published in: Proceedings of the 1st International Conference on Interdisciplinary Development Research IDR'2015, Chiang Mai, Thailand, September 17-18, 2015.

We all strive to be creative in our teaching, but there is often not enough time to present every topic in a creative, fun way. So sometimes, we teach memorization first, understanding later. We do it, but we often do it without seriously analyzing which topics to "sacrifice" to memorization. In this talk, we use simple mathematical models of learning to come up with relevant recommendations: Namely, all the topics form a dependency graph, and if we do not have enough time to allow students to treat all topics with equal creativity, then the most reasonable topics for memorization first are the ones on the critical path of this dependency graph.
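A minimal sketch of the computational side of this recommendation (illustration only; the topic names, prerequisites, and durations below are hypothetical): topics on the critical, i.e., longest, path of the dependency graph are the natural candidates for "memorize first".

    from functools import lru_cache

    # Hypothetical topic-dependency DAG and (hypothetical) time needed to teach
    # each topic creatively; the critical path is the longest prerequisite chain.
    prereqs = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
    duration = {"A": 3, "B": 2, "C": 5, "D": 1, "E": 4}

    @lru_cache(maxsize=None)
    def longest_to(topic):
        # total duration of the longest chain of prerequisites ending at `topic`
        return duration[topic] + max((longest_to(p) for p in prereqs[topic]), default=0)

    last = max(prereqs, key=longest_to)
    print("critical path ends at", last, "with total length", longest_to(last))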

Original file UTEP-CS-15-18 in pdf
Updated version UTEP-CS-15-18a in pdf


Technical Report UTEP-CS-15-17, February 2015
Updated version UTEP-CS-15-17a, April 2015
Why Sugeno lambda-Measures
Hung T. Nguyen, Vladik Kreinovich, Joe Lorkowski, and Saiful Abu

Published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2015, Istanbul, Turkey, August 1-5, 2015.

To describe expert uncertainty, it is often useful to go beyond additive probability measures and use non-additive (fuzzy) measures. One of the most widely used classes of such measures is the class of Sugeno lambda-measures. Their success is somewhat paradoxical, since from the purely mathematical viewpoint, these measures are -- in some reasonable sense -- equivalent to probability measures. In this paper, we explain this success by showing that (1) while mathematically, it is possible to reduce Sugeno measures to probability measures, (2) from the computational viewpoint, using Sugeno measures is much more efficient. We also show that among all fuzzy measures which are equivalent to probability measures, Sugeno measures (and a slightly more general family of measures) are the only ones with this efficiency property.
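For reference, the defining property of a Sugeno lambda-measure $g$ (standard background): for disjoint events $A$ and $B$,

$$g(A \cup B) = g(A) + g(B) + \lambda \cdot g(A) \cdot g(B), \qquad \lambda > -1,\ \lambda \ne 0,$$

with $g$ of the whole space equal to 1. One standard reduction to probability measures is the transformation $\mu(A) = \ln(1 + \lambda\, g(A)) / \ln(1 + \lambda)$, which turns $g$ into an ordinary additive measure $\mu$; the point made above is that working with $g$ directly is computationally cheaper.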

Original file UTEP-CS-15-17 in pdf
Updated version UTEP-CS-15-17a in pdf


Technical Report UTEP-CS-15-16, February 2015
Updated version UTEP-CS-15-16a, April 2015

Which Bio-Diversity Indices Are Most Adequate
Olga Kosheleva, Craig Tweedie, and Vladik Kreinovich

Published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2015, Istanbul, Turkey, August 1-5, 2015.

One of the main objectives of ecology is to analyze, maintain, and enhance the bio-diversity of different ecosystems. To be able to do that, we need to gauge bio-diversity. Several semi-heuristic diversity indices have been shown to be in good accordance with the intuitive notion of bio-diversity. In this paper, we provide a theoretical justification for these empirically successful techniques. Specifically, we show that the most widely used technique -- the Simpson index -- can be justified by using simple fuzzy rules, while a more elaborate justification explains all empirically successful diversity indices.
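For reference, the Simpson index mentioned above, in its common Gini-Simpson form (standard definition): for species frequencies $p_1, \ldots, p_n$ with $\sum_i p_i = 1$,

$$D = 1 - \sum_{i=1}^{n} p_i^2,$$

i.e., the probability that two randomly chosen individuals belong to different species.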

Original file UTEP-CS-15-16 in pdf
Updated version UTEP-CS-15-16a in pdf


Technical Report UTEP-CS-15-15, February 2015
Our Reasoning is Clearly Fuzzy, so Why Is Crisp Logic So Often Adequate?
Hung T. Nguyen, Berlin Wu, and Vladik Kreinovich

Published in International Journal of Intelligent Technologies and Applied Statistics (IJITAS), 2015, Vol. 8, No. 2, pp. 133-137.

Our reasoning is clearly fuzzy, so why is crisp logic so often adequate? We explain this phenomenon by showing that in the presence of noise, an arbitrary continuous (e.g., fuzzy) system can be well described by its discrete analog. However, as the description gets more accurate, the continuous description becomes necessary.

File in pdf


Technical Report UTEP-CS-15-14, February 2015
A Natural Simple Model of Scientists' Strength Leads to Skew-Normal Distribution
Komsan Suriya, Tatcha Sudtasan, Tonghui Wang, Octavio Lerma, and Vladik Kreinovich

Published in International Journal of Intelligent Technologies and Applied Statistics (IJITAS), 2015, Vol. 8, No. 2, pp. 153-158.

In many practical situations, we have probability distributions which are close to normal but skewed. Several families of distributions were proposed to describe such phenomena. The most widely used is the skew-normal distribution, whose probability density (pdf) is equal to the product of the pdf of a normal distribution and a cumulative distribution function (cdf) of another normal distribution. Out of other possible generalizations of normal distributions, the skew-normal ones were selected because of their computational efficiency, and not because they represent any real-life phenomena. Interestingly, it turns out that these distributions do represent a real-life phenomenon: namely, in a natural simple model of scientists' strength, this strength is skew-normally distributed. We also describe what happens if we consider more complex models of scientists' strength.
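For reference, the skew-normal density mentioned above (standard form, in standardized variables):

$$f(x) = 2\,\varphi(x)\,\Phi(\alpha x),$$

where $\varphi$ and $\Phi$ are the pdf and cdf of the standard normal distribution and $\alpha$ is the shape (skewness) parameter; $\alpha = 0$ recovers the usual normal distribution.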

File in pdf


Technical Report UTEP-CS-15-13, February 2015
Updated version UTEP-CS-15-13a, March 2015
Adding possibilistic knowledge to probabilities makes many problems algorithmically decidable
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the World Congress of the International Fuzzy Systems Association IFSA'2015, joint with the Annual Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2015, Gijon, Asturias, Spain, June 30 - July 3, 2015, pp. 1452-1458.

Many physical theories accurately predict which events are possible and which are not, or -- in situations where probabilistic (e.g., quantum) effects are important -- predict the probabilities of different possible outcomes. At first glance, it may seem that this probabilistic information is all we need. We show, however, that to adequately describe physicists' reasoning, it is important to also take into account additional knowledge -- about what is possible and what is not. We show that this knowledge can be described in terms of possibility theory, and that the presence of this knowledge makes many problems algorithmically decidable.

Original file UTEP-CS-15-13 in pdf
Updated version UTEP-CS-15-13a in pdf


Technical Report UTEP-CS-15-12, January 2015
Updated version UTEP-CS-15-12a, April 2015
Model reduction: why it is possible and how it can potentially help to control swarms of Unmanned Aerial Vehicles (UAVs)
M. Ceberio, L. Valera, O. Kosheleva, and R. Romero

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2015 and 5th World Conference on Soft Computing, Redmond, Washington, August 17-19, 2015.

In many application areas, such as meteorology, traffic control, etc., it is desirable to employ swarms of Unmanned Aerial Vehicles (UAVs) to provide us with a good picture of the changing situation and thus, to help us make better predictions (and make better decisions based on these predictions). To avoid duplication, interference, and collisions, UAVs must coordinate their trajectories. As a result, the optimal control of each of these UAVs should depend on the positions and velocities of all others -- which makes the corresponding control problem very complicated. Since, in contrast to controlling a single UAV, the resulting problem is too complicated to expect an explicit solution, a natural idea is to extract expert rules and use fuzzy control methodology to translate these rules into a precise control strategy. However, with many possible combinations of variables, it is not possible to elicit that many rules.

In this paper, we show that, in general, it is possible to use model reduction techniques to decrease the number of questions and thus, to make rules elicitation possible. In addition to general results, we also show that for the UAVs, optimal control indeed leads to a model reduction -- and thus, to a drastic potential decrease in the corresponding number of questions.

Original file UTEP-CS-15-12 in pdf
Updated version UTEP-CS-15-12a in pdf


Technical Report UTEP-CS-15-11, January 2015
Updated version UTEP-CS-15-11a, March 2015
How geophysicists' intuition helps seismic data processing
Afshin Gholamy and Vladik Kreinovich

Published in Proceedings of the World Congress of the International Fuzzy Systems Association IFSA'2015, joint with the Annual Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2015, Gijon, Asturias, Spain, June 30 - July 3, 2015, pp. 749-756.

In geophysics, signals come with noise. It is desirable to minimize the effect of this noise. If we knew the probabilities of different values of signal and noise, we could use statistical filtering techniques. In geophysics, however, we rarely know the exact values of these probabilities; instead, we have to rely on the expertise and intuition of experts. We show how fuzzy techniques can transform this expertise into precise de-noising methods, we explain that the resulting methods indeed satisfy several natural requirements, and that these methods are in good accordance with heuristic techniques successfully used by geophysicists.

Original file UTEP-CS-15-11 in pdf
Updated version UTEP-CS-15-11a in pdf


Technical Report UTEP-CS-15-10, January 2015
Updated version UTEP-CS-15-10a, March 2015
How success in a task depends on the skills level: two uncertainty-based justifications of a semi-heuristic Rasch model
Joe Lorkowski, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the World Congress of the International Fuzzy Systems Association IFSA'2015, joint with the Annual Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2015, Gijon, Asturias, Spain, June 30 - July 3, 2015, pp. 506-511.

The more skills a student acquires, the more successful this student is with the corresponding tasks. Empirical data shows that the success in a task grows as a logistic function of skills; this dependence is known as the Rasch model. In this paper, we provide two uncertainty-based justifications for this model: the first justification provides a simple fuzzy-based intuitive explanation for this model, while the second -- more complex one -- explains the exact quantitative behavior of the corresponding dependence.
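For reference, the Rasch model mentioned above (standard form): the probability that a student with skill level $\theta$ succeeds in a task of difficulty $b$ is the logistic function

$$P(\mathrm{success}) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}.$$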

Original file UTEP-CS-15-10 in pdf
Updated version UTEP-CS-15-10a in pdf


Technical Report UTEP-CS-15-09, January 2015
How to Test Hypotheses When Exact Values are Replaced by Intervals to Protect Privacy: Case of t-Tests
Vladik Kreinovich and Christian Servin

Published in International Journal of Intelligent Technologies and Applied Statistics (IJITAS), 2015, Vol. 8, No. 2, pp. 93-102.

Researchers continuously look for possible relations between relevant quantities, e.g., relations which may help in preventing and curing diseases. Once a hypothesis is made about such a relation, it is necessary to test whether it is confirmed by the data. For such hypothesis testing, t-tests are most widely used. For example, a t-test can check, based on two samples, whether it is possible that they come from distributions with the same mean -- e.g., whether the average blood pressure after a proposed treatment is the same as before or it is provably smaller -- meaning that the tested treatment works.

In traditional statistics, we assume that we know the exact values of the corresponding quantities. In biomedical research, however, it is important to preserve patients' privacy and confidentiality -- and, knowing the exact values of all relevant parameters, one can uniquely identify the patient. One of the most efficient ways to preserve privacy is thus to replace the exact values with intervals containing such values. For example, instead of the exact age -- which can uniquely identify the patient -- we only store an interval containing this age: between 20 and 30, or between 30 and 40, etc.

Different values from the corresponding intervals lead, in general, to different values of the corresponding statistic. In such situations, to make sure that the data confirms the given hypothesis, we need to check that the corresponding statistic is within the desired interval for all possible values of the input quantities. In other words, we need to make sure that the whole range of possible values of the corresponding statistic is inside the desired interval. Computing this interval is, in general, NP-hard. In this paper, we provide efficient algorithms for computing t-tests under privacy-motivated interval uncertainty.
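As a naive baseline only (not the paper's efficient algorithms; the intervals below are made up), one can obtain an inner approximation of the range of the two-sample t-statistic by sampling values from the given intervals; the cost and unreliability of such brute-force approaches is exactly what motivates dedicated algorithms.

    import numpy as np
    from scipy import stats

    # Naive Monte Carlo inner approximation of the range of the two-sample
    # t-statistic when each data value is only known to lie in an interval.
    rng = np.random.default_rng(1)
    sample1 = [(118, 126), (130, 140), (121, 129)]   # hypothetical privacy-motivated bins
    sample2 = [(110, 118), (119, 127), (115, 123)]

    t_values = []
    for _ in range(10_000):
        x = [rng.uniform(lo, hi) for lo, hi in sample1]
        y = [rng.uniform(lo, hi) for lo, hi in sample2]
        t_values.append(stats.ttest_ind(x, y).statistic)

    print(f"inner approximation of the t-statistic range: [{min(t_values):.2f}, {max(t_values):.2f}]")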

File in pdf


Technical Report UTEP-CS-15-08, January 2015
Constraint Approach to Multi-Objective Optimization
Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the Eighth International Workshop on Constraints Programming and Decision Making CoProd'2015, El Paso, Texas, November 6, 2015; detailed version published in Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 21-26.

In many practical situations, we would like to maximize (or minimize) several different criteria, and it is not clear how much weight to assign to each of these criteria. Such situations are ubiquitous and thus, it is important to be able to solve the corresponding multi-objective optimization problems. There exist many heuristic methods for solving such problems. In this paper, we reformulate multi-objective optimization as a constraint satisfaction problem, and we show that this reformulation explains two widely used multi-objective optimization techniques: optimizing a weighted sum of the objective functions and optimizing the product of normalized values of these functions.
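In their usual form (standard background), the two techniques mentioned above scalarize objective functions $f_1(x), \ldots, f_m(x)$ that are to be maximized as

$$\max_x \sum_{i=1}^{m} w_i\, f_i(x) \qquad \text{or} \qquad \max_x \prod_{i=1}^{m} \frac{f_i(x)}{f_i^{\max}},$$

where $w_i \ge 0$ are weights and $f_i^{\max}$ is a normalizing constant (e.g., the largest attainable value of $f_i$).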

File in pdf


Technical Report UTEP-CS-15-07, January 2015
Testing a Power Law Model of Knowledge Propagation: Case Study of the Out of Eden Walk Project
Octavio Lerma, Leobardo Valera, Deana Pennington, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2016, Vol. 10, No. 1, pp. 39-47.

To improve teaching and learning, it is important to understand how knowledge propagates. In general, when a new piece of knowledge is introduced, people start learning about it. Since the potential audience is limited, after some time, the number of new learners starts to decrease. Traditional models of knowledge propagation are based on differential equations; in these models, the number of new learners decreases exponentially with time. Recently, a new power law model for knowledge propagation was proposed. In this model, the number of learners decreases much slower, as a negative power of time. In this paper, we compare the two models on the example of readers' comments on the Out of Eden Walk, a journalistic and educational project in which informative messages ("dispatches") from different parts of the world are regularly posted on the web. Readers who learned the new interesting information from these dispatches are encouraged to post comments. Usually, a certain proportion of readers post comments, so the number of comments posted at different times can be viewed as a measure characterizing the number of new learners. So, we check whether the number of comments is consistent with the power law or with the exponential law. To make a statistically reliable conclusion on which model is more adequate, we need to have a sufficient number of comments. It turns out that for the majority of dispatches with sufficiently many comments, the observed decrease is consistent with the power law (and none of them is consistent with the exponential law).
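In formulas, the two competing models for the number $n(t)$ of new learners (here, comments) at time $t$ after a dispatch is posted are

$$n(t) \approx C \cdot e^{-\lambda t} \quad \text{(traditional, differential-equation-based model)} \qquad \text{vs.} \qquad n(t) \approx C \cdot t^{-\alpha} \quad \text{(power law model)};$$

on a log-log plot the power law appears as a straight line, which makes the two models easy to distinguish once enough comments have accumulated.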

File in pdf


Technical Report UTEP-CS-15-06, January 2015
Optimizing pred(25) Is NP-Hard
Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the Eighth International Workshop on Constraints Programming and Decision Making CoProd'2015, El Paso, Texas, November 6, 2015; detailed version published in Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 33-38.

Usually, in data processing, to find the parameters of the model that best fits the data, people use the Least Squares method. One of the advantages of this method is that for linear models, it leads to an easy-to-solve system of linear equations. A limitation of this method is that even a single outlier can ruin the corresponding estimates; thus, more robust methods are needed. In particular, in software engineering, often, a more robust pred(25) method is used, in which we maximize the number of cases in which the model's prediction is within the 25% range of the observations. In this paper, we show that even for linear models, pred(25) parameter estimation is NP-hard.
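A minimal sketch of the pred(25) criterion itself (the criterion is standard; the data and the candidate model below are made up): the score is the fraction of cases in which the prediction is within 25% of the observation, and the NP-hard problem is to choose the model parameters that maximize this fraction.

    import numpy as np

    # pred(25): fraction of cases where the prediction is within 25% of the observation.
    def pred25(predicted, observed):
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        return np.mean(np.abs(predicted - observed) <= 0.25 * np.abs(observed))

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.5, 8.0, 15.0])   # made-up data; the last point is an outlier
    print(pred25(2.0 * x, y))                   # candidate linear model y ≈ 2x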

File in pdf


Technical Report UTEP-CS-15-05, January 2015
Why Right-Brain Cultures Are More Flexible: A Possible Explanation of Yu. Manin's Observation
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2015, Vol. 10, No. 4, pp. 175-180.

Yuri Manin, a renowned mathematician, observed that it is much easier for a person raised in a right-brain culture to adjust to the left-brain environment than vice versa. In this paper, we provide a possible explanation for this phenomenon.

File in pdf


Technical Report UTEP-CS-15-04, January 2015
When an Idea Comes, Write It Down Right Away: Mathematical Justification of Vladimir Smirnov's Advice
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2015, Vol. 34, pp. 90-93.

Among several pieces of advice to students, Vladimir Smirnov, a renowned Russian mathematician, suggested that when an idea comes, it is better to write it down right away. In this paper, we provide a quantitative justification for this advice.

File in pdf


Technical Report UTEP-CS-15-03, January 2015
Wiener-Process-Type Evasive Aircraft Actions Are Indeed Optimal Against Anti-Aircraft Guns: Wiener's Data Revisited
Vladik Kreinovich and Olga Kosheleva

Published in Mathematical Structures and Modeling, 2015, Vol. 34, pp. 85-89.

In his 1940s empirical study of evasive aircraft actions, N.~Wiener, the father of cybernetics, found that the pilots' actions follow a Wiener-type process. In this paper, we explain this empirical result by showing that such evasive actions are indeed optimal against the 1940s anti-aircraft guns.

File in pdf


Technical Report UTEP-CS-15-02, January 2015
Fuzzy (and interval) techniques in the age of big data: an overview with applications to environmental science, geosciences, engineering, and medicine
Vladik Kreinovich and Rujira Ouncharoen

Published in International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems, 2015, Vol. 23, Suppl. 1, pp. 75-89.

In some practical situations -- e.g., when treating a new illness -- we do not have enough data to make valid statistical conclusions. In such situations, it is necessary to use expert knowledge -- and thus, it is beneficial to use fuzzy techniques that were specifically designed to process such knowledge. At first glance, it may seem that in situations when we have large amounts of data, the relative importance of expert knowledge should decrease. However, somewhat surprisingly, it turns out that expert knowledge is still very useful in the current age of big data. In this paper, we explain how exactly (and why) expert knowledge is useful, and we overview efficient methods for processing this knowledge. This overview is illustrated by examples from environmental science, geosciences, engineering (in particular, aircraft maintenance and underwater robots), and medicine.

File in pdf


Technical Report UTEP-CS-15-01, January 2015
Revised version UTEP-CS-15-01b, July 2015
Towards the Possibility of Objective Interval Uncertainty
Luc Longpre, Olga Kosheleva, and Vladik Kreinovich

Published in: Proceedings of the 16th GAMM-IMACS International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation SCAN'2014, Wuerzburg, Germany, September 21-26, 2014, pp. 54-65.

Applications of interval computations usually assume that while we only know an interval containing the actual (unknown) value of a physical quantity, there exists an exact value of this quantity, and that in principle, we can get more and more accurate estimates of this value. Physicists know, however, that, due to the uncertainty principle, there are limitations on how accurately we can measure the values of physical quantities. One of the important principles of modern physics is operationalism -- that a physical theory should only use observable properties. This principle is behind most successes of the 20th century physics, starting with relativity theory (vs. un-observable aether) and quantum mechanics. From this viewpoint, it is desirable to avoid using un-measurable exact values and to modify the mathematical formalisms behind physical theories so that they explicitly only take objective uncertainty into account. In this paper, we describe how this can be done for objective interval uncertainty.

Original file UTEP-CS-15-01 in pdf
Revised version UTEP-CS-15-01b in pdf