Computer Science Department

Abstracts of 2017 Reports

Technical Report UTEP-CS-17-100, December 2017

How to Store Tensors in Computer Memory: An Observation

Martine Ceberio and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 46, pp. 107-117.

In this paper, after explaining the need to use tensors in computing, we analyze the question of how to best store tensors in computer memory. Somewhat surprisingly, with respect to a natural optimality criterion, the standard way of storing tensors turns out to be one of the optimal ones.
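As a rough illustration of what "the standard way of storing tensors" refers to (a sketch of our own, not code from the paper), the usual row-major layout places element t[i1][i2]...[id] at a lexicographically determined position in a flat array:

```python
# Row-major ("C order") layout: the standard way of storing a tensor
# as a flat array.  Function name and shapes are our own illustration.

def flat_index(idx, shape):
    """Map a multi-index to its position in row-major flat storage."""
    pos = 0
    for i, n in zip(idx, shape):
        pos = pos * n + i   # Horner scheme: ((i1*n2 + i2)*n3 + i3)...
    return pos

# a 2 x 3 x 4 tensor occupies 24 consecutive memory cells
shape = (2, 3, 4)
assert flat_index((0, 0, 0), shape) == 0
assert flat_index((1, 2, 3), shape) == 23   # the last element
assert flat_index((0, 1, 2), shape) == 6    # 0*12 + 1*4 + 2
```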

Technical Report UTEP-CS-17-99, December 2017

Updated version UTEP-CS-17-99a, April 2018

Updated version UTEP-CS-17-99b, June 2018

Why Taylor Models And Modified Taylor Models are Empirically Successful: A Symmetry-Based Explanation

Mioara Joldes, Christoph Lauter, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 8th International Workshop on
Reliable Engineering Computing REC'2018*, Liverpool, UK,
July 16-18, 2018.

In this paper, we show that symmetry-based ideas can explain the empirical success of Taylor models and modified Taylor models in representing uncertainty.

Original file UTEP-CS-17-99 in pdf

Updated version UTEP-CS-17-99a in pdf

Updated version UTEP-CS-17-99b in pdf

Technical Report UTEP-CS-17-98, December 2017

Updated version UTEP-CS-17-98a, May 2018

How to Best Apply Deep Neural Networks in Geosciences: Towards Optimal "Averaging" in Dropout Training

Afshin Gholamy, Justin Parra, Vladik Kreinovich, Olac Fuentes, and Elizabeth Anthony

To appear in:
Junzo Watada, Shing Chieng Tan, Pandian Vasant, Eswaran
Padmanabhan, and Lakhmi C. Jain (eds.), *Smart Unconventional
Modelling, Simulation and Optimization for Geosciences and
Petroleum Engineering*, Springer Verlag.

One of the main objectives of the geosciences is to find the current state of the Earth -- i.e., to solve the corresponding *inverse problems* -- and to use this knowledge for predicting future events, such as earthquakes and volcanic eruptions. In both inverse and prediction problems, machine learning techniques are often very efficient, and at present, the most efficient machine learning technique is the training of deep neural networks. To speed up this training, current learning algorithms use dropout techniques: they train several sub-networks on different portions of the data, and then "average" the results. A natural idea is to use the arithmetic mean for this "averaging", but empirically, the geometric mean works much better. In this paper, we provide a theoretical explanation for the empirical efficiency of selecting the geometric mean as the "averaging" in dropout training.
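The two kinds of "averaging" compared in this abstract can be sketched as follows (the sub-network output probabilities below are hypothetical, chosen only for illustration):

```python
import math

def arithmetic_mean(ps):
    return sum(ps) / len(ps)

def geometric_mean(ps):
    # exp of the average log: the "averaging" that, per the abstract,
    # empirically works better in dropout training
    return math.exp(sum(math.log(p) for p in ps) / len(ps))

# toy probabilities assigned to one class by three sub-networks
ps = [0.9, 0.5, 0.8]
print(arithmetic_mean(ps))  # ~0.733
print(geometric_mean(ps))   # ~0.711
```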

Original version UTEP-CS-17-98 in pdf

Updated version UTEP-CS-17-98a in pdf

Technical Report UTEP-CS-17-97, December 2017

Beyond Integration: A Symmetry-Based Approach to Reaching Stationarity in Economic Time Series

Songsak Sriboonchitta, Olga Kosheleva, and Vladik Kreinovich

To appear in: Olga Kosheleva, Sergey Shary, Gang Xiang, and Roman
Zapatrin (eds.), *Beyond Traditional Probabilistic Data
Processing Techniques: Interval, Fuzzy, etc. Methods and Their
Applications*, Springer, Cham, Switzerland, 2018.

Many efficient data processing techniques assume that the corresponding process is stationary. However, in areas like economics, most processes are not stationary: with the exception of stagnation periods, economies usually grow. A known way to apply stationarity-based methods to such processes -- integration -- is based on the fact that often, while the process itself is not stationary, its first or second differences are stationary. This idea works when the trend depends polynomially on time. In practice, the trend is usually non-polynomial: it is often exponentially growing, with cycles added. In this paper, we show how integration techniques can be expanded to such trends.
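The integration idea, and the natural log-transform extension for exponential trends, can be illustrated with a toy sketch of our own (not taken from the paper):

```python
import math

def differences(xs):
    """First differences of a series."""
    return [b - a for a, b in zip(xs, xs[1:])]

# a polynomial trend t**2 becomes constant (stationary) after two
# rounds of differencing...
trend = [t * t for t in range(10)]
assert set(differences(differences(trend))) == {2}

# ...but an exponential trend b * g**t does not; taking logarithms
# first reduces it to a linear trend, whose first differences are
# the constant log(g)
expo = [5.0 * 1.03 ** t for t in range(10)]
log_diff = differences([math.log(x) for x in expo])
assert all(abs(d - math.log(1.03)) < 1e-12 for d in log_diff)
```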

Technical Report UTEP-CS-17-96, December 2017

Why Sparse?

Thongchai Dumrongpokaphan, Olga Kosheleva, Vladik Kreinovich, and Aleksandra Belina

To appear in: Olga Kosheleva, Sergey Shary, Gang Xiang, and Roman
Zapatrin (eds.), *Beyond Traditional Probabilistic Data
Processing Techniques: Interval, Fuzzy, etc. Methods and Their
Applications*, Springer, Cham, Switzerland, 2018.

In many situations, a solution to a practical problem
is *sparse*, i.e., corresponds to the case when most of the
parameters describing the solution are zeros, and only a few
attain non-zero values. This surprising empirical phenomenon helps
solve the corresponding problems -- but it remains unclear why
this phenomenon happens. In this paper, we provide a possible
theoretical explanation for this mysterious phenomenon.

Technical Report UTEP-CS-17-95, December 2017

Why Deep Learning Methods Use KL Divergence Instead of Least Squares: A Possible Pedagogical Explanation

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 46, pp. 102-106.

In most applications of data processing, we select the parameters that minimize the mean square approximation error. The same Least Squares approach has been used in traditional neural networks. However, for deep learning, it turns out that an alternative idea works better -- namely, minimizing the Kullback-Leibler (KL) divergence. The use of KL divergence is justified if we predict probabilities, but the use of this divergence has been successful in other situations as well. In this paper, we provide a possible explanation for this empirical success. Namely, the Least Squares approach is optimal when the approximation error is normally distributed -- and can lead to wrong results when the actual distribution is different from normal. The need for a robust criterion, i.e., a criterion that does not depend on the corresponding distribution, naturally leads to the KL divergence.
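The two criteria can be written down concretely; for a one-hot target distribution, the KL divergence reduces to the familiar cross-entropy term -log q of the true class. A small sketch (the numbers are hypothetical):

```python
import math

def least_squares(p, q):
    """Mean square difference between prediction q and target p."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) / len(p)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q); 0*log(0) taken as 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

target = [1.0, 0.0, 0.0]        # true (one-hot) class distribution
predicted = [0.7, 0.2, 0.1]
print(kl_divergence(target, predicted))   # -log(0.7) ≈ 0.357
```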

Technical Report UTEP-CS-17-94, December 2017

How to Make A Proof of Halting Problem More Convincing: A Pedagogical Remark

Benjamin W. Robertson, Vladik Kreinovich, and Olga Kosheleva

Published in *International Mathematical Forum*, 2018, Vol. 13,
No. 1, pp. 9-13.

As an example of an algorithmically undecidable problem, most textbooks list the impossibility of checking whether a given program halts on given data. A usual proof of this result is based on the assumption that the hypothetical halt-checker works for *all* programs. To show that a halt-checker is impossible, we design an auxiliary program for which the existence of such a halt-checker leads to a contradiction. However, this auxiliary program is usually very artificial. So, a natural question arises: what if we only require that the halt-checker work for *reasonable* programs? In this paper, we show that even with such a restriction, halt-checkers are not possible -- and thus, we make the proof of the halting problem more convincing for students.

Technical Report UTEP-CS-17-93, December 2017

Revised version UTEP-CS-17-93a, March 2018

Why Triangular Membership Functions Are Often Efficient in F-Transform Applications: Relation to Probabilistic and Interval Uncertainty and to Haar Wavelets

Olga Kosheleva and Vladik Kreinovich

Published in J. Medina et al. (eds.), *Proceedings of the 17th
International Conference on Information Processing and Management
of Uncertainty in Knowledge-Based Systems IPMU'2018*, Cadiz, Spain,
June 11-15, 2018.

Fuzzy techniques describe expert opinions. At first glance, we would therefore expect that the more accurately the corresponding membership functions describe the expert's opinions, the better the corresponding results. In practice, however, contrary to these expectations, the simplest -- and not very accurate -- triangular membership functions often work the best. In this paper, on the example of the use of membership functions in F-transform techniques, we provide a possible theoretical explanation for this surprising empirical phenomenon.
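The objects discussed here -- triangular membership functions forming a uniform fuzzy partition, and the direct F-transform components they induce (weighted local averages of the function) -- are a standard textbook construction, sketched below for illustration; the grid, nodes, and test signal are our own choices:

```python
def triangular(x, center, h):
    """Triangular membership function with support [center-h, center+h]."""
    return max(0.0, 1.0 - abs(x - center) / h)

def f_transform(xs, fs, centers, h):
    """Direct F-transform: one weighted average per basis function."""
    comps = []
    for c in centers:
        w = [triangular(x, c, h) for x in xs]
        comps.append(sum(wi * fi for wi, fi in zip(w, fs)) / sum(w))
    return comps

xs = [i / 100 for i in range(101)]
fs = [2 * x + 1 for x in xs]                    # a linear signal
comps = f_transform(xs, fs, [0.25, 0.5, 0.75], 0.25)
print(comps)   # for a linear signal, components ≈ values at the nodes
```

For a linear signal the symmetric triangular weights reproduce the function values 1.5, 2.0, 2.5 at the three nodes, which is one way to see why this simple choice behaves well.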

Original file UTEP-CS-17-93 in pdf

Updated file UTEP-CS-17-93a in pdf

Technical Report UTEP-CS-17-92, December 2017

Updated version UTEP-CS-17-92b, August 2018

Z-Numbers: How They Describe Student Confidence and How They Can Explain (and Improve) Laplacian and Schroedinger Eigenmap Dimension Reduction in Data Analysis

Vladik Kreinovich, Olga Kosheleva, and Michael Zakharevich

To appear in: Christophe Marsala and Marie-Jeanne Lesot (eds.),
*A Fuzzy Dictionary of Fuzzy Modelling: Common Concepts and
Perspectives*, Springer, Cham, Switzerland.

Experts have different degrees of confidence in their statements. To describe these different degrees of confidence, Lotfi A. Zadeh proposed the notion of a Z-number: a fuzzy set (or other type of uncertainty) supplemented by a degree of confidence in the statement corresponding to fuzzy sets. In this chapter, we show that Z-numbers provide a natural formalization of the competence-vs-confidence dichotomy, which is especially important for educating low-income students. We also show that Z-numbers provide a natural theoretical explanation for several empirically heuristic techniques of dimension reduction in data analysis, such as Laplacian and Schroedinger eigenmaps, and, moreover, show how these methods can be further improved.

Original file UTEP-CS-17-92 in pdf

Updated version UTEP-CS-17-92b in pdf

Technical Report UTEP-CS-17-91, November 2017

Sudoku App: Model-Driven Development of Android Apps Using OCL?

Yoonsik Cheon and Aditi Barua

Model driven development (MDD) shifts the focus of software development from writing code to building models by developing an application as a series of transformations on models including eventual code generation. Can the key ideas of MDD be applied to the development of Android apps, one of the most popular mobile platforms of today? To answer this question, we perform a small case study of developing an Android app for playing Sudoku puzzles. We use the Object Constraint Language (OCL) as the notation for creating precise models and translate OCL constraints to Android Java code. Our findings are mixed in that there is a great opportunity for generating a significant amount of both platform-neutral and Android-specific code automatically but there is a potential concern on the memory efficiency of the generated code. We also point out several shortcomings of OCL in writing precise and complete specifications for UML models and suggest a few extensions and improvements to make it more expressive and suitable for MDD. The reader is assumed to be familiar with OCL.

Technical Report UTEP-CS-17-90, November 2017

The Sums of m

John McClure, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 45, pp. 49-51.

Students studying physics sometimes ask a natural question: the momentum -- the sum of m_{i} * v_{i} -- is preserved, and the energy -- one half of the sum of m_{i} * v_{i}^{2} -- is preserved, so why not the sum of m_{i} * v_{i}^{3}? In this paper, we give a simple answer to this question.
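The phenomenon behind the question can be checked numerically on a 1-D elastic collision (our own illustration, not the paper's answer): momentum and energy come out conserved, while the sum of m * v^3 does not.

```python
# One-dimensional elastic collision: momentum (sum of m * v) and
# energy (half the sum of m * v^2) are conserved, but the sum of
# m * v^3 is not.  The masses and velocities are arbitrary test data.

def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities in a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 1.0, 3.0, 2.0, 0.0
u1, u2 = elastic_collision(m1, v1, m2, v2)

momentum = lambda a, b: m1 * a + m2 * b
energy = lambda a, b: 0.5 * (m1 * a**2 + m2 * b**2)
cubes = lambda a, b: m1 * a**3 + m2 * b**3

assert momentum(v1, v2) == momentum(u1, u2)   # 3.0 before and after
assert energy(v1, v2) == energy(u1, u2)       # 4.5 before and after
assert cubes(v1, v2) != cubes(u1, u2)         # 27.0 before, 15.0 after
```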

Technical Report UTEP-CS-17-89, November 2017

Revised version UTEP-CS-17-89b, January 2018

Can Mass Be Negative?

Vladik Kreinovich and Sergei Soloviev

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 45, pp. 43-48.

Overcoming the force of gravity is an important part of space travel and a significant obstacle preventing many seemingly reasonable space travel schemes from becoming practical. Science fiction writers like to imagine materials that may help to make space travel easier. Negative mass -- supposedly causing anti-gravity -- is one of the popular ideas in this regard. But can mass be negative? In this paper, we show that negative masses are not possible -- their existence would enable us to create energy out of nothing, which contradicts the energy conservation law.

Original file UTEP-CS-17-89 in pdf

Revised version UTEP-CS-17-89b in pdf

Technical Report UTEP-CS-17-88, November 2017

Propagation of Probabilistic Uncertainty: The Simplest Case (A Brief Pedagogical Introduction)

Olga Kosheleva and Vladik Kreinovich

Published in *International Mathematical Forum*, 2017,
Vol. 12, No. 20, pp. 943-952.

The main objective of this text is to provide a brief introduction to formulas describing the simplest case of propagation of probabilistic uncertainty -- for students who have not yet taken a probability course.
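The simplest case referred to here is presumably the sum of independent measurements, where standard deviations add in quadrature; a one-formula sketch (our own, not from the text):

```python
import math

def sum_sigma(sigmas):
    """Standard deviation of a sum of independent measurement errors:
    sigma = sqrt(sigma_1^2 + ... + sigma_n^2)."""
    return math.sqrt(sum(s * s for s in sigmas))

# two measurements accurate to 0.3 and 0.4: their sum is accurate
# to 0.5, not 0.7 -- independent errors partially cancel
print(sum_sigma([0.3, 0.4]))   # ≈ 0.5
```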

Technical Report UTEP-CS-17-87, September 2017

Need for a Large-N Array (and Wavelets and Differences) to Determine the Assumption-free 3-D Earth Model

Solymar Ayala Cortez, Aaron Velasco, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2018, Vol. 12,
No. 3, pp. 171-175.

One of the main objectives of geophysical seismic analysis is to determine the Earth's structure. Usually, to determine this structure, geophysicists supplement the measurement results with additional geophysical assumptions. An important question is: when is it possible to reconstruct the Earth's structure uniquely based on the measurement results only, without the need to use any additional assumptions? In this paper, we show that for this, one needs to use large-N arrays -- 2-D arrays of seismic sensors. To actually perform this reconstruction, we need to use differences between measurements by neighboring sensors, and we need to apply wavelet analysis to the corresponding seismic signals.

Technical Report UTEP-CS-17-86, September 2017

Maximum Entropy Approach to Interbank Lending: Towards a More Accurate Algorithm

Thach N. Nguyen, Olga Kosheleva, and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 45-51.

Banks loan money to and borrow money from each other. To minimize the risk caused by a possible default of one of the banks, a reasonable idea is to evenly spread the lending between different banks. A natural way to formalize this evenness requirement is to select the interbank amounts for which the entropy is the largest possible. The existing algorithms for solving the resulting constrained optimization problem provide only an approximate solution. In this paper, we propose a new algorithm that provides the exact solution to the maximum-entropy interbank lending problem.
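In the simplest unconstrained setting, the maximum-entropy matrix with given lending and borrowing totals has a well-known closed form (the outer product of the marginals divided by the total). The sketch below shows only this classical baseline; the paper's exact algorithm addresses the genuinely constrained problem, which the abstract does not spell out:

```python
def max_entropy_matrix(lend_totals, borrow_totals):
    """Classical maximum-entropy matrix with the given row sums
    (total lending) and column sums (total borrowing)."""
    total = sum(lend_totals)
    assert abs(total - sum(borrow_totals)) < 1e-9
    return [[r * c / total for c in borrow_totals] for r in lend_totals]

x = max_entropy_matrix([30.0, 70.0], [40.0, 60.0])
# the prescribed row and column totals are reproduced
assert [sum(row) for row in x] == [30.0, 70.0]
assert [sum(col) for col in zip(*x)] == [40.0, 60.0]
```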

Technical Report UTEP-CS-17-85, September 2017

Probabilistic Graphical Models Follow Directly From Maximum Entropy

Anh H. Ly, Francisco Zapata, Olac Fuentes, and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 1-4.

Probabilistic graphical models are a very efficient machine learning technique. However, their only known justification is based on heuristic ideas, ideas that do not explain why exactly these models are empirically successful. It is therefore desirable to come up with a theoretical explanation for these models' empirical efficiency. At present, the only such explanation is that these models naturally emerge if we maximize the relative entropy; however, why the relative entropy should be maximized is not clear. In this paper, we show that these models can also be obtained from a more natural -- and well-justified -- idea of maximizing (absolute) entropy.

Technical Report UTEP-CS-17-84, September 2017

Impacts of Java Language Features on the Memory Performances of Android Apps

Adriana Escobar De La Torre and Yoonsik Cheon

Android apps are written in Java, but unlike Java applications they are resource-constrained in storage capacity and battery lifetime. In this document, we perform an experiment to measure quantitatively the impact of Java language and standard API features on the memory efficiency of Android apps. We focus on garbage collection because it is a critical process affecting performance and user experience. We learned that even Java language constructs and standard application programming interfaces (APIs) may be the source of a performance problem, causing a significant memory overhead for Android apps. Any critical section of code needs to be scrutinized for the use of these Java features.

Technical Report UTEP-CS-17-83, August 2017

Does the Universe Really Expand Faster than the Speed of Light: Kinematic Analysis Based on Special Relativity and Copernican Principle

Reynaldo Martinez and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 44, pp. 66-72.

In the first approximation, the Universe's expansion is described by Hubble's law v = H * R, according to which the relative speed v of two objects in the expanding Universe grows linearly with the distance R between them. This law can be derived from the Copernican principle, according to which, cosmology-wise, there is no special location in the Universe, and thus, the expanding Universe should look the same from every starting point. The problem with Hubble's formula is that for large distances, it leads to non-physical larger-than-speed-of-light velocities. Since the Universe's expansion is a consequence of Einstein's General Relativity Theory (GRT), this problem is usually handled by taking into account the curved character of space-time in GRT. In this paper, we consider this problem from a purely kinematic viewpoint. We show that if we take into account special-relativistic effects when applying the Copernican principle, we get a modified version of Hubble's law in which all the velocities are physically meaningful -- in the sense that they never exceed the speed of light.
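The special-relativistic effect invoked here is the Einstein velocity-addition formula, under which composing any number of subluminal velocity increments never exceeds c. A toy sketch of this kinematic point (our own illustration, not the paper's derivation):

```python
C = 1.0  # speed of light in natural units

def add_velocities(v1, v2):
    """Einstein velocity-addition formula of special relativity."""
    return (v1 + v2) / (1 + v1 * v2 / C**2)

# naive Galilean addition of 10 increments of 0.2c would give 2c;
# relativistic composition stays strictly below c
v = 0.0
for _ in range(10):
    v = add_velocities(v, 0.2 * C)
print(v)   # below 1.0
assert v < C
```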

Technical Report UTEP-CS-17-82, August 2017

Revised version UTEP-CS-17-82a, December 2017

Revised version UTEP-CS-17-82b, December 2017

Revised version UTEP-CS-17-82c, January 2018

Fuzzy Analogues of Sets and Functions Can Be Uniquely Determined from the Corresponding Ordered Category: A Theorem

Christian Servin, Gerardo D. Muela, and Vladik Kreinovich

Published in *Axioms*, 2018, Vol. 7, Paper 8,
doi:10.3390/axioms7010008; reprinted in: Esteban Indurain,
Humberto Bustince, and Javier Fernandez (eds.), *New Trends in
Fuzzy Set Theory and Related Items*, MDPI, Basel, 2018,
pp. 118-124.

In modern mathematics, many concepts and ideas are described in terms of category theory. From this viewpoint, it is desirable to analyze what can be determined if, instead of the basic category of sets, we consider a similar category of fuzzy sets. In this paper, we describe a natural fuzzy analog of the category of sets and functions, and we show that, in this category, fuzzy relations (a natural fuzzy analogue of functions) can be determined in category terms -- of course, modulo 1-1 mapping of the corresponding universe of discourse and 1-1 re-scaling of fuzzy degrees.

Original file UTEP-CS-17-82 in pdf

Revised version UTEP-CS-17-82a in pdf

Revised version UTEP-CS-17-82b in pdf

Revised version UTEP-CS-17-82c in pdf

Technical Report UTEP-CS-17-81, August 2017

Efficient Parameter-Estimating Algorithms for Symmetry-Motivated Models: Econometrics and Beyond

Vladik Kreinovich, Anh H. Ly, Olga Kosheleva, and Songsak Sriboonchitta

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 134-145.

It is known that symmetry ideas can explain the empirical success of many non-linear models. This explanation makes these models theoretically justified and thus, more reliable. However, the models remain non-linear, and thus, identification of the model's parameters based on the observations remains a computationally expensive nonlinear optimization problem. In this paper, we show that symmetry ideas can not only help to select and justify a nonlinear model, they can also help us design computationally efficient almost-linear algorithms for identifying the model's parameters.

Technical Report UTEP-CS-17-80, August 2017

Is It Legitimate Statistics or Is It Sexism: Why Discrimination Is Not Rational

Martha Osegueda Escobar, Vladik Kreinovich, and Thach N. Nguyen

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 235-242.

While in the ideal world, everyone should have the same chance to succeed in a given profession, in reality, the probability of success is often different for people of different genders and/or ethnicities. For example, in the US, the probability that a female undergraduate student in computer science will get a PhD is lower than the corresponding probability for a male student. At first glance, it may seem that in such a situation, if we try to maximize our gain and we have a limited amount of resources, it is reasonable to concentrate on students with the higher probability of success -- i.e., on males -- and that only moral considerations prevent us from pursuing this seemingly economically optimal discriminatory strategy. In this paper, we show that this first impression is wrong: the discriminatory strategy is not only morally wrong, it is also not optimal -- and the morally preferable inclusive strategy is actually also economically better.

Technical Report UTEP-CS-17-79, August 2017

Maximum Entropy Beyond Selecting Probability Distributions

Thach N. Nguyen, Olga Kosheleva, and Vladik Kreinovich

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 186-195.

Traditionally, the Maximum Entropy technique is used to select a probability distribution in situations when several different probability distributions are consistent with our knowledge. In this paper, we show that this technique can be extended beyond selecting probability distributions, to explain facts, numerical values, and even types of functional dependence.

Technical Report UTEP-CS-17-78, August 2017

Revised version UTEP-CS-17-78b, September 2017

An Ancient Bankruptcy Solution Makes Economic Sense

Anh H. Ly, Michael Zakharevich, Olga Kosheleva, and Vladik Kreinovich

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 152-160.

While econometrics is a reasonably recent discipline, quantitative solutions to economic problems have been proposed since ancient times. In particular, solutions have been proposed for the bankruptcy problem: how to divide the assets between the claimants? One of the challenges of analyzing ancient solutions to economic problems is that these solutions are often presented not as a general algorithm, but as a sequence of examples. When there are only a few such examples, it is often difficult to convincingly extract a general algorithm from them. This was the case, for example, for the supposedly fairness-motivated Talmudic solution to the bankruptcy problem: only in the mid-1980s did the Nobelist Robert Aumann succeed in coming up with a convincing general algorithm explaining the original examples. What remained not so clear in Aumann's explanation is why this particular algorithm best reflects the corresponding idea of fairness. In this paper, we find a simple economic explanation for this algorithm.
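Aumann's reading of the Talmudic rule can be stated algorithmically: if the estate is at most half the total claims, the half-claims are divided by constrained equal awards; otherwise the losses are divided the same way. The sketch below is our own rendering of this well-known rule, checked against the classical Talmudic examples; the paper's economic explanation of the rule is a separate matter.

```python
def constrained_equal_awards(caps, amount, iters=100):
    """Awards min(cap_i, lam) summing to `amount`, found by bisection."""
    lo, hi = 0.0, max(caps)
    for _ in range(iters):
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in caps) < amount:
            lo = lam
        else:
            hi = lam
    return [min(c, lam) for c in caps]

def talmud_rule(claims, estate):
    """The Talmudic bankruptcy division, as formalized by Aumann."""
    half = [c / 2 for c in claims]
    if estate <= sum(half):
        return constrained_equal_awards(half, estate)
    losses = constrained_equal_awards(half, sum(claims) - estate)
    return [c - l for c, l in zip(claims, losses)]

# the classical examples: claims of 100, 200 and 300
print(talmud_rule([100, 200, 300], 200))  # ≈ [50, 75, 75]
print(talmud_rule([100, 200, 300], 300))  # ≈ [50, 100, 150]
```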

Original file UTEP-CS-17-78 in pdf

Revised version UTEP-CS-17-78b in pdf

Technical Report UTEP-CS-17-77, August 2017

Almost All Diophantine Sets Are Undecidable

Vladik Kreinovich

Published in *International Mathematical Forum*,
2017, Vol. 12, No. 16, pp. 803-806.

The known 1970 solution to the 10th Hilbert problem says that no algorithm is possible that would decide whether a given Diophantine equation has a solution. In set terms, this means that not all Diophantine sets are decidable. In a posting to the Foundations of Mathematics mailing list, Timothy Y. Chow asked for a possible formal justification for his impression that most Diophantine sets are not decidable. One such possible justification is presented in this paper.

Technical Report UTEP-CS-17-76, August 2017

Updated version UTEP-CS-17-76a, December 2017

Why Rectified Linear Neurons Are Efficient: A Possible Theoretical Explanation

Olac Fuentes, Justin Parra, Elizabeth Anthony, and Vladik Kreinovich

To appear in: Olga Kosheleva, Sergey Shary, Gang Xiang, and Roman
Zapatrin (eds.), *Beyond Traditional Probabilistic Data
Processing Techniques: Interval, Fuzzy, etc. Methods and Their
Applications*, Springer, Cham, Switzerland, 2019.

Traditionally, neural networks used a sigmoid activation function. Recently, it turned out that piecewise linear activation functions are much more efficient -- especially in deep learning applications. However, so far, there has been no convincing theoretical explanation for this empirical efficiency. In this paper, we provide such an explanation.

Original file UTEP-CS-17-76 in pdf

Updated version UTEP-CS-17-76a in pdf

Technical Report UTEP-CS-17-75, August 2017

Updated version UTEP-CS-17-75a, December 2017

Do It Today Or Do It Tomorrow: Empirical Non-Exponential Discounting Explained by Symmetry Ideas

Francisco Zapata, Olga Kosheleva, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in: Van-Nam Huynh, Masahiro Inuiguchi, Dang-Hung Tran, and
Thierry Denoeux (eds.), *Proceedings of the International Symposium
on Integrated Uncertainty in Knowledge Modelling and Decision Making
IUKM'2018*, Hanoi, Vietnam, March 13-15, 2018.

At first glance, it seems to make sense to conclude that when a 1 dollar reward tomorrow is equivalent to a D < 1 dollar reward today, the day-after-tomorrow's 1 dollar reward would be equivalent to D * D = D^{2} dollars today, and, in general, a reward after time t is equivalent to D(t) = D^{t} dollars today. This *exponential discounting* function D(t) was indeed proposed by economists, but it does not reflect actual human behavior. Indeed, according to this formula, the effect of distant future events is negligible, and thus, it would be reasonable for a person to take on huge loans or engage in unhealthy behavior even when the long-term consequences will be disastrous. In real life, few people behave like that, since the actual empirical discounting function is different: it is hyperbolic, D(t) = 1 / (1 + k * t). In this paper, we use symmetry ideas to explain this empirical phenomenon.
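The contrast between the two discounting functions is easy to see numerically; the parameter values below (D = 0.9, k = 0.1) are our own illustrative choices:

```python
def exponential(t, d=0.9):
    """Exponential discounting D(t) = d**t."""
    return d ** t

def hyperbolic(t, k=0.1):
    """Hyperbolic discounting D(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

# after 100 time units the exponential weight is vanishingly small,
# so distant consequences would be negligible; the hyperbolic weight
# is still about 9% of today's value
assert exponential(100) < 1e-4
assert hyperbolic(100) > 0.09
```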

Original file UTEP-CS-17-75 in pdf

Updated version UTEP-CS-17-75a in pdf

Technical Report UTEP-CS-17-74, July 2017

Updated version UTEP-CS-17-74a, December 2017

Quantum Econometrics: How to Explain Its Quantitative Successes and How the Resulting Formulas Are Related to Scale Invariance, Entropy, and Fuzziness

Kittawit Autchariyapanitkul, Olga Kosheleva, Vladik Kreinovich, and Songsak Sriboonchitta

Published in: Van-Nam Huynh, Masahiro Inuiguchi, Dang-Hung Tran, and
Thierry Denoeux (eds.), *Proceedings of the International Symposium
on Integrated Uncertainty in Knowledge Modelling and Decision Making
IUKM'2018*, Hanoi, Vietnam, March 13-15, 2018.

Many aspects of human behavior seem to be well-described by formulas of quantum physics. In this paper, we explain this phenomenon by showing that the corresponding quantum-looking formulas can be derived from the general ideas of scale invariance and fuzziness. We also use these ideas to derive a general family of formulas that include non-quantum and quantum probabilities as particular cases -- formulas that may be more adequate for describing human behavior than purely non-quantum or purely quantum ones.

Original file UTEP-CS-17-74 in pdf

Revised version UTEP-CS-17-74a in pdf

Technical Report UTEP-CS-17-73, July 2017

How to Use Absolute-Error-Minimizing Software to Minimize Relative Error: Practitioner's Guide

Afshin Gholamy and Vladik Kreinovich

Published in *International Mathematical Forum*, 2017, Vol. 12,
No. 16, pp. 763-770.

In many engineering and scientific problems, there is a need to
find the parameters of a dependence from the experimental data.
There exist several software packages that find the values for
these parameters -- values for which the mean square value of the
absolute approximation error is the smallest. In practice,
however, we are often interested in minimizing the mean square
value of the *relative* approximation error. In this paper, we
show how we can use the absolute-error-minimizing software to
minimize the relative error.
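The underlying trick can be sketched for a linear model: dividing each equation a * x_i + b = y_i by y_i turns relative-error minimization into an ordinary least-squares problem with rows (x/y, 1/y) and target 1. This is our own illustration of the general idea, under the assumption of a linear model; the paper's practitioner's guide may differ in details.

```python
def fit_relative(xs, ys):
    """Fit y ≈ a*x + b minimizing the mean square *relative* error,
    via the normal equations of the y-scaled least-squares problem."""
    s_xx = sum((x / y) ** 2 for x, y in zip(xs, ys))
    s_x1 = sum(x / y ** 2 for x, y in zip(xs, ys))
    s_11 = sum(1 / y ** 2 for y in ys)
    t_x = sum(x / y for x, y in zip(xs, ys))
    t_1 = sum(1 / y for y in ys)
    det = s_xx * s_11 - s_x1 ** 2
    a = (t_x * s_11 - s_x1 * t_1) / det
    b = (s_xx * t_1 - s_x1 * t_x) / det
    return a, b

# on exactly linear data, the true coefficients are recovered
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
a, b = fit_relative(xs, ys)
assert abs(a - 2) < 1e-9 and abs(b - 1) < 1e-9
```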

Technical Report UTEP-CS-17-72, July 2017

Revised version UTEP-CS-17-72a, December 2017

Granular Approach to Data Processing Under Probabilistic Uncertainty

Andrzej Pownuk and Vladik Kreinovich

In many real-life situations, we need to process measurement results. Due to inevitable measurement errors, the measurement results are, in general, somewhat different from the actual (unknown) values of the corresponding quantities. As a result, the value that we obtain by processing the measurement results is, in general, different from what we would have got if we were able to process the actual (exact) values. In many practical situations, it is important to know how accurate the resulting estimate is. In such situations, processing data under probabilistic uncertainty involves not only processing the measurement results, but also providing a probability distribution describing how accurate the result of this processing is.

There exist several algorithms for such data processing under probabilistic uncertainty, but the existing algorithms often require too much computation time. To speed up the corresponding computations, we take into account the fact that in many real-life situations, uncertainty can be naturally described as a combination of several components, components which are described by different granules. In such situations, to process this uncertainty, it is often beneficial to take this granularity into account by processing these granules separately and then combining the results.

In this paper, we show that granular computing can help even in situations when there is no such natural decomposition into granules: namely, we can often speed up processing of uncertainty if we first (artificially) decompose the original uncertainty into appropriate granules.
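The problem setting described above can be illustrated by the straightforward Monte Carlo approach: propagate simulated measurement errors through the data-processing algorithm and examine the distribution of the results. This sketch (with arbitrary example numbers) illustrates only the baseline computation, not the paper's granular speed-up:

```python
import math
import random

random.seed(0)

def process(x1, x2):
    """Any data-processing algorithm; here, a simple product."""
    return x1 * x2

meas = (10.0, 5.0)      # measurement results
sigma = (0.1, 0.2)      # standard deviations of the measurement errors

# simulate measurement errors and re-run the processing many times
results = [process(meas[0] + random.gauss(0, sigma[0]),
                   meas[1] + random.gauss(0, sigma[1]))
           for _ in range(100_000)]
mean = sum(results) / len(results)
std = math.sqrt(sum((r - mean) ** 2 for r in results) / len(results))
print(mean, std)   # mean ≈ 50; std ≈ sqrt((5*0.1)**2 + (10*0.2)**2) ≈ 2.06
```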

Original file UTEP-CS-17-72 in pdf

Revised version UTEP-CS-17-72a in pdf

Technical Report UTEP-CS-17-71, July 2017

What Is the Optimal Bin Size of a Histogram: An Informal Description

Afshin Gholamy and Vladik Kreinovich

Published in *International Mathematical Forum*, 2017, Vol. 12,
No. 15, pp. 731-736.

A natural way to estimate the probability density function of an unknown distribution from the sample of data points is to use histograms. The accuracy of the estimate depends on the size of the histogram's bins. There exist heuristic rules for selecting the bin size. In this paper, we show that these rules indeed provide the optimal value of the bin size.
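One widely used heuristic of the kind discussed here is the Freedman-Diaconis rule, which sets the bin width to 2 * IQR * n^(-1/3); we include it only as an illustrative example of such rules, since the abstract does not say which ones the paper analyzes:

```python
def freedman_diaconis_width(data):
    """Bin width h = 2 * IQR * n**(-1/3), with simple index-based
    quartile estimates."""
    xs = sorted(data)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]
    return 2 * (q3 - q1) * n ** (-1.0 / 3.0)

data = [float(i) for i in range(1000)]   # 1000 evenly spread points
print(freedman_diaconis_width(data))     # IQR = 500, so h ≈ 100
```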

Technical Report UTEP-CS-17-70, June 2017

Quantum Ideas in Economics Beyond Quantum Econometrics

Vladik Kreinovich, Hung T. Nguyen, and Songsak Sriboonchitta

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 146-151.

It is known that computational methods developed for
solving equations of quantum physics can be successfully applied
to solve economic problems; there is a whole related research area
called *quantum econometrics*. Current quantum econometrics
techniques are based on a purely mathematical similarity between
the corresponding equations, without any attempt to relate the
underlying ideas. We believe that the fact that quantum equations
can be successfully applied in economics indicates that there is a
deeper relation between these areas, beyond a mathematical
similarity. In this paper, we show that there is indeed a deep
relation between the main ideas of quantum physics and the main
ideas behind econometrics.

Technical Report UTEP-CS-17-69, June 2017

Why some physicists are excited about the undecidability of the spectral gap problem and why should we

Vladik Kreinovich

Published in *Bulletin of the European Association for
Theoretical Computer Science*, 2017, Vol. 122, pp. 100-113.

Since Turing's time, many problems have been proven undecidable. It is interesting, though, that, arguably, no problem of interest to working physicists had ever been proven undecidable -- until T. Cubitt, D. Perez-Garcia, and M. M. Wolf recently proved that, for a physically reasonable class of systems, no algorithm can decide whether a given system has a spectral gap. We explain the spectral gap problem, its importance for physics, and the possible consequences of this exciting new result.

Technical Report UTEP-CS-17-68, June 2017

Markowitz Portfolio Theory Helps Decrease Medicines' Side Effect and Speed Up Machine Learning

Thongchai Dumrongpokaphan and Vladik Kreinovich

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 86-93.

In this paper, we show that, similarly to the fact that distributing the investment between several independent financial instruments decreases the investment risk, using a combination of several medicines can decrease the medicines' side effects. Moreover, the formulas for optimal combinations of medicine are the same as the formulas for the optimal portfolio, formulas first derived by the Nobel-prize winning economist H. M. Markowitz. A similar application to machine learning explains a recent success of a modified neural network in which the input neurons are also directly connected to the output ones.
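
As a minimal illustration of the diversification effect described above: for *independent* instruments (or, in the paper's analogy, medicines), the minimum-variance weights are proportional to the inverse variances. This is a special case of Markowitz's general formulas; the sketch and its names are ours:

```python
def min_variance_weights(variances):
    """Minimum-variance weights for *independent* instruments
    (diagonal covariance matrix): w_i proportional to 1/sigma_i^2,
    normalized to sum to 1."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [x / total for x in inv]

variances = [1.0, 4.0]
w = min_variance_weights(variances)  # -> [0.8, 0.2]
# resulting variance of the mix is sum of w_i^2 * sigma_i^2,
# smaller than the variance of either instrument alone
portfolio_var = sum(wi * wi * vi for wi, vi in zip(w, variances))
```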

Technical Report UTEP-CS-17-67, June 2017

Updated version UTEP-CS-17-67a, July 2017

Practical Need for Algebraic (Equality-Type) Solutions of Interval Equations and for Extended-Zero Solutions

Ludmila Dymova, Pavel Sevastjanov, Andrzej Pownuk, and Vladik Kreinovich

To appear in *Proceedings of the 12th International Conference on
Parallel Processing and Applied Mathematics PPAM'17*,
Lublin, Poland, September 10-13, 2017

One of the main problems in interval computations is solving systems of equations under interval uncertainty. Usually, interval computation packages consider united, tolerance, and control solutions. In this paper, we explain the practical need for algebraic (equality-type) solutions, when we look for solutions for which both sides are equal. In situations when such a solution is not possible, we provide a justification for extended-zero solutions, in which we ignore intervals of the type [−a, a].
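
For the simplest interval operation, addition, the algebraic solution can be written down explicitly; the following sketch (ours, not the paper's general algorithm) shows both the formula and the case when no algebraic solution exists:

```python
def algebraic_solution_add(a, b):
    """Algebraic (equality-type) solution X of the interval equation
    A + X = B: an interval for which both sides are *equal as
    intervals*.  Componentwise, x_lo = b_lo - a_lo and
    x_hi = b_hi - a_hi; a solution exists only when B is at least
    as wide as A."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    x_lo, x_hi = b_lo - a_lo, b_hi - a_hi
    if x_lo > x_hi:
        return None  # no algebraic solution exists
    return (x_lo, x_hi)

x = algebraic_solution_add((1.0, 2.0), (4.0, 7.0))  # -> (3.0, 5.0)
# check: [1, 2] + [3, 5] = [4, 7], with equality of both endpoints
```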

Original file UTEP-CS-17-67 in pdf

Updated version UTEP-CS-17-67a in pdf

Technical Report UTEP-CS-17-66, June 2017

Efficient Algorithms for Synchronizing Localization Sensors under Interval Uncertainty

Raphael Voges, Bernardo Wagner, and Vladik Kreinovich

In this paper, we show that a practical need for synchronization of localization sensors leads to an interval-uncertainty problem. In principle, this problem can be solved by using the general linear programming algorithms, but this would take a long time -- and this time is not easy to decrease, e.g., by parallelization since linear programming is known to be provably hard to parallelize. To solve the corresponding problem, we propose more efficient and easy-to-parallelize algorithms.

Technical Report UTEP-CS-17-65, June 2017

Why Student Distributions? Why Matern's Covariance Model? A Symmetry-Based Explanation

Stephen Schoen, Gael Kermarrec, Boris Kargoll, Ingo Neumann, Olga Kosheleva, and Vladik Kreinovich

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 266-275.

In this paper, we show that the empirical successes of the Student distributions and of Matern's covariance models can be indirectly explained by a natural requirement of scale invariance -- that fundamental laws should not depend on the choice of physical units. Namely, while neither the Student distributions nor Matern's covariance models are themselves scale-invariant, they are the only ones which can be obtained by applying a scale-invariant combination function to scale-invariant functions.

Technical Report UTEP-CS-17-64, June 2017

What If We Do Not Know Correlations?

Michael Beer, Zitong Gong, Ingo Neumann, Songsak Sriboonchitta, and Vladik Kreinovich

Published in: Ly H. Anh, Le Si Dong, Vladik Kreinovich, and Nguyen Ngoc
Thach (eds.), *Econometrics for Financial Applications*, Springer
Verlag, Cham, Switzerland, 2018, pp. 78-85.

It is well known how to estimate the uncertainty of the result y of data processing if we know the correlations between all the inputs. Sometimes, however, we have no information about the correlations. In this case, instead of a single value σ of the standard deviation of the result, we get a whole range [σ] of possible values. In this paper, we show how to compute this range.
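
For the special case of two inputs, the range in question can be computed explicitly: the variance of y = c1*x1 + c2*x2 is linear in the unknown correlation ρ ∈ [−1, 1], so its extreme values are attained at ρ = ±1. A sketch (ours; the paper treats the general case):

```python
def sigma_range_two_inputs(c1, s1, c2, s2):
    """Range of the standard deviation of y = c1*x1 + c2*x2 when the
    correlation rho between x1 and x2 is unknown (rho in [-1, 1]):
    the variance c1^2 s1^2 + c2^2 s2^2 + 2 rho c1 c2 s1 s2 is linear
    in rho, so its extremes are attained at rho = -1 and rho = +1."""
    a, b = abs(c1) * s1, abs(c2) * s2
    return (abs(a - b), a + b)

lo, hi = sigma_range_two_inputs(1.0, 3.0, 1.0, 4.0)  # -> (1.0, 7.0)
```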

Technical Report UTEP-CS-17-63, June 2017

Possible Explanation of Empirical Values of the Matern Smoothness Parameter for the Temporal Covariance of GPS Measurements

Gael Kermarrec, Steffen Schoen, and Vladik Kreinovich

Published in *Applied Mathematical Sciences*, 2017, Vol. 11,
No. 35, pp. 1733-1737.

The measurement errors of GPS measurements are largely due to the atmosphere, and the unpredictable part of these errors is due to the unpredictable (random) atmospheric phenomena, i.e., to turbulence. Turbulence-generated measurement errors should correspond to the smoothness parameter ν = 5/6 in the Matern covariance model. Because of this, we expected the empirical values of this smoothness parameter to be close to 5/6. When we estimated ν based on measurement results, we indeed got values close to 5/6, but interestingly, all our estimates were actually close to 1 (and slightly larger than 1). In this paper, we provide a possible explanation for this empirical phenomenon. This explanation is based on the fact that in the sensors, the quantity of interest is usually transformed into a current, and in electric circuits, current is a smooth function of time.

Technical Report UTEP-CS-17-62, June 2017

Why West-to-East Jetlag Is More Severe: A Simple Qualitative Explanation

Olga Kosheleva and Vladik Kreinovich

Published in *Journal of Innovative
Technology and Education*, 2017, Vol. 4, No. 1, pp. 113-116.

Empirical data shows that the jetlag when traveling west-to-east feels more severe than the jetlag when traveling east-to-west. At present, the only explanation of this empirical phenomenon is based on a complex dynamical systems model. In this paper, we provide a simple alternative explanation. This explanation also explains -- on the qualitative level -- the empirical data on relative severity of different jetlags.

Technical Report UTEP-CS-17-61, June 2017

Maximum Entropy as a Feasible Way to Describe Joint Distributions in Expert Systems

Thongchai Dumrongpokaphan, Vladik Kreinovich, and Hung T. Nguyen

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 35-44.

In expert systems, we elicit the probabilities of different statements from the experts. However, to adequately use the expert system, we also need to know the probabilities of different propositional combinations of the experts' statements -- i.e., we need to know the corresponding joint distribution. The problem is that there are exponentially many such combinations, and it is not practically possible to elicit all their probabilities from the experts. So, we need to estimate this joint distribution based on the available information. For this purpose, many practitioners use heuristic approaches -- e.g., the t-norm approach of fuzzy logic. However, this is a particular case of a situation for which the maximum entropy approach has been invented, so why not use the maximum entropy approach? The problem is that in this case, the usual formulation of the maximum entropy approach requires maximizing a function with exponentially many unknowns -- a task which is, in general, not practically feasible. In this paper, we show that in many reasonable examples, the corresponding maximum entropy problem can be reduced to an equivalent problem with a much smaller (and feasible) number of unknowns -- a problem which is, therefore, much easier to solve.
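
A classical special case of such a reduction: if all we know are the marginals P(A) and P(B) of two statements, the maximum-entropy joint distribution is the one in which A and B are independent, so the joint probabilities collapse to products of marginals. A brute-force numerical check of this fact (our illustration, not the paper's reduction):

```python
import math

def maxent_joint_prob(p_a, p_b, steps=20000):
    """Given only the marginals P(A) = p_a and P(B) = p_b of two
    statements, the joint distribution has a single free parameter
    p11 = P(A & B).  This brute-force scan finds the p11 that
    maximizes the entropy of the four joint probabilities; MaxEnt
    is known to yield independence, p11 = p_a * p_b."""
    lo = max(0.0, p_a + p_b - 1.0)  # feasibility bounds on P(A & B)
    hi = min(p_a, p_b)

    def entropy(p11):
        probs = [p11, p_a - p11, p_b - p11, 1.0 - p_a - p_b + p11]
        return -sum(p * math.log(p) for p in probs if p > 0.0)

    grid = (lo + (hi - lo) * k / steps for k in range(steps + 1))
    return max(grid, key=entropy)

p11 = maxent_joint_prob(0.6, 0.5)  # close to 0.6 * 0.5 = 0.3
```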

Technical Report UTEP-CS-17-60, June 2017

MaxEnt-Based Explanation of Why Financial Analysts Systematically Under-Predict Companies' Performance

Vladik Kreinovich and Songsak Sriboonchitta

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 29-34.

Several studies have shown that financial analysts systematically under-predict the companies' performance, so that quarter after quarter, 70-75% of the companies outperform these predictions. This percentage remains the same whether the economy is in a boom or in a recession, whether we are in a period of strong or weak regulations. In this paper, we provide a possible Maximum Entropy-based explanation for this empirical phenomenon -- an explanation rooted in the fact that financial analysts mostly analyze financial data, while to get a more accurate prediction, it is important to go deeper, into the technical issues underlying the companies' functioning.

Technical Report UTEP-CS-17-59, June 2017

Entropy as a Measure of Average Loss of Privacy

Luc Longpre, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 7-15.

Privacy means that
not everything about a person is known, and that we need to ask
additional questions to get the full information about the person.
It therefore seems reasonable to gauge the degree of privacy in
each situation by the average number of binary ("yes"-"no")
questions that we need to ask to determine the full information --
which is exactly Shannon's entropy. The problem with this idea is
that it is possible, by asking two binary questions -- and thus,
strictly speaking, getting only two bits of information -- to
sometimes learn a large amount of information. In this paper, we
show that while entropy is not always an adequate measure of the
*absolute* loss of privacy, it is a good idea for gauging the
*average* loss of privacy. To properly evaluate different
privacy-preserving schemes, we also propose to supplement the
average privacy loss with the standard deviation of the privacy loss
-- to see how much the actual privacy loss can deviate from its
average value.

Technical Report UTEP-CS-17-58, June 2017

Simplest Polynomial for Which Naive (Straightforward) Interval Computations Cannot Be Exact

Olga Kosheleva, Vladik Kreinovich, and Songsak Sriboonchitta

One of the main problems of interval computations is computing the range of a given function over given intervals. It is known that naive interval computations always provide an enclosure for the desired range. Sometimes -- e.g., for single-use expressions -- naive interval computations compute the exact range. Sometimes, we do not get the exact range when we apply naive interval computations to the original expression, but we get the exact range if we apply naive interval computations to an equivalent reformulation of the original expression. For some other functions -- including some polynomials -- we do not get the exact range no matter how we reformulate the original expression. In this paper, we are looking for the simplest of such polynomials -- simplest in several reasonable senses: that it depends on the smallest possible number of variables, that it has the smallest possible number of monomials, that it has the smallest degree, etc. We then prove that among all polynomials for which naive interval computations cannot be exact, there exists a polynomial which is the simplest in all these senses.
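
The polynomial identified in the paper is not reproduced in the abstract; the following sketch merely illustrates the underlying dependency problem, on the textbook example f(x) = x − x², where naive interval computations overestimate the exact range [0, 0.25]:

```python
def interval_sub(a, b):
    """[a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]"""
    return (a[0] - b[1], a[1] - b[0])

def interval_mul(a, b):
    """[a1, a2] * [b1, b2]: min/max over the four endpoint products"""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# f(x) = x - x^2 on [0, 1]: the exact range is [0, 0.25], but naive
# interval computations treat the two occurrences of x as independent:
x = (0.0, 1.0)
naive = interval_sub(x, interval_mul(x, x))  # -> (-1.0, 1.0), an enclosure
```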

Technical Report UTEP-CS-17-57, June 2017

Updated version UTEP-CS-17-57a, June 2017

Taking Into Account Interval (and Fuzzy) Uncertainty Can Lead to More Adequate Statistical Estimates

Ligang Sun, Hani Dbouk, Ingo Neumann, Steffen Schoen, and Vladik Kreinovich

Published in: Patricia Melin, Oscar
Castillo, Janusz Kacprzyk, Marek Reformat, and William Melek
(eds.), *Fuzzy Logic in Intelligent System Design: Theory and
Applications*, Springer Verlag, Cham, Switzerland, 2018,
pp. 371-381.

Traditional statistical data processing techniques (such as Least Squares) assume that we know the probability distributions of measurement errors. Often, we do not have full information about these distributions. In some cases, all we know is the bound of the measurement error; in such cases, we can use known interval data processing techniques. Sometimes, this bound is fuzzy; in such cases, we can use known fuzzy data processing techniques.

However, in many practical situations, we know the probability distribution of the random component of the measurement error and we know the upper bound -- numerical or fuzzy -- on the measurement error's systematic component. For such situations, no general data processing technique is currently known. In this paper, we describe general data processing techniques for such situations, and we show that taking into account interval and fuzzy uncertainty can lead to more adequate statistical estimates.

Original file UTEP-CS-17-57 in pdf

Updated version UTEP-CS-17-57a in
pdf

Technical Report UTEP-CS-17-56, June 2017

How to Gauge the Accuracy of Fuzzy Control Recommendations: A Simple Idea

Patricia Melin, Oscar Castillo, Andrzej Pownuk, Olga Kosheleva, and Vladik Kreinovich

Published in: Patricia Melin, Oscar
Castillo, Janusz Kacprzyk, Marek Reformat, and William Melek
(eds.), *Fuzzy Logic in Intelligent System Design: Theory and
Applications*, Springer Verlag, Cham, Switzerland, 2018,
pp. 287-292.

Fuzzy control is based on approximate expert information, so its recommendations are also approximate. However, the traditional fuzzy control algorithms do not tell us how accurate these recommendations are. In contrast, for probabilistic uncertainty, there is a natural measure of accuracy: namely, the standard deviation. In this paper, we show how to extend this idea from probabilistic to fuzzy uncertainty and thus, to come up with a reasonable way to gauge the accuracy of fuzzy control recommendations.

Technical Report UTEP-CS-17-55, June 2017

Normalization-Invariant Fuzzy Logic Operations Explain Empirical Success of Student Distributions in Describing Measurement Uncertainty

Hamza Alkhatib, Boris Kargoll, Ingo Neumann, and Vladik Kreinovich

Published in: Patricia Melin, Oscar
Castillo, Janusz Kacprzyk, Marek Reformat, and William Melek
(eds.), *Fuzzy Logic in Intelligent System Design: Theory and
Applications*, Springer Verlag, Cham, Switzerland, 2018,
pp. 300-306.

In engineering practice, measurement errors are usually described by normal distributions. However, in some cases, the distribution is heavy-tailed and thus, not normal. In such situations, empirical evidence shows that the Student distributions are most adequate. The corresponding recommendation -- based on empirical evidence -- is included in the International Organization for Standardization guide. In this paper, we explain this empirical fact by showing that a natural fuzzy-logic-based formalization of commonsense requirements leads exactly to the Student distributions.

Technical Report UTEP-CS-17-54, June 2017

Are Permanent or Temporary Teams More Efficient: A Possible Explanation of the Empirical Data

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2017, Vol. 4, No. 1, pp. 113-116.

It is known that in education, stable (long-term) student teams are more effective than temporary (short-term) ones. It turned out that the same phenomenon is true for workers working on a long-term project. However, somewhat surprisingly, for small-scale projects, the opposite is true: teams without any prior collaboration experience are more successful. Moreover, it turns out that if we combine in a team members with prior collaboration experience and members without such experience, the efficiency of the team gets even lower. In this paper, we provide a possible explanation for this strange empirical phenomenon.

Technical Report UTEP-CS-17-53, June 2017

Updated version UTEP-CS-17-53a, September 2017

Extended version UTEP-CS-17-53b, December 2017

How Accurate Are Expert Estimations of Correlation?

Michael Beer, Zitong Gong, Francisco Alejandro Diaz De La O, and Vladik Kreinovich

Published in *Proceedings of the 2017 IEEE Symposium on
Computational Intelligence for Engineering Solutions CIES'2017*,
Honolulu, Hawaii, November 27 - December 1, 2017, pp. 883-891.

In many practical situations, it is important to know the correlation between different quantities -- finding correlations helps to gain insights into various relationships and phenomena, and helps to inform analysts. Often, there is not enough empirical data to experimentally determine all possible correlations. In such cases, a natural idea is to supplement this situation with expert estimates. Expert estimates are rather crude. So, to decide whether to act based on these estimates, it is desirable to know how accurate these estimates are. In this paper, we propose several techniques for gauging this accuracy.

Original file UTEP-CS-17-53 in pdf

Updated version UTEP-CS-17-53a in pdf

Extended version UTEP-CS-17-53b in pdf

Technical Report UTEP-CS-17-52, June 2017

In Education, Delayed Feedback Is Often More Efficient Than Immediate Feedback: A Geometric Explanation

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2017, Vol. 4, No. 1, pp. 109-112.

Feedback is important in education. It is commonly believed that immediate feedback is very important. That is why instructors often stay late at night grading students' assignments -- to make sure that the students get their feedback as early as possible. However, surprisingly, experiments show that in many cases, delayed feedback is more efficient than the immediate one. In this paper, we provide a simple geometric explanation of this seemingly counter-intuitive empirical phenomenon.

Technical Report UTEP-CS-17-51, June 2017

A Bad Plan Is Better Than No Plan: A Theoretical Justification of an Empirical Observation

Songsak Sriboonchitta and Vladik Kreinovich

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 266-272.

In his 2014 book "Zero to One", software mogul Peter Thiel lists the lessons he learned from his business practice. Most of these lessons make intuitive sense, with one exception -- his observation that "a bad plan is better than no plan" seems counterintuitive. In this paper, we provide a possible theoretical explanation for this somewhat counterintuitive empirical observation.

Technical Report UTEP-CS-17-50, June 2017

Quantitative Justification for the Gravity Model in Economics

Vladik Kreinovich and Songsak Sriboonchitta

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 214-221.

The gravity model in economics describes the trade flow
between two countries as a function of their Gross Domestic
Products (GDPs) and the distance between them. This model is
motivated by the *qualitative* similarity between the desired
dependence and the dependence of the gravity force (or potential
energy) between the two bodies on their masses and on the distance
between them. In this paper, we provide a *quantitative*
justification for this economic formula.
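
For reference, the basic (Tinbergen-style) form of the gravity model mentioned above can be sketched as follows; the constant and, in empirical studies, the exponents on each factor are fitted to data:

```python
def gravity_trade_flow(gdp1, gdp2, distance, g=1.0):
    """Basic gravity model of trade: the flow between two countries is
    proportional to the product of their GDPs divided by the distance
    between them.  The proportionality constant g is fitted to data."""
    return g * gdp1 * gdp2 / distance

flow = gravity_trade_flow(2.0, 3.0, 6.0)  # -> 1.0 (in the chosen units)
```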

Technical Report UTEP-CS-17-49, June 2017

How to Estimate Statistical Characteristics Based on a Sample: Nonparametric Maximum Likelihood Approach Leads to Sample Mean, Sample Variance, etc.

Vladik Kreinovich and Thongchai Dumrongpokaphan

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 192-197.

In many practical situations, we need to estimate different statistical characteristics based on a sample. In some cases, we know that the corresponding probability distribution belongs to a known finite-parametric family of distributions. In such cases, a reasonable idea is to use the Maximum Likelihood method to estimate the corresponding parameters, and then to compute the value of the desired statistical characteristic for the distribution with these parameters.

In some practical situations, we do not know any family containing the unknown distribution. We show that in such nonparametric cases, the Maximum Likelihood approach leads to the use of sample mean, sample variance, etc.
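
The resulting estimates are the familiar ones; note in particular that the Maximum Likelihood route leads to the 1/n (not the 1/(n−1)) version of the sample variance:

```python
def sample_mean_and_variance(xs):
    """Sample mean and the maximum-likelihood (1/n) sample variance --
    the estimates that the nonparametric Maximum Likelihood approach
    leads to."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

m, v = sample_mean_and_variance([1.0, 2.0, 3.0, 4.0])  # -> (2.5, 1.25)
```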

Technical Report UTEP-CS-17-48, June 2017

Kuznets Curve: A Simple Dynamical System-Based Explanation

Thongchai Dumrongpokaphan and Vladik Kreinovich

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 177-181.

In the 1950s, a future Nobelist Simon Kuznets discovered the following phenomenon: as a country's economy improves, inequality first grows but then decreases. In this paper, we provide a simple dynamical system-based explanation for this empirical phenomenon.

Technical Report UTEP-CS-17-47, June 2017

How to Gauge Accuracy of Processing Big Data: Teaching Machine Learning Techniques to Gauge Their Own Accuracy

Vladik Kreinovich, Thongchai Dumrongpokaphan, Hung T. Nguyen, and Olga Kosheleva

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 198-204.

When the amount of data is reasonably small, we can usually fit this data to a simple model and use the traditional statistical methods both to estimate the parameters of this model and to gauge this model's accuracy. For big data, it is often no longer possible to fit them by a simple model. Thus, we need to use generic machine learning techniques to find the corresponding model. The current machine learning techniques estimate the values of the corresponding parameters, but they usually do not gauge the accuracy of the corresponding general non-linear model. In this paper, we show how to modify the existing machine learning methodology so that it will not only estimate the parameters, but also estimate the accuracy of the resulting model.

Technical Report UTEP-CS-17-46, June 2017

How Better Are Predictive Models: Analysis on the Practically Important Example of Robust Interval Uncertainty

Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, and Olga Kosheleva

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and
Nopasit Chakpitak (Eds.), *Predictive Econometrics and Big
Data*, Springer Verlag, Cham, Switzerland, 2018, pp. 205-213.

One of the main applications of science and engineering
is to predict future value of different quantities of interest. In
the traditional statistical approach, we first use observations to
estimate the parameters of an appropriate model, and then use the
resulting estimates to make predictions. Recently, a relatively
new *predictive* approach has been actively promoted, the
approach where we make predictions directly from observations. It
is known that in general, while the predictive approach requires
more computations, it leads to more accurate predictions. In this
paper, on the practically important example of robust interval
uncertainty, we analyze how much more accurate the predictive
approach is. Our analysis shows that predictive models are indeed
much more accurate: asymptotically, they lead to estimates which
are √n times more accurate, where $n$ is the number of estimated
parameters.

Technical Report UTEP-CS-17-45, June 2017

How to Get Beyond Uniform When Applying MaxEnt to Interval Uncertainty

Songsak Sriboonchitta and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2017, Vol. 15,
Special Issue
on Entropy in Econometrics, pp. 17-27.

In many practical situations, the Maximum Entropy (MaxEnt) approach leads to reasonable distributions. However, in an important case when all we know is that the value of a random variable is somewhere within the interval, this approach leads to a uniform distribution on this interval -- while our intuition says that we should have a distribution whose probability density tends to 0 when we approach the interval's endpoints. In this paper, we show that in most cases of interval uncertainty, we have additional information, and if we account for this additional information when applying MaxEnt, we get distributions which are in perfect accordance with our intuition.

Technical Report UTEP-CS-17-44, June 2017

A Thought on Refactoring Java Loops Using Java 8 Streams

Khandoker Rahad, Zejing Cao, and Yoonsik Cheon

Java 8 has introduced a new abstraction called a stream to represent an immutable sequence of elements and to provide a variety of operations to be executed on the elements in series or in parallel. By processing a collection of data in a declarative way, it enables one to write more concise and clean code that can also leverage multi-core architectures without needing a single line of multithreaded code to be written. In this document, we describe our preliminary work on systematically refactoring loops with Java 8 streams to produce more concise and clean code. Our idea is to adapt existing work on analyzing loops and deriving their specifications written in a functional program verification style. Our extension is to define a set of transformation rules for loop patterns, one for each pattern, by decomposing the derived specification function into a series of stream operations.

Technical Report UTEP-CS-17-43, May 2017

Attraction-Repulsion Forces Between Biological Cells: A Theoretical Explanation of Empirical Formulas

Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

Published in *Proceedings of the 10th International Workshop on Constraint
Programming and Decision Making CoProd'2017*, El Paso, Texas,
November 3, 2017, pp. 28-32.

Biological cells attract and repel each other: if they get too close to each other, they repel each other, and if they get too far away from each other, they attract each other. There are empirical formulas that describe the dependence of the corresponding forces on the distance between the cells. In this paper, we provide a theoretical explanation for these empirical formulas.

Technical Report UTEP-CS-17-42, May 2017

A Natural Feasible Algorithm That Checks Satisfiability of 2-CNF Formulas and, if the Formula Is Satisfiable, Finds a Satisfying Vector

Olga Kosheleva and Vladik Kreinovich

Published in the *Proceedings of International Forum in Mathematics
Education*, Kazan, Russia, October 18-22, 2017, Vol. 2,
pp. 186-188.

One of the main results in Theory of Computation courses is the
proof that propositional satisfiability is NP-complete. This means
that, unless P = NP (which most computer scientists believe to be
impossible), no feasible algorithm is possible for solving
propositional satisfiability problems. This result is usually
proved on the example of 3-CNF formulas, i.e., formulas of the
type C_{1} & ... & C_{m}, where each clause
C_{i} has the form a \/ b or a \/ b \/ c, with no more than
three literals -- i.e., propositional variables v_{i} or
their negations ~v_{i}. Textbooks usually mention that for
2-CNF formulas -- in which every clause has at most 2 literals --
the corresponding problem can be solved by a feasible algorithm.
From the pedagogical viewpoint, however, the problem with known
feasible algorithms for 2-CNF formulas is that they are based on
clever tricks. In this paper, we describe a natural feasible
algorithm for solving 2-CNF, an algorithm whose ideas are similar
to Gauss elimination in linear algebra.
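
The paper's Gauss-elimination-like algorithm is described in the report itself; for comparison, here is one standard feasible 2-SAT algorithm, based on tentatively assigning a variable and propagating the implications of the clauses (a sketch; the literal encoding is ours):

```python
def solve_2sat(n_vars, clauses):
    """Decide satisfiability of a 2-CNF formula and, if satisfiable,
    return a satisfying vector.  Literals are encoded as +i / -i for
    v_i / ~v_i (i >= 1); each clause is a pair of literals.
    Method: for each unassigned variable, tentatively set it true and
    propagate the clause implications; if that fails, the opposite
    value is forced; if both fail, the formula is unsatisfiable."""

    def propagate(lit, assign):
        # try to add "lit is true"; return the extended assignment
        # (a new dict) or None on contradiction
        new = dict(assign)
        stack = [lit]
        while stack:
            l = stack.pop()
            var, val = abs(l), l > 0
            if var in new:
                if new[var] != val:
                    return None  # contradiction
                continue
            new[var] = val
            for a, b in clauses:
                # in a clause (a or b), a false literal forces the other
                for x, y in ((a, b), (b, a)):
                    if abs(x) in new and new[abs(x)] != (x > 0):
                        stack.append(y)
        return new

    assign = {}
    for v in range(1, n_vars + 1):
        if v in assign:
            continue
        trial = propagate(v, assign) or propagate(-v, assign)
        if trial is None:
            return None  # unsatisfiable
        assign = trial
    return [assign[v] for v in range(1, n_vars + 1)]

# (v1 or v2) & (~v1 or v3) & (~v2 or ~v3) is satisfiable:
solution = solve_2sat(3, [(1, 2), (-1, 3), (-2, -3)])
```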

Technical Report UTEP-CS-17-41, May 2017

How to Teach Implication

Martha Osegueda Escobar, Olga Kosheleva, and Vladik Kreinovich

Published in the *Proceedings of International Forum in Mathematics
Education*, Kazan, Russia, October 18-22, 2017, Vol. 2,
pp. 193-195.

Logical implication is a somewhat counter-intuitive notion. For students, it is difficult to understand why a false statement implies everything. In this paper, we present a simple pedagogical way to make logical implication more intuitive.

Technical Report UTEP-CS-17-40, May 2017

Maybe the Usual Students’ Practice of Cramming For a Test Makes Sense: A Mathematical Analysis

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in the *Proceedings of International Forum in Mathematics
Education*, Kazan, Russia, October 18-22, 2017, Vol. 2,
pp. 195-198.

We always teach students that cramming for a test is a bad idea, that they should study at the same speed throughout the semester – but many still cram. We ourselves are not that different: when we prepare papers for a conference, we often “cram” in the last days before the deadline instead of working with a regular speed for the whole time before the conference. The ubiquity of cramming makes us think that maybe it is not necessarily always a bad idea. And indeed, a simple model of a study process shows that an optimal solution often involves some cramming – to be more precise, a study schedule in which in some periods we study much more intensely than in other periods is often more efficient than studying at the same speed.

Technical Report UTEP-CS-17-39, May 2017

No Idea Is a Bad Idea: A Theoretical Explanation

Christian Servin and Vladik Kreinovich

Published in *Journal of Innovative Technology and
Education*, 2017, Vol. 4, No. 1, pp. 97-102.

Many business publications state that no idea is a bad idea: even if the idea is, at first glance, not helpful, there are usually some aspects of this idea which are helpful. Usually, this statement is based on the experience of the author, and it is given without any theoretical explanation. In this paper, we provide a theoretical explanation for this statement.

Technical Report UTEP-CS-17-38, May 2017

Updated version UTEP-CS-17-38a, July 2017

Updated version UTEP-CS-17-38b, September 2017

What Decision to Make In a Conflict Situation under Interval Uncertainty: Efficient Algorithms for the Hurwicz Approach

Bartlomiej Jacek Kubica, Andrzej Pownuk, and Vladik Kreinovich

To appear in *Proceedings of the 12th International Conference on
Parallel Processing and Applied Mathematics PPAM'17*, Lublin,
Poland, September 10-13, 2017

In this paper, we show how to take interval uncertainty into account when solving conflict situations. Algorithms for conflict situations under interval uncertainty are known under the assumption that each side of the conflict maximizes its worst-case expected gain. However, it is known that the more general Hurwicz approach provides a more adequate description of decision making under uncertainty. In this approach, each side maximizes a convex combination of the worst-case and the best-case expected gains. In this paper, we describe how to resolve conflict situations under the general Hurwicz approach to interval uncertainty.
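
The Hurwicz criterion itself is easy to state: a decision maker with optimism parameter α ∈ [0, 1] rates an interval-valued gain [worst, best] as α·best + (1 − α)·worst. A minimal sketch (ours; the paper's conflict-resolution algorithms are more involved):

```python
def hurwicz_value(worst, best, alpha):
    """Hurwicz criterion: rate an interval-valued gain [worst, best]
    as alpha*best + (1 - alpha)*worst, where alpha in [0, 1] is the
    decision maker's optimism level (alpha = 0 is pure worst-case)."""
    return alpha * best + (1.0 - alpha) * worst

# choosing between two actions whose gains are known only as intervals
actions = {"cautious": (4.0, 6.0), "risky": (2.0, 10.0)}
alpha = 0.3
choice = max(actions, key=lambda k: hurwicz_value(*actions[k], alpha))
# with alpha = 0.3: cautious -> 4.6, risky -> 4.4, so "cautious" wins
```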

Original file UTEP-CS-17-38 in pdf

Updated version UTEP-CS-17-38a in pdf

Updated version UTEP-CS-17-38b in pdf

Technical Report UTEP-CS-17-37, April 2017

Updated version UTEP-CS-17-37a, July 2017

In System Identification, Interval (and Fuzzy) Estimates Can Lead to Much Better Accuracy than the Traditional Statistical Ones: General Algorithm and Case Study

Sergey I. Kumkov, Vladik Kreinovich, and Andrzej Pownuk

Published in *Proceedings of the IEEE Conference on
Systems, Man, and Cybernetics SMC'2017*, Banff, Canada,
October 5-8, 2017, pp. 367-372.

In many real-life situations, we know the upper bound of the measurement errors, and we also know that the measurement error is the joint result of several independent small effects. In such cases, due to the Central Limit Theorem, the corresponding probability distribution is close to Gaussian, so it seems reasonable to apply the standard Gaussian-based statistical techniques to process this data -- in particular, when we need to identify a system. Yes, in doing this, we ignore the information about the bounds, but since the probability of exceeding them is small, we do not expect this to make a big difference in the result. Surprisingly, it turns out that in some practical situations, we get much more accurate estimates if we, vice versa, take into account the bounds -- and ignore all the information about the probabilities. In this paper, we explain the corresponding algorithms, and we show, on a practical example, that using these algorithms can indeed lead to a drastic improvement in estimation accuracy.
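To illustrate the contrast described in this abstract, here is a small numerical sketch (not the paper's algorithm or its case study; all numbers below are made up): for measurement errors uniformly distributed on [-D, D], intersecting the guaranteed enclosures [X_i - D, X_i + D] localizes the quantity far more tightly than averaging does.

```python
import random

# Illustrative sketch: bounded errors, interval intersection vs. sample mean.
random.seed(0)
x_true, D, n = 5.0, 1.0, 10000
measurements = [x_true + random.uniform(-D, D) for _ in range(n)]

mean_estimate = sum(measurements) / n        # standard statistical estimate
lower = max(X - D for X in measurements)     # intersection of the enclosures
upper = min(X + D for X in measurements)     # [X_i - D, X_i + D]
interval_estimate = (lower + upper) / 2

print(abs(mean_estimate - x_true))      # error on the order of D / sqrt(3 n)
print(abs(interval_estimate - x_true))  # error on the order of D / n
```

Since every error satisfies |e_i| <= D, the intersection is guaranteed to contain the true value, and for bounded (e.g., uniform) errors its width shrinks like 1/n rather than 1/sqrt(n).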

Original file UTEP-CS-17-37 in pdf

Updated version UTEP-CS-17-37a in pdf

Technical Report UTEP-CS-17-36, April 2017

Updated version UTEP-CS-17-36a, July 2017

Soft Computing Approach to Detecting Discontinuities: Seismic Analysis and Beyond

Solymar Ayala Cortez, Aaron A. Velasco, and Vladik Kreinovich

Published in *Proceedings of the IEEE Conference on
Systems, Man, and Cybernetics SMC'2017*, Banff, Canada,
October 5-8, 2017, pp. 363-366.

Starting from Newton, the main equations of physics are differential equations -- which implicitly implies that all the corresponding processes are differentiable -- and thus, continuous. However, in practice, we often encounter processes or objects that change abruptly in time or in space. In physics, we have phase transitions when the properties change abruptly. In geosciences, we have sharp boundaries between different layers and discontinuities representing faults. In many such situations, it is important to detect these discontinuities. In some cases, we know the equations, but in many other cases, we do not know the equations, we only know that the corresponding process is discontinuous. In this paper, we show that by applying soft computing techniques to translate this imprecise knowledge into a precise strategy, we can get an efficient algorithm for detecting discontinuities; its efficiency is shown on the example of detecting a fault based on seismic signals.

Original file UTEP-CS-17-36 in pdf

Updated version UTEP-CS-17-36a in pdf

Technical Report UTEP-CS-17-35, April 2017

Updated version UTEP-CS-17-35a, July 2017

Prediction of Volcanic Eruptions: Case Study of Rare Events in Chaotic Systems with Delay

Justin Parra, Olac Fuentes, Elizabeth Anthony, and Vladik Kreinovich

Published in *Proceedings of the IEEE Conference on
Systems, Man, and Cybernetics SMC'2017*, Banff, Canada,
October 5-8, 2017, pp. 351-356.

Volcanic eruptions can be disastrous; it is therefore important to be able to predict them as accurately as possible. Theoretically, we can use general machine learning techniques for such predictions. However, in general, without any prior information, such methods require an unrealistic amount of computation time. It is therefore desirable to look for additional information that would enable us to speed up the corresponding computations. In this paper, we provide empirical evidence that volcanic systems exhibit chaotic and delayed behavior. We also show that in general (and in volcanic predictions in particular), we can speed up the corresponding predictions if we take into account the chaotic and delayed character of the corresponding system.

Original file UTEP-CS-17-35 in pdf

Updated version UTEP-CS-17-35a in pdf

Technical Report UTEP-CS-17-34, April 2017

A Symmetry-Based Explanation for an Empirical Model of Fatigue Damage of Composite Materials

Pedro Barragan Olague and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2018, Vol. 12,
No. 3, pp. 176-179.

In this paper, we provide a symmetry-based explanation for an empirical formula that describes fatigue damage of composite materials.

Technical Report UTEP-CS-17-33, April 2017

Updated version UTEP-CS-17-33a, May 2017

Isn't Every Sufficiently Complex Logic Multi-Valued Already: Lindenbaum-Tarski Algebra and Fuzzy Logic Are Both Particular Cases of the Same Idea

Andrzej Pownuk and Vladik Kreinovich

Published in *Proceedings of
the Joint 17th Congress of International Fuzzy Systems Association
and 9th International Conference on Soft Computing and Intelligent
Systems*, Otsu, Japan, June 27-30, 2017.

Usually, fuzzy logic (and multi-valued logics in general) is viewed as drastically different from the usual 2-valued logic. In this paper, we show that while on the surface, there indeed seems to be a major difference, a more detailed analysis shows that even in the theories based on the 2-valued logic, there naturally appear constructions which are, in effect, multi-valued, constructions which are very close to fuzzy logic.

Original file UTEP-CS-17-33 in pdf

Updated version UTEP-CS-17-33a in pdf

Technical Report UTEP-CS-17-32, April 2017

The Onsager Conjecture: A Pedagogical Explanation

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 44, pp. 125-129.

In 1949, Nobelist Lars Onsager considered liquid flows with
velocities changing as r^{α} for spatial points at distance
r, and conjectured that the threshold value α = 1/3
separates the two possible regimes: for α > 1/3 energy is
always preserved, while for α < 1/3 energy is possibly not
preserved. In this paper, we provide a simple pedagogical
explanation for this conjecture.

Technical Report UTEP-CS-17-31, April 2017

Negotiations vs. Confrontation: A Possible Explanation of the Empirical Results

Olga Kosheleva and Vladik Kreinovich

Published in *Journal of Innovative Technology and
Education*, 2017, Vol. 4, No. 1, pp. 77-81.

A recent book promoting negotiations as an alternative to confrontations cites the empirical evidence that in business situations, confrontational attitude leads, on average, to a 75% loss in comparison with negotiations. An additional empirical fact is that only in 10% of the cases, negotiations are not possible and confrontation is inevitable.

Technical Report UTEP-CS-17-30, April 2017

A Short Note on Pitch, Interval, and Melody Matching Assessment

Eric Hanson, Hannah Baslee, and Eric Freudenthal

This short note describes a metric and procedure for assessing an individual's overall simple pitch and interval matching proficiency when singing.

Technical Report UTEP-CS-17-29, March 2017

Towards Predictive Statistics: A Pedagogical Explanation

Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2017, Vol. 4, No. 1, pp. 71-75.

Lately, in the area of applied statistics, several publications
have appeared that warn about the dangers of the inappropriate
application of statistics and remind the users that prediction is
the ultimate objective of statistical analysis. This trend is
known as *predictive statistics*. However, while the intended
message is aimed at the very general audience of practitioners and
researchers who apply statistics, many of these papers are not
easy to read, since they are either too technical or too
philosophical for the general reader. In this short paper, we
describe the main ideas and recommendations of predictive
statistics in -- hopefully -- clear terms.

Technical Report UTEP-CS-17-28, March 2017

Physical Induction Explains Why Over-Realistic Animation Sometimes Feels Creepy

Olga Kosheleva and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2017, Vol. 4, No. 1, pp. 65-70.

In the past, every step of movie animation towards greater realism was viewed positively. However, recently, as computer animation is becoming more and more realistic, some people perceive the resulting realism negatively, as creepy. Similarly, everyone used to welcome robots that looked and behaved somewhat like humans; however, lately, too-human-like robots have started causing a similar negative feeling of creepiness. There exist complex psychology-based explanations for this phenomenon. In this paper, we show that this empirical phenomenon can be naturally explained simply by physical induction -- the main way we cognize the world.

Technical Report UTEP-CS-17-27, March 2017

Contradictions Do Not Necessarily Make a Theory Inconsistent

Olga Kosheleva and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2017, Vol. 4, No. 1, pp. 59-64.

Some religious scholars claim that while the corresponding holy texts may be contradictory, they lead to a consistent set of ethical and behavioral recommendations. Is this logically possible? In this paper, somewhat surprisingly, we show that this is indeed possible: namely, we show that if we add, to statements about objects from a certain class, consequences of both contradictory abstract statements, we still retain a consistent theory. A more mundane example of the same phenomenon comes from mathematics: if we have a set-theoretical statement S which is independent of ZF and which is not equivalent to any arithmetic statement, then we can add both arithmetic statements derived from S and arithmetic statements derived from "not S" and still keep the resulting class of arithmetic statements consistent.

Technical Report UTEP-CS-17-26, March 2017

Why Stable Teams Are More Efficient in Education

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 44, pp. 120-124.

It is known that study groups speed up learning. Recent studies have shown that stable study groups are more efficient than shifting-membership groups. In this paper, we provide a theoretical explanation for this empirical observation.

Technical Report UTEP-CS-17-25, March 2017

Updated version UTEP-CS-17-25b, May 2017

From Fuzzy Universal Approximation to Fuzzy Universal Representation: It All Depends on the Continuum Hypothesis

Mahdokht Michelle Afravi and Vladik Kreinovich

Published in *Proceedings of
the Joint 17th Congress of International Fuzzy Systems Association
and 9th International Conference on Soft Computing and Intelligent
Systems*, Otsu, Japan, June 27-30, 2017.

It is known that fuzzy systems have a universal approximation
property. A natural question is: can this property be extended to
a universal *representation* property? Somewhat surprisingly, the
answer to this question depends on whether the following
*Continuum Hypothesis* holds: every infinite subset of the real
line has either the same number of elements as the real line
itself or as many elements as the natural numbers.

Original file UTEP-CS-17-25 in pdf

Updated version UTEP-CS-17-25b in pdf

Technical Report UTEP-CS-17-24, March 2017

Updated version UTEP-CS-17-24b, June 2017

Updated version UTEP-CS-17-24d, October 2017

Why Are FGM Copulas Successful: A Simple Explanation

Songsak Sriboonchitta and Vladik Kreinovich

Published in *Advances in Fuzzy Systems*, 2018, Vol. 2018,
Article ID 5872195

One of the most computationally convenient non-redundant ways to describe the dependence between two variables is to describe the corresponding copula. In many applications, a special class of copulas -- known as FGM copulas -- has turned out to be the most successful in describing the dependence between quantities. The main result of this paper is that these copulas are the fastest to compute, and this explains their empirical success.

As an auxiliary result, we also show that a similar explanation can be given in terms of fuzzy logic.
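For reference, the FGM family has a standard one-parameter form C(u, v) = u*v*(1 + theta*(1-u)*(1-v)) with theta in [-1, 1]. The following sketch (not from the paper) shows how cheap this formula is to evaluate and checks the copula boundary conditions.

```python
def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula: one multiplication-level formula."""
    assert -1.0 <= theta <= 1.0
    return u * v * (1 + theta * (1 - u) * (1 - v))

# boundary conditions that every copula must satisfy
assert fgm_copula(0.3, 1.0, 0.5) == 0.3   # C(u, 1) = u
assert fgm_copula(1.0, 0.7, 0.5) == 0.7   # C(1, v) = v
assert fgm_copula(0.3, 0.0, 0.5) == 0.0   # C(u, 0) = 0
# theta = 0 recovers the independence copula u * v
assert fgm_copula(0.3, 0.7, 0.0) == 0.3 * 0.7
```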

Original file UTEP-CS-17-24 in pdf

Updated version UTEP-CS-17-24b in pdf

Updated version UTEP-CS-17-24d in pdf

Technical Report UTEP-CS-17-23, March 2017

Updated version UTEP-CS-17-23a, May 2017

Fuzzy Sets As Strongly Consistent Random Sets

Kittawit Autchariyapanitkul, Hung T. Nguyen, and Vladik Kreinovich

Published in *Proceedings of
the Joint 17th Congress of International Fuzzy Systems Association
and 9th International Conference on Soft Computing and Intelligent
Systems*, Otsu, Japan, June 27-30, 2017.

It is known that from the purely mathematical viewpoint, fuzzy
sets can be interpreted as equivalence classes of random sets. This
interpretation helps to teach fuzzy techniques to statisticians
and also enables us to apply results about random sets to fuzzy
techniques. The problem with this interpretation is that it is too
complicated: a random set is not an easy notion, and classes of
random sets are even more complex. This complexity goes against
the spirit of fuzzy sets, whose purpose was to be simple and
intuitively clear. From this viewpoint, it is desirable to
simplify this interpretation. In this paper, we show that the
random-set interpretation of fuzzy techniques can indeed be
simplified: namely, we can show that fuzzy sets can be interpreted
not as classes, but as *strongly consistent* random sets (in
some reasonable sense). This is not yet at the desired level of
simplicity, but this new interpretation is much simpler than the
original one and thus, constitutes an important step towards the
desired simplicity.

Original file UTEP-CS-17-23 in pdf

Updated version UTEP-CS-17-23a in pdf

Technical Report UTEP-CS-17-22, March 2017

How to Deal with Uncertainties in Computing: from Probabilistic and Interval Uncertainty to Combination of Different Approaches, with Applications to Engineering and Bioinformatics

Vladik Kreinovich

Published in: Jolanta Mizera-Pietraszko, Ricardo Rodriguez
Jorge, Diego Moises Almazo Perez, and Pit Pichappan (eds.),
*Advances in Digital technologies. Proceedings of the Eighth
International Conference on the Applications of Digital
Information and Web Technologies ICADIWT'2017*, Ciudad Juarez,
Chihuahua, Mexico, March 29-31, 2017, IOS Press, Amsterdam, 2017,
pp. 3-15.

Most data processing techniques traditionally used in scientific and engineering practice are statistical. These techniques are based on the assumption that we know the probability distributions of measurement errors etc.

In practice, we often do not know the distributions; we only know the bound D on the measurement accuracy -- hence, after we get the measurement result X, the only information that we have about the actual (unknown) value x of the measured quantity is that x belongs to the interval [X − D, X + D]. Techniques for data processing under such interval uncertainty are called interval computations; these techniques have been developed since the 1950s.

In many practical problems, we have a combination of different types of uncertainty, where we know the probability distribution for some quantities, intervals for other quantities, and expert information for yet other quantities. The purpose of this paper is to describe the theoretical background for interval and combined techniques and to briefly describe the existing practical applications.
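The basic idea of interval computations can be sketched in a few lines (function names are illustrative, not from any standard library): each measurement yields a guaranteed enclosure [X − D, X + D], and arithmetic on enclosures yields enclosures for the results.

```python
def measurement_interval(X, D):
    """Guaranteed enclosure [X - D, X + D] for the actual value x."""
    return (X - D, X + D)

def interval_add(a, b):
    """Enclosure for the sum: endpoints add."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Enclosure for the product: min/max over the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

x = measurement_interval(10.0, 0.5)    # x in [9.5, 10.5]
y = measurement_interval(-2.0, 0.25)   # y in [-2.25, -1.75]
print(interval_add(x, y))   # (7.25, 8.75)
print(interval_mul(x, y))   # (-23.625, -16.625)
```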

Technical Report UTEP-CS-17-21, March 2017

Why Convex Optimization Is Ubiquitous and Why Pessimism Is Widely Spread

Angel F. Garcia Contreras, Martine Ceberio, and Vladik Kreinovich

Published in *Proceedings of the 10th International Workshop on Constraint
Programming and Decision Making CoProd'2017*, El Paso, Texas,
November 3, 2017, pp. 38-42.

In many practical applications, the objective function is convex. The use of convex objective functions makes optimization easier, but the ubiquity of such objective functions is a mystery: many practical optimization problems are not easy to solve, so it is not clear why the objective function -- whose main goal is to describe our needs -- would always describe easier-to-achieve goals. In this paper, we explain this ubiquity based on fundamental ideas about human decision making. This explanation also helps us explain why, in decision making under uncertainty, people often make pessimistic decisions, i.e., decisions based more on the worst-case scenarios.

Technical Report UTEP-CS-17-20, March 2017

Why Linear Interpolation?

Andrzej Pownuk and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 43, pp. 3-49.

Linear interpolation is the computationally simplest of all possible interpolation techniques. Interestingly, it works reasonably well in many practical situations, even in situations when the corresponding computational models are rather complex. In this paper, we explain this empirical fact by showing that linear interpolation is the only interpolation procedure that satisfies several reasonable properties such as consistency and scale-invariance.
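For illustration, here is the standard linear interpolation formula together with the two properties the abstract mentions (a minimal sketch; the specific numbers are made up):

```python
def linear_interpolation(x, x0, y0, x1, y1):
    """Value at x of the straight line through (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

# consistency: at an endpoint, interpolation reproduces the known value
print(linear_interpolation(2.0, 2.0, 5.0, 4.0, 9.0))    # 5.0
# at the midpoint of [2, 4], we get the average of 5 and 9
print(linear_interpolation(3.0, 2.0, 5.0, 4.0, 9.0))    # 7.0
# scale-invariance: rescaling the x-axis by 10 does not change the result
print(linear_interpolation(30.0, 20.0, 5.0, 40.0, 9.0))  # 7.0
```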

Technical Report UTEP-CS-17-19, March 2017

Derivation of Gross-Pitaevskii Version of Nonlinear Schroedinger Equation from Scale Invariance

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 43, pp. 35-42.

It is known that in the usual 3-D space, the Schroedinger equation can be derived from scale-invariance. In view of the fact that, according to modern physics, the actual dimension of proper space may be different from 3, it is desirable to analyze what happens in other spatial dimensions D. It turns out that while for D ≥ 3 we still get only the Schroedinger's equation, for D = 2, we also get the Gross-Pitaevskii version of a nonlinear Schroedinger equation that describes a quantum system of identical bosons, and for D = 1, we also get a new nonlinear version of the Schroedinger equation.

Technical Report UTEP-CS-17-18, March 2017

Can We Detect Crisp Sets Based Only on the Subsethood Ordering of Fuzzy Sets? Fuzzy Sets And/Or Crisp Sets Based on Subsethood of Interval-Valued Fuzzy Sets?

Christian Servin, Gerardo Muela, and Vladik Kreinovich

Published in: Patricia Melin, Oscar
Castillo, Janusz Kacprzyk, Marek Reformat, and William Melek
(eds.), *Fuzzy Logic in Intelligent System Design: Theory and
Applications*, Springer Verlag, Cham, Switzerland, 2018,
pp. 307-313.

Fuzzy sets are naturally ordered by the subsethood relation. If we only know which fuzzy set is a subset of which -- and have no access to the actual values of the corresponding membership functions -- can we detect which fuzzy sets are crisp? In this paper, we show that this is indeed possible. We also show that if we start with interval-valued fuzzy sets, then we can similarly detect type-1 fuzzy sets and crisp sets.

Technical Report UTEP-CS-17-17, February 2017

Fuzzy Systems Are Universal Approximators for Random Dependencies: A Simplified Proof

Mahdokht Afravi and Vladik Kreinovich

Published in *Proceedings of the 10th International Workshop on Constraint
Programming and Decision Making CoProd'2017*, El Paso, Texas,
November 3, 2017, pp. 14-18.

In many real-life situations, we do not know the actual dependence
y = f(x_{1}, ..., x_{n}) between the physical
quantities x_{i} and
y, we only know expert rules describing this dependence. These
rules are often described by using imprecise ("fuzzy") words
from natural language. Fuzzy techniques have been invented with
the purpose to translate these rules into a precise
dependence y = f(x_{1}, ..., x_{n}). For deterministic
dependencies y = f(x_{1}, ..., x_{n}), there are universal
approximation results according to which for each continuous
function on a bounded domain and for every ε > 0, there
exist fuzzy rules for which the resulting approximate dependence
y = F(x_{1}, ..., x_{n}) is ε-close to the
original function f(x_{1}, ..., x_{n}).

In practice, many dependencies are *random*, in the sense that
for each combination of the values x_{1}, ..., x_{n}, we may get
different values y with different probabilities. It has been
proven that fuzzy systems are universal approximators for such
random dependencies as well. However, the existing proofs are very
complicated and not intuitive. In this paper, we provide a
simplified proof of this universal approximation property.
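The deterministic universal-approximation setting can be illustrated with a tiny sketch (not the paper's proof or construction): rules of the form "if x is near c_k, then y is f(c_k)", with triangular membership functions on a grid, combined by a weighted-average defuzzification.

```python
import math

def triangular(x, center, width):
    """Triangular membership function with support [center - width, center + width]."""
    return max(0.0, 1.0 - abs(x - center) / width)

def fuzzy_rule_output(f, x, centers, width):
    """Weighted average of the rule outputs f(c_k), weighted by membership."""
    weights = [triangular(x, c, width) for c in centers]
    return sum(w * f(c) for w, c in zip(weights, centers)) / sum(weights)

centers = [i * 0.1 for i in range(64)]   # rule centers on [0, 6.3]
max_err = max(abs(fuzzy_rule_output(math.sin, x, centers, 0.1) - math.sin(x))
              for x in [0.05 + 0.1 * i for i in range(60)])
print(max_err)  # small; refining the grid makes the rule base epsilon-close
```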

Technical Report UTEP-CS-17-16, February 2017

Plans Are Worthless but Planning Is Everything: A Theoretical Explanation of Eisenhower's Observation

Angel F. Garcia Contreras, Martine Ceberio, and Vladik Kreinovich

Published in

The 1953-1961 US President Dwight D. Eisenhower emphasized that his experience as the Supreme Commander of the Allied Expeditionary Forces in Europe during the Second World War taught him that "plans are worthless, but planning is everything". This sounds contradictory: if plans are worthless, why bother with planning at all? In this paper, we show that Eisenhower's observation makes sense: while directly following the original plan in constantly changing circumstances is often not a good idea, the existence of a pre-computed original plan enables us to produce an almost-optimal strategy -- a strategy that would have been computationally difficult to produce on short notice without the pre-existing plan.

Technical Report UTEP-CS-17-15, February 2017

Why Mixture of Probability Distributions

Andrzej Pownuk and Vladik Kreinovich

Published in *International Journal of Intelligent
Technologies and Applied Statistics IJITAS*, 2017, Vol. 10, No. 2,
pp. 41-45.

If we have two random variables ξ_{1} and
ξ_{2}, then we can form their *mixture* if we take
ξ_{1} with some probability w and ξ_{2} with
the remaining probability 1 − w. The probability density
function (pdf) ρ(x) of the mixture is a convex combination of
the pdfs of the original variables: ρ(x) = w *
ρ_{1}(x) + (1 − w) * ρ_{2}(x). A
natural question is: can we use other functions
f(ρ_{1}, ρ_{2}) to combine the pdfs, i.e.,
to produce a new pdf ρ(x) = f(ρ_{1}(x),
ρ_{2}(x))? In this paper, we prove that the only
combination operations that always lead to a pdf are the
operations f(ρ_{1}, ρ_{2}) = w *
ρ_{1} + (1 − w) * ρ_{2},
corresponding to mixture.
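The mixture formula is easy to check numerically: a convex combination of two pdfs is again a pdf. A minimal sketch (the Gaussian pdfs and the weight w = 0.4 are illustrative, not from the paper):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, w, pdf1, pdf2):
    """The convex combination w * rho1(x) + (1 - w) * rho2(x) from the abstract."""
    return w * pdf1(x) + (1 - w) * pdf2(x)

rho1 = lambda x: gaussian_pdf(x, 0.0, 1.0)
rho2 = lambda x: gaussian_pdf(x, 3.0, 0.5)
rho = lambda x: mixture_pdf(x, 0.4, rho1, rho2)

# numerical check: the mixture still integrates to (approximately) 1
step = 0.001
total = sum(rho(-10.0 + i * step) * step for i in range(int(20.0 / step)))
print(round(total, 3))  # close to 1.0
```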

Technical Report UTEP-CS-17-14, February 2017

Experimentally Observed Dark Matter Confinement Clarifies a Discrepancy in Estimating the Universe's Expansion Speed

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 43, pp. 29-34.

It is well known that our Universe is expanding. In principle, we can estimate the expansion speed either directly, by observing the current state of the Universe, or indirectly, by analyzing the cosmic background radiation. Surprisingly, these two estimates lead to somewhat different expansion speeds. This discrepancy is an important challenge for cosmologists. Another challenge comes from recent experiments which show that, contrary to the original idea that dark matter and regular (baryonic) matter practically do not interact, dark matter actually "shadows" the normal matter.

In this paper, we show that this "dark matter confinement" can explain the discrepancy between different estimates of the Universe's expansion speed. It also explains the observed ratio of dark matter to regular matter.

Technical Report UTEP-CS-17-13, February 2017

A Consultation System for Cold-Heat Diagnosis According to Vietnamese Traditional Medicine Combining Positive and Negative Knowledge

Tran Thi Hue, Nguyen Hoang Phuong, and V. Kreinovich

The aim of this paper is to show that, in practice, the Cold-Heat diagnosis of Vietnamese Traditional Medicine is better performed by combining positive and negative knowledge. Based on textbooks and on the experience of traditional medicine practitioners, we build a knowledge base combining positive and negative knowledge for Cold-Heat diagnosis. We then use FuzzRESS, a fuzzy rule-based expert system shell for medical consultation combining positive and negative knowledge, to build the resulting consultation system. Finally, the developed system is demonstrated on the diagnosis of Cold-Heat status according to Vietnamese traditional medicine.

Technical Report UTEP-CS-17-12, February 2017

Revised version UTEP-CS-17-12a, March 2017

In Fuzzy Decision Making, General Fuzzy Sets Can Be Replaced by Fuzzy Numbers

Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2018, Vol. 12,
No. 3, pp. 208-214.

In many real decision situations, for each of the alternatives, we only have fuzzy information about the consequences of each action. This fuzzy information can be described by a fuzzy number, i.e., by a membership function with a single local maximum, or it can be described by a more complex fuzzy set, with several local maxima. We show that, from the viewpoint of decision making, it is sufficient to consider only fuzzy numbers. To be more precise, the decisions will be the same if we replace each original fuzzy set with the smallest of all the fuzzy numbers of which the original fuzzy set is a subset.
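The replacement described in this abstract -- the smallest fuzzy number containing a given fuzzy set -- can be sketched for a discretized membership function: take the pointwise minimum of the running maxima from the left and from the right. This is a minimal illustration, not the paper's formal construction.

```python
def fuzzy_number_hull(mu):
    """Smallest single-peaked (quasi-concave) envelope containing a discretized
    membership function mu: pointwise min of running maxima from both ends."""
    from_left, m = [], 0.0
    for v in mu:                 # running maximum, scanning left to right
        m = max(m, v)
        from_left.append(m)
    from_right, m = [], 0.0
    for v in reversed(mu):       # running maximum, scanning right to left
        m = max(m, v)
        from_right.append(m)
    from_right.reverse()
    return [min(a, b) for a, b in zip(from_left, from_right)]

# a two-peaked fuzzy set and its single-peaked fuzzy-number envelope
mu = [0.0, 0.8, 0.2, 1.0, 0.3, 0.0]
print(fuzzy_number_hull(mu))  # [0.0, 0.8, 0.8, 1.0, 0.3, 0.0]
```

Note how the "valley" at 0.2 is filled in to 0.8, yielding a membership function with a single local maximum that contains the original set.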

Original file UTEP-CS-17-12 in pdf

Updated version UTEP-CS-17-12a in pdf

Technical Report UTEP-CS-17-11, February 2017

Updated version UTEP-CS-17-11a, April 2017

It Is Possible to Determine Exact Fuzzy Values Based on an Ordering of Interval-Valued or Set-Valued Fuzzy Degrees

Gerardo Muela, Olga Kosheleva, Vladik Kreinovich, and Christian Servin

Published in *Proceedings of IEEE International Conference on
Fuzzy Systems FUZZ-IEEE'2017*, Naples, Italy, July 9-12, 2017.

In the usual [0,1]-based fuzzy logic, the actual numerical value of a fuzzy degree can be different depending on the scale; what is important -- and scale-independent -- is the order between different values. To make the description of fuzziness more adequate, it is reasonable to consider interval-valued degrees instead of numerical ones. Here also, what is most important is the order between the degrees. If we only have the order between the intervals, can we, based on this order, reconstruct the original numerical values -- i.e., the degenerate intervals? In this paper, we show that such a reconstruction is indeed possible; moreover, it is possible under three different definitions of order between numerical values.

Original file UTEP-CS-17-11 in pdf

Updated version UTEP-CS-17-11a in pdf

Technical Report UTEP-CS-17-10, February 2017

Revised version UTEP-CS-17-10a, March 2017

Towards Decision Making under General Uncertainty

Andrzej Pownuk, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2017,
Vol. 44, pp. 109-119.

There exist techniques for decision making under specific types of uncertainty, such as probabilistic, fuzzy, etc. Each of the corresponding ways of describing uncertainty has its advantages and limitations. As a result, new techniques for describing uncertainty appear all the time. Instead of trying to extend the existing decision making idea to each of these new techniques one by one, we attempt to develop a general approach that would cover all possible uncertainty techniques.

Original file UTEP-CS-17-10 in pdf

Updated version UTEP-CS-17-10a in pdf

Technical Report UTEP-CS-17-09, January 2017

Updated version UTEP-CS-17-09a, April 2017

Which Material Design Is Possible Under Additive Manufacturing: A Fuzzy Approach

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Additive manufacturing -- also known as 3-D printing -- is a very promising new way to generate complex material designs. However, even with modern advanced techniques, some designs are too complex to be implemented. There exists an empirical formula that describes when a design is implementable. In this paper, we use fuzzy ideas to provide a theoretical justification for this empirical formula.

Original file UTEP-CS-17-09 in pdf

Updated version UTEP-CS-17-09a in pdf

Technical Report UTEP-CS-17-08, January 2017

Revised version UTEP-CS-17-08a, February 2017

(Hypothetical) Negative Probabilities Can Speed Up Uncertainty Propagation Algorithms

Andrzej Pownuk and Vladik Kreinovich

To appear in Aboul Ella Hassanien, Mohamed Elhoseny, Ahmed Farouk,
and Janusz Kacprzyk (eds.), *Quantum Computing: an Environment
for Intelligent Large Scale Real Application*, Springer Verlag

One of the main features of quantum physics is that, as basic objects describing uncertainty, instead of (non-negative) probabilities and probability density functions, we have complex-valued probability amplitudes and wave functions. In particular, in quantum computing, negative amplitudes are actively used. In the current quantum theories, the actual probabilities are always non-negative. However, there have been some speculations about the possibility of actually negative probabilities. In this paper, we show that such hypothetical negative probabilities can lead to a drastic speed up of uncertainty propagation algorithms.

Original file UTEP-CS-17-08 in pdf

Revised version UTEP-CS-17-08a in pdf

Technical Report UTEP-CS-17-07, January 2017

Beyond Traditional Applications of Fuzzy Techniques: Main Idea and Case Studies

Vladik Kreinovich, Olga Kosheleva, and Thongchai Dumrongpokaphan

Published in: Lotfi Zadeh, Ronald R. Yager, Shahnaz N. Sahbazova, Marek
Reformat, and Vladik Kreinovich (eds.), *Recent Developments and
New Direction in Soft Computing: Foundations and Applications*,
Springer Verlag, Cham, Switzerland, 2018, pp. 465-482.

Fuzzy logic techniques were originally designed to translate expert knowledge -- which is often formulated by using imprecise ("fuzzy") words from natural language (like "small") -- into precise computer-understandable models and control strategies. Such a translation is still the main use of fuzzy techniques. Lately, it turned out that fuzzy methods can help in another class of applied problems: namely, in situations when there are semi-heuristic techniques for solving the corresponding problems, i.e., techniques for which there is no convincing theoretical justification. Because of the lack of a theoretical justification, users are reluctant to use these techniques, since their previous empirical success does not guarantee that these techniques will work well on new problems. In this paper, we show that in many such situations, the desired theoretical justification can be obtained if, in addition to known (crisp) requirements on the desired solution, we also take into account requirements formulated by experts in natural-language terms. Naturally, we use fuzzy techniques to translate these imprecise requirements into precise terms.

Technical Report UTEP-CS-17-06, January 2017

Probabilistic and More General Uncertainty-Based (e.g., Fuzzy) Approaches to Crisp Clustering Explain the Empirical Success of the K-Sets Algorithm

Vladik Kreinovich, Olga Kosheleva, Shahnaz Shabazova, and Songsak Sriboonchitta

Recently, a new empirically successful algorithm was proposed for crisp clustering: the K-sets algorithm. In this paper, we show that a natural uncertainty-based formalization of what clustering is automatically leads to the mathematical ideas and definitions behind this algorithm. Thus, we provide an explanation for this algorithm's empirical success.

Technical Report UTEP-CS-17-05, January 2017

Which Value x Best Represents a Sample x

Andrzej Pownuk and Vladik Kreinovich

Published in

In many practical situations, we have several estimates
x_{1}, ...,
x_{n} of the same quantity x. In such situations, it
is desirable to combine this information into a single estimate
x. Often, the estimates come with interval
uncertainty, i.e., instead of the exact values x_{i}, we only know
the intervals [x_{i}] containing these
values. In this paper, we formalize the problem of finding the
combined estimate x as the problem of maximizing the
corresponding utility, and we provide an efficient
(quadratic-time) algorithm for computing the resulting estimate.

Technical Report UTEP-CS-17-04, January 2017

Why Decimal System and Binary System Are the Most Widely Used: A Possible Explanation

Gerardo Muela

What is so special about the numbers 10 and 2 that the decimal and binary systems are the most widely used? One interesting fact about 10 is that when we start with a unit interval and want to construct an interval of half its width, this width is exactly 5/10; when we want to find a square of half the area, its sides are almost exactly 7/10; and when we want to construct a cube of half the volume, its sides are almost exactly 8/10. In this paper, we show that 2, 4, and 10 are the only numbers with this property -- at least among the first billion numbers. This may be a possible explanation of why the decimal and binary systems are the most widely used.
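The abstract does not state the precise optimality criterion behind the claim about 2, 4, and 10, so that claim cannot be checked from the abstract alone. The following sketch merely reproduces the base-10 arithmetic behind the digits 5, 7, and 8: for a base b, it finds the one-digit fraction k/b closest to each of 1/2, 2^{-1/2}, and 2^{-1/3}, together with the approximation error:

```python
def half_construction_digits(b):
    """For base b, find the best one-digit approximations k/b of the
    sides of: an interval of half width (1/2), a square of half area
    (2**-0.5), and a cube of half volume (2**(-1/3)).
    Returns {construction: (k, |k/b - exact value|)}."""
    targets = [("half width", 2 ** -1.0),
               ("half area side", 2 ** -0.5),
               ("half volume side", 2 ** (-1.0 / 3.0))]
    return {name: (round(b * r), abs(round(b * r) / b - r))
            for name, r in targets}

# for base 10 these are exactly the digits 5, 7, 8 from the abstract:
digits10 = half_construction_digits(10)
```

For b = 10 the errors are 0 (exactly 5/10), about 0.007 (7/10 vs. 0.7071...), and about 0.006 (8/10 vs. 0.7937...), matching the "almost exactly" statements above.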

Technical Report UTEP-CS-17-03, January 2017

F-Transform As a First Step Towards a General Approach to Data Processing and Data Fusion

Olga Kosheleva and Vladik Kreinovich

In data fusion, we have several approximations to the desired objects, and we need to fuse them into a single -- more accurate -- approximation. In the traditional approach to data fusion, we usually assume that all the given approximations were obtained by minimizing the same distance function -- most frequently, the Euclidean (L_{2}) distance. In practice, however, we sometimes need to use approximations corresponding to different distance functions. To handle such situations, a new more general approach to data processing and data fusion is needed. In this paper, we show that the simplest cases of such new situations lead to F-transform. Thus, F-transform can be viewed as a first step to such a general approach. From this viewpoint, we explain the formulas for the inverse F-transform, formulas which are empirically successful but which look somewhat strange from the viewpoint of the traditional approximation theory.
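For readers unfamiliar with the technique, here is a minimal sketch of the standard direct and inverse F-transform over a uniform triangular (Ruspini) partition; the node count and the test function are illustrative choices, not taken from the paper:

```python
import numpy as np

def triangular_partition(a, b, n, t):
    """n triangular basis functions A_1..A_n centered at uniform
    nodes of [a, b]; they form a Ruspini partition, i.e., they
    sum to 1 at every point t of [a, b]."""
    nodes = np.linspace(a, b, n)
    h = nodes[1] - nodes[0]
    A = np.maximum(0.0, 1.0 - np.abs(t[None, :] - nodes[:, None]) / h)
    return A, nodes

def f_transform(f_vals, A):
    """Direct F-transform: each component is the A_k-weighted
    average of the function values."""
    return (A @ f_vals) / A.sum(axis=1)

def inverse_f_transform(F, A):
    """Inverse F-transform: blend the components back together
    with the same basis functions."""
    return F @ A

# smooth test function on [0, 1], reconstructed from 11 components:
t = np.linspace(0.0, 1.0, 501)
f_vals = np.sin(2 * np.pi * t)
A, nodes = triangular_partition(0.0, 1.0, 11, t)
F = f_transform(f_vals, A)
f_hat = inverse_f_transform(F, A)
```

The inverse F-transform is thus a weighted-average reconstruction rather than a classical interpolant, which is exactly the feature the paper sets out to explain.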

Technical Report UTEP-CS-17-02, January 2017

Updated version UTEP-CS-17-02a, April 2017

Uncertain Information Fusion and Knowledge Integration: How to Take Reliability into Account

Hung T. Nguyen, Kittawit Autchariyapanitkul, Olga Kosheleva, and Vladik Kreinovich

In many practical situations, we need to fuse and integrate information and knowledge from different sources -- and do it under uncertainty. Most existing methods for information fusion and knowledge integration take into account uncertainty. In addition to uncertainty, we also face the problem of reliability: sensors may malfunction, experts can be wrong, etc. In this paper, we show how to take into account both uncertainty and reliability in information fusion and knowledge integration. We show this on the examples of probabilistic and fuzzy uncertainty.
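As a minimal illustration of combining reliability with probabilistic uncertainty -- one standard approach, not necessarily the exact formulas of this report -- suppose a sensor functions with probability r and otherwise yields no information, leaving us with the prior; the fused distribution is then a mixture:

```python
import numpy as np

def discount_for_reliability(p_sensor, p_prior, r):
    """If the sensor works with probability r (and otherwise its
    reading carries no information), the distribution of the value
    is the mixture r * p_sensor + (1 - r) * p_prior."""
    p_sensor = np.asarray(p_sensor, dtype=float)
    p_prior = np.asarray(p_prior, dtype=float)
    return r * p_sensor + (1 - r) * p_prior

# a 90%-reliable sensor over three possible states, uniform prior:
fused = discount_for_reliability([0.7, 0.2, 0.1], np.ones(3) / 3, 0.9)
```

As r decreases, the fused distribution smoothly falls back to the prior, which is the intuitive effect of distrusting the source.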

Original file UTEP-CS-17-02 in pdf

Updated version UTEP-CS-17-02a in pdf

Technical Report UTEP-CS-17-01, January 2017

Why Unexpectedly Positive Experiences Make Decision Makers More Optimistic: An Explanation

Andrzej Pownuk and Vladik Kreinovich

Published in

Experiments show that unexpectedly positive experiences make decision makers more optimistic. However, there seems to be no convincing explanation for this experimental fact. In this paper, we show that this experimental phenomenon can be naturally explained within the traditional utility-based decision theory.