University of Texas at El Paso
Computer Science Department
Abstracts of 2020 Reports


Technical Report UTEP-CS-20-76, July 2020
Let Us Use Negative Examples in Regression-Type Problems Too
Jonatan Contreras, Francisco Zapata, Olga Kosheleva, Vladik Kreinovich, and Martine Ceberio

In many practical situations, we need to reconstruct the dependence between quantities x and y based on several cases in which we know both the x and y values. Such problems are known as regression problems. Usually, this reconstruction is based on positive examples, in which we know y -- at least with some accuracy. However, we often also have examples with negative information about y -- e.g., we know that y does not belong to a certain interval. In this paper, we show how such negative examples can be used to make the solution to a regression problem more accurate.

File in pdf


Technical Report UTEP-CS-20-75, July 2020
Why Quadratic Log-Log Dependence Is Ubiquitous And What Next
Sean R. Aguilar, Vladik Kreinovich, and Uyen Pham

To appear in Asian Journal of Economics and Banking (AJEB)

In many real-life situations ranging from financial to volcanic data, growth is described either by a power law -- which is linear in the log-log scale -- or by a quadratic dependence in the log-log scale. In this paper, we use a natural scale-invariance requirement to explain the ubiquity of such dependencies. We also explain what a reasonable choice of the next model should be if the quadratic one turns out to be not accurate enough: it turns out that under scale invariance, the next class of models consists of cubic dependencies in the log-log scale, then fourth-order dependencies, etc.
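
As an illustration of what "quadratic in the log-log scale" means in practice, here is a minimal Python sketch; the data is synthetic, and all numerical choices are ours, not taken from the report:

    import numpy as np

    # Synthetic data following a power law y = A * x^a with mild multiplicative
    # noise; A = 2, a = 1.5, and the noise level are illustrative choices.
    rng = np.random.default_rng(0)
    x = np.linspace(1.0, 100.0, 50)
    y = 2.0 * x**1.5 * np.exp(rng.normal(0.0, 0.05, x.size))

    log_x, log_y = np.log(x), np.log(y)

    # Power law = linear fit in the log-log scale; the next model = quadratic fit.
    print("linear fit [a, ln A]:", np.polyfit(log_x, log_y, 1))
    print("quadratic fit [c2, c1, c0]:", np.polyfit(log_x, log_y, 2))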

File in pdf


Technical Report UTEP-CS-20-74, July 2020
How Can Econometrics Help Fight the COVID-19 Pandemic?
Kevin Alvarez and Vladik Kreinovich

To appear in Asian Journal of Economics and Banking (AJEB)

The current pandemic is difficult to model -- and thus, difficult to control. In contrast to previous epidemics, whose dynamics were smooth and well described by the existing models, the statistics of the current pandemic are highly oscillating. In this paper, we show that these oscillations can be explained if we take into account the disease's long incubation period -- as a result of which our control measures are determined by outdated data, showing the number of people infected two weeks ago. To better control the pandemic, we propose to use the experience of economics, where the effects of different measures can likewise be observed only after some time. In the past, this led to wild oscillations of the economy, with rapid growth periods followed by devastating crises. In time, economists learned how to smooth these cycles and thus to drastically decrease the corresponding negative effects. We hope that this experience can help fight the pandemic.

File in pdf


Technical Report UTEP-CS-20-73, July 2020
Gifted and Talented: With Others? Separately? Mathematical Analysis of the Problem
Olga Kosheleva and Vladik Kreinovich

Crudely speaking, there are two main suggestions about teaching gifted and talented students: we can move them to a separate class section, or we can mix them with other students. Both options have pluses and minuses. In this paper, we formulate this problem in precise terms, solve the corresponding mathematical optimization problem, and come up with a somewhat unexpected optimal solution: mixing, but with an unusual twist.

File in pdf


Technical Report UTEP-CS-20-72, June 2020
A Fully Lexicographic Extension of Min or Max Operation Cannot Be Associative
Olga Kosheleva and Vladik Kreinovich

In many applications of fuzzy logic, to estimate the degree of confidence in a statement A&B, we take the minimum min(a,b) of the expert's degrees of confidence in the two statements A and B. When a < b, an increase in b does not change this estimate, while from the commonsense viewpoint, our degree of confidence in A&B should increase. To take this commonsense idea into account, Ildar Batyrshin and colleagues proposed to extend the original order on the interval [0,1] to a lexicographic order on a larger set. This idea works for expressions of the type A&B; a natural question is whether we can extend it to more general expressions. In this paper, we show that such an extension, while theoretically possible, would violate another commonsense requirement -- associativity of the "and"-operation. A similar negative result is proven for lexicographic extensions of the maximum operation, which estimates the expert's degree of confidence in a statement A\/B.
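
The effect described above is easy to see in a two-line Python sketch: with the min-based "and"-operation, strengthening the larger degree of confidence leaves the combined estimate unchanged:

    # min-based "and"-operation of fuzzy logic
    fuzzy_and = min

    print(fuzzy_and(0.3, 0.5))  # 0.3
    print(fuzzy_and(0.3, 0.9))  # still 0.3, although confidence in B has grown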

File in pdf


Technical Report UTEP-CS-20-71, June 2020
What Is the Optimal Annealing Schedule in Quantum Annealing?
Oscar Galindo and Vladik Kreinovich

In many real-life situations in engineering (and in other disciplines), we need to solve an optimization problem: we want an optimal design, we want an optimal control, etc. One of the main problems in optimization is avoiding local maxima (or minima). One of the techniques that helps solve this problem is annealing: whenever we find ourselves in a possibly local maximum, we jump out with some probability and continue the search for the true optimum. A natural way to organize such a probabilistic perturbation of the deterministic optimization is to use quantum effects. It turns out that quantum annealing often works much better than its non-quantum counterpart. Quantum annealing is the main technique behind the only commercially available computational devices that use quantum effects -- D-Wave computers. The efficiency of quantum annealing depends on the proper selection of the annealing schedule, i.e., the schedule that describes how the perturbations decrease with time. Empirically, it has been found that two schedules work best: the power law and the exponential ones. In this paper, we provide a theoretical explanation for these empirical successes by proving that these two schedules are indeed optimal (in some reasonable sense).

File in pdf


Technical Report UTEP-CS-20-70, June 2020
Lexicographic-Type Extension of Min-Max Logic Is Not Uniquely Determined
Olga Kosheleva and Vladik Kreinovich

Since in a computer, "true" is usually represented as 1 and "false" as 0, it is natural to represent intermediate degrees of confidence by numbers intermediate between 0 and 1; this is one of the main ideas behind fuzzy logic -- a technique that has led to many useful applications. In many such applications, the degree of confidence in A & B is estimated as the minimum of the degrees of confidence corresponding to A and B, and the degree of confidence in A \/ B is estimated as the maximum; for example, 0.5 \/ 0.3 = 0.5. It is intuitively OK that, e.g., 0.5 \/ 0.3 < 0.51 and, more generally, that 0.5 \/ 0.3 < 0.5 + ε for all ε > 0. However, intuitively, an additional argument in favor of a statement should increase our degree of confidence, i.e., we should have 0.5 < 0.5 \/ 0.3. To capture this intuitive idea, we need to extend the min-max logic from the interval [0,1] to a lexicographic-type order on a larger set. Such an extension has been proposed -- and successfully used in applications -- for some propositional formulas. A natural question is: can this construction be uniquely extended to all "and"-"or" formulas? In this paper, we show that, in general, such an extension is not unique.

File in pdf


Technical Report UTEP-CS-20-69, June 2020
How to Train A-to-B and B-to-A Neural Networks So That the Resulting Transformations Are (Almost) Exact Inverses
Paravee Maneejuk, Torben Peters, Claus Brenner, and Vladik Kreinovich

In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually only 1-10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One of the natural ways to get a fast-computing approximation to a transformation is to train the corresponding neural network. The problem is that if we train A-to-B and B-to-A networks separately, the resulting approximate transformations are only approximately inverse to each other. As a result, each time we transform back and forth, we add new approximation error -- and the accumulated error may become significant. In this paper, we show how we can avoid this accumulation. Specifically, we show how to train A-to-B and B-to-A neural networks so that the resulting transformations are (almost) exact inverses.

File in pdf


Technical Report UTEP-CS-20-68, June 2020
Why LASSO, Ridge Regression, and EN: Explanation Based on Soft Computing
Woraphon Yamaka, Hamza Alkhatib, Ingo Neumann, and Vladik Kreinovich

In many practical situations, observations and measurement results are consistent with many different models -- i.e., the corresponding problem is ill-posed. In such situations, a reasonable idea is to take into account that the values of the corresponding parameters should not be too large; this idea is known as regularization. Several different regularization techniques have been proposed; empirically, the most successful are the LASSO method, in which we bound the sum of the absolute values of the parameters, the ridge regression method, in which we bound the sum of their squares, and the Elastic Net (EN) method, in which these two approaches are combined. In this paper, we explain the empirical success of these methods by showing that they can be naturally derived from soft computing ideas.
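
For readers who want to experiment, here is a minimal Python sketch of the three techniques, using their standard scikit-learn implementations; the data and the regularization strength alpha are illustrative choices, not values from the report:

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge, ElasticNet

    # Synthetic ill-posed-style problem: 20 inputs, only the first 3 relevant.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 20))
    true_coef = np.zeros(20)
    true_coef[:3] = [2.0, -1.0, 0.5]
    y = X @ true_coef + rng.normal(scale=0.1, size=100)

    # LASSO bounds the sum of absolute values, ridge the sum of squares,
    # Elastic Net combines both.
    for model in (Lasso(alpha=0.1), Ridge(alpha=0.1),
                  ElasticNet(alpha=0.1, l1_ratio=0.5)):
        model.fit(X, y)
        print(type(model).__name__, np.round(model.coef_[:5], 2))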

File in pdf


Technical Report UTEP-CS-20-67, June 2020
When Can We Be Sure that Measurement Results Are Consistent: 1-D Interval Case and Beyond
Hani Dbouk, Steffen Schoen, Ingo Neumann, and Vladik Kreinovich

In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious, but from the practical viewpoint, if the intersection is too narrow -- i.e., almost empty -- then we should also be suspicious. To be on the safe side, it is desirable to take a measurement into account only if we are sufficiently sure that it is not an outlier. In this paper, we describe a natural way to formalize this idea.
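
A minimal Python sketch of this consistency check follows; the specific narrowness threshold is a hypothetical illustration, not the criterion derived in the report:

    def intersection(a, b):
        """Intersection of two intervals (lo, hi), or None if it is empty."""
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo <= hi else None

    x1 = (10.0, 10.6)   # first measurement: value lies in [10.0, 10.6]
    x2 = (10.5, 11.2)   # second measurement of the same quantity

    common = intersection(x1, x2)
    if common is None:
        print("inconsistent: at least one measurement is an outlier")
    elif common[1] - common[0] < 0.2:   # hypothetical "too narrow" threshold
        print("suspiciously narrow intersection:", common)
    else:
        print("consistent:", common)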

File in pdf


Technical Report UTEP-CS-20-66, June 2020
Which Classes of Bi-Intervals Are Closed Under Addition?
Olga Kosheleva, Vladik Kreinovich, and Jonatan Contreras

In many practical situations, the uncertainty with which we know each quantity is described by an interval. In processing such data, it is useful to know that the sum of two intervals is always an interval. In some cases, however, the set of all possible values of a quantity is described by a bi-interval -- i.e., by a union of two intervals. It is known that the sum of two bi-intervals is not always a bi-interval. In this paper, we describe all classes of bi-intervals which are closed under addition -- i.e., for which the sum of bi-intervals is always a bi-interval.
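
To see why closure under addition can fail, here is a small Python sketch: the sum of two bi-intervals is the union of the four pairwise interval sums, and these pieces need not merge into at most two intervals. The numbers are purely illustrative:

    def add_bi_intervals(A, B):
        """Minkowski sum of two unions of intervals, merged into disjoint pieces."""
        pieces = sorted((a[0] + b[0], a[1] + b[1]) for a in A for b in B)
        merged = [pieces[0]]
        for lo, hi in pieces[1:]:
            if lo <= merged[-1][1]:     # overlaps or touches the previous piece
                merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
            else:
                merged.append((lo, hi))
        return merged

    A = [(0.0, 1.0), (10.0, 11.0)]
    B = [(0.0, 1.0), (5.0, 6.0)]
    print(add_bi_intervals(A, B))   # four disjoint intervals -- not a bi-interval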

File in pdf


Technical Report UTEP-CS-20-65, June 2020
Preference for Boys Does Not Necessarily Lead to a Gender Disbalance: A Realistic Example
Olga Kosheleva and Vladik Kreinovich

To appear in International Mathematical Forum

Intuitively, it seems that a cultural preference for boys should lead to a gender disbalance -- more boys than girls. Such a disbalance is indeed often observed, and it is what many models predict. However, in this paper, we show, on a realistic example, that preference for boys does not necessarily lead to a gender disbalance: in our simplified example, boys are clearly preferred, but still there are exactly as many girls as there are boys.

File in pdf


Technical Report UTEP-CS-20-64, June 2020
Common-Sense-Based Theoretical Explanation for an Empirical Formula Estimating Road Quality
Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

To appear in International Mathematical Forum

The quality of a road is usually gauged by a group of trained raters; the resulting numerical value is known as the Present Serviceability Index (PSI). There are, however, two problems with this approach. First, while it is practical to use trained raters to gauge the quality of major highways, there are also numerous not-so-major roads, and there are not enough trained raters to gauge the quality of all of them. Second, even for skilled raters, the estimates are somewhat subjective: different groups of raters may estimate the quality of the same road segment somewhat differently. Because of these two problems, it is desirable to be able to estimate the PSI based on objective measurable characteristics. There exists a formula for such estimation recommended by the current standards. Its limitation is that this formula is purely empirical. In this paper, we provide a common-sense-based theoretical explanation for this formula.

File in pdf


Technical Report UTEP-CS-20-63, June 2020
Healthy Lifestyle Decreases the Risk of Alzheimer Disease: A Possible Partial Explanation of an Empirical Dependence
Olga Kosheleva and Vladik Kreinovich

To appear in International Mathematical Forum

A recent paper showed that for people who follow all five healthy lifestyle recommendations, the risk of Alzheimer disease is only 40% of the risk for those who do not follow any of these recommendations, and that for people who follow two or three of these recommendations, the risk is 63% of the non-followers' risk. In this paper, we show that a relation between the two numbers -- namely, that 0.40 is the square of 0.63 -- can be naturally explained by a simple model.

File in pdf


Technical Report UTEP-CS-20-62, June 2020
What If Not All Interval-Valued Fuzzy Degrees Are Possible?
Olga Kosheleva and Vladik Kreinovich

One of the applications of intervals is in describing experts' degrees of certainty in their statements. In this application, not all intervals are realistically possible. To describe all realistically possible degrees, we end up with a mathematical question of describing all topologically closed classes of intervals which are closed under the appropriate minimum and maximum operations. In this paper, we provide a full description of all such classes.

File in pdf


Technical Report UTEP-CS-20-61, June 2020
Decision Making Under Interval Uncertainty Revisited
Olga Kosheleva, Vladik Kreinovich, and Uyen Pham

To appear in Asian Journal of Economics and Banking, 2021.

In many real-life situations, we do not know the exact values of the expected gain corresponding to different possible actions; we only have lower and upper bounds on these gains -- i.e., in effect, intervals of possible gain values. How can we make decisions under such interval uncertainty? In this paper, we show that natural requirements lead to a 2-parametric family of possible decision-making strategies.

File in pdf


Technical Report UTEP-CS-20-60, June 2020
Are There Traces of Megacomputing in Our Universe?
Olga Kosheleva and Vladik Kreinovich

The recent successes of quantum computing have encouraged many researchers to search for other unconventional physical phenomena that could potentially speed up computations. Several promising schemes have been proposed that will -- hopefully -- lead to faster computations in the future. Some of these schemes -- similarly to quantum computing -- involve using events from the micro-world; others involve using large-scale phenomena. If some civilization used the micro-world for computations, this would be difficult for us to notice; but if it used mega-scale effects, maybe we can notice these phenomena? In this paper, we analyze what possible traces such megacomputing could leave -- and come up with rather surprising conclusions.

File in pdf


Technical Report UTEP-CS-20-59, June 2020
How to Detect Future Einsteins: Towards Systems Approach
Olga Kosheleva and Vladik Kreinovich

Published in Exceptional Children: Education and Treatment, 2020, Vol. 2, No. 3, pp. 267-274.

Talents are rare. It is therefore important to detect and nurture future talents as early as possible. In many disciplines, this is already being done -- via gifted and talented programs, Olympiads, and other ways to select kids with unusually high achievements. However, the current approach is not perfect: some kids are selected simply because they are early bloomers, and they do not grow into unusually successful researchers; on the other hand, many of those who later become very successful are not selected, since they are late bloomers. To avoid these problems, we propose to use a systems approach: to find a general formula for the students' growth rate -- a formula that would predict a student's future achievements based on his or her current and previous achievement levels -- and then to select students based on the formula's prediction of their future success.

File in pdf


Technical Report UTEP-CS-20-58, June 2020
Natural Invariance Explains Empirical Success of Specific Membership Functions, Hedge Operations, and Negation Operations
Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Gyorgy Eigner, and Vladik Kreinovich

To appear in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

Empirical studies have shown that in many practical problems, out of all symmetric membership functions, special distending functions work best, and out of all hedge operations and negation operations, fractional linear ones work best. In this paper, we show that these empirical successes can be explained by natural invariance requirements.

File in pdf


Technical Report UTEP-CS-20-57, June 2020
How Mathematics and Computing Can Help Fight the Pandemic: Two Pedagogical Examples
Julio Urenda, Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

To appear in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

With the 2020 pandemic came unexpected mathematical and computational problems. In this paper, we provide two examples of such problems -- examples that we present in simplified pedagogical form. The problems are related to the need for social distancing and to the need for fast testing. We hope that these examples will help students better understand the importance of mathematical models.

File in pdf


Technical Report UTEP-CS-20-56, June 2020
Approximate Version of Interval Computation Is Still NP-Hard
Vladik Kreinovich and Olga Kosheleva

It is known that, in general, the problem of computing the range of a given polynomial on given intervals is NP-hard. For some NP-hard optimization problems, the approximate version -- e.g., if we want to find the value differing from the maximum by no more than a factor of 2 -- becomes feasible. Thus, a natural question is: what if instead of computing the exact range, we want to compute the enclosure which is, e.g., no more than twice wider than the actual range? In this paper, we show that this approximate version is still NP-hard, whether we want it to be twice wider or k times wider, for any k.
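
The notion of an enclosure that is wider than the actual range can be illustrated by naive interval arithmetic in Python; the polynomial below is our illustrative example, not one from the report:

    # f(x) = x - x*x on [0, 1]: the exact range is [0, 0.25] (maximum at x = 0.5),
    # but straightforward interval arithmetic yields the much wider enclosure [-1, 1].
    def i_sub(a, b):
        return (a[0] - b[1], a[1] - b[0])

    def i_mul(a, b):
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    x = (0.0, 1.0)
    print(i_sub(x, i_mul(x, x)))   # (-1.0, 1.0)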

File in pdf


Technical Report UTEP-CS-20-55, May 2020
Online Teaching -- Systems Approach: Questions and Answers
Olga Kosheleva and Vladik Kreinovich

At a recent International Forum on Teacher Education (Kazan, Russia, May 27-29, 2020), special sessions were devoted to questions related to online teaching -- in view of the recent forced world-wide transition to online-only education. This article summarizes, in a systematic way, issues discussed at these sessions.

File in pdf


Technical Report UTEP-CS-20-54, May 2020
Advice to New Instructors: Systems Approach
Olga Kosheleva and Vladik Kreinovich

A recent paper provided useful systems-based advice to new school teachers. In this paper, we somewhat modify this advice so that it can be applied to new instructors at all levels, including new instructors at the university level.

File in pdf


Technical Report UTEP-CS-20-53, May 2020
Neural Networks
Vladik Kreinovich

A neural network is a general term for machine learning tools that emulate how neurons work in our brains.

Ideally, these tools do what we scientists are supposed to do: we feed them examples of the observed system's behavior, and hopefully, based on these examples, they will predict the future behavior of similar systems. Sometimes they do predict -- but in many other cases, the situation is not so simple.

The goal of this entry is to explain what these tools can and cannot do -- without going into too many technical details.

File in pdf


Technical Report UTEP-CS-20-52, May 2020
How Expert Knowledge Can Help Measurements: Three Case Studies
Vladik Kreinovich

In addition to measurement results, we often have expert estimates. These estimates provide additional information about the corresponding quantities. However, it is not clear how to incorporate these estimates into a metrological analysis: such analysis is usually based on justified statistical estimates, but expert estimates are usually not similarly justified. One way to solve this problem is to calibrate an expert the same way we calibrate measuring instruments. In the first two case studies, we show that such a calibration indeed leads to useful results. The third case study provides an example of another use of expert knowledge in measurement practice: this knowledge can be used to make semi-empirical measurement models more explainable -- and thus, more reliable.

File in pdf


Technical Report UTEP-CS-20-51, May 2020
Formal Concept Analysis Techniques Can Help in Intelligent Control, Deep Learning, etc.
Vladik Kreinovich

To appear in Proceedings of the 15th International Conference on Concept Lattices and Their Applications CLA'2020, Tallinn, Estonia, June 29 - July 1, 2020.

In this paper, we show that formal concept analysis is a particular case of a more general problem that includes deriving rules for intelligent control, finding appropriate properties for deep learning algorithms, etc. Because of this, we believe that formal concept analysis techniques can be (and need to be) extended to these application areas as well. To show that such an extension is possible, we explain how these techniques can be applied to intelligent control.

File in pdf


Technical Report UTEP-CS-20-50, May 2020
Why Most Empirical Distributions Are Few-Modal
Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

In principle, any non-negative function can serve as a probability density function -- provided that it adds up to 1. All kinds of processes are possible, so it seems reasonable to expect that observed probability density functions are random with respect to some appropriate probability measure on the set of all such functions -- and for all such measures, similarly to the simplest case of random walk, almost all functions have infinitely many local maxima and minima. However, in practice, most empirical distributions have only a few local maxima and minima -- often one (unimodal distribution), sometimes two (bimodal), and, in general, they are few-modal. From this viewpoint, econometrics is no exception: empirical distributions of economics-related quantities are also usually few-modal. In this paper, we provide a theoretical explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-20-49, May 2020
How to Estimate the Stiffness of the Multi-Layer Road Based on Properties of Layers: Symmetry-Based Explanation for Odemark's Equation
Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

When we design a road, we would like to check that the current design provides the pavement with sufficient stiffness to withstand traffic loads and climatic conditions. For this purpose, we need to estimate the stiffness of the road based on stiffness and thickness of its different layers. There exists a semi-empirical formula for this estimation. In this paper, we show that this formula can be explained by natural scale-invariance requirements.

File in pdf


Technical Report UTEP-CS-20-48, May 2020
Why It Is Sufficient to Have Real-Valued Amplitudes in Quantum Computing
Isaac Bautista, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

In the last decades, a lot of attention has been placed on quantum algorithms -- algorithms that will run on future quantum computers. In principle, quantum systems can use any complex-valued amplitudes. However, in practice, quantum algorithms only use real-valued amplitudes. In this paper, we provide a simple explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-20-47, May 2020
Why Some Power Laws Are Possible And Some Are Not
Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

Many dependencies between quantities are described by power laws, in which y is proportional to x raised to some power a. In some application areas, in different situations, we observe all possible pairs (A,a) of the coefficient of proportionality A and the exponent a. In other application areas, however, not all combinations (A,a) are possible: once we fix the coefficient A, it uniquely determines the exponent a. In such cases, the dependence of a on A is usually described by an empirical logarithmic formula. In this paper, we show that natural scale-invariance ideas lead to a theoretical explanation for this empirical formula.

File in pdf


Technical Report UTEP-CS-20-46, May 2020
Optimization under Fuzzy Constraints: Need to Go Beyond Bellman-Zadeh Approach and How It Is Related to Skewed Distributions
Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

In many practical situations, we need to optimize the objective function under fuzzy constraints. Formulas for such optimization have been known since the 1970s paper by Richard Bellman and Lotfi Zadeh, but these formulas have a limitation: small changes in the corresponding degrees can lead to a drastic change in the resulting selection. In this paper, we propose a natural modification of this formula, a modification that no longer has this limitation. Interestingly, this formula turns out to be related to formulas for skewed (asymmetric) generalizations of the normal distribution.

File in pdf


Technical Report UTEP-CS-20-45, May 2020
Absence of Remotely Triggered Large Earthquakes: A Geometric Explanation
Laxman Bokati, Aaron Velasco, and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

It is known that seismic waves from a large earthquake can trigger earthquakes in distant locations. Some of the triggered earthquakes are strong themselves. Interestingly, strong triggered earthquakes only happen within a reasonably small distance (less than 1000 km) from the original earthquake. Even catastrophic earthquakes do not trigger any strong earthquakes beyond this distance. In this paper, we provide a possible geometric explanation for this phenomenon.

File in pdf


Technical Report UTEP-CS-20-44, May 2020
How to Efficiently Store Intermediate Results in Quantum Computing: Theoretical Explanation of the Current Algorithm
Oscar Galindo, Olga Kosheleva, and Vladik Kreinovich

In complex time-consuming computations, we rarely have uninterrupted access to a high performance computer: usually, in the process of computation, some interruptions happen, so we need to store intermediate results until computations resume. To decrease the probability of a mistake, it is often necessary to run several identical computations in parallel, in which case several identical intermediate results need to be stored. In particular, for quantum computing, we need to store several independent identical copies of the corresponding qubits -- quantum versions of bits. Storing qubit states is not easy, but it is possible to compress the corresponding multi-qubit states: for example, it is possible to store the resulting 3-qubit state by using only two qubits. In principle, there are many different ways to store the state of 3 independent identical qubits by using two qubits. In this paper, we show that the current algorithm for such storage is uniquely determined by the natural symmetry requirements.

File in pdf


Technical Report UTEP-CS-20-43, May 2020
Economics of Reciprocity and Temptation
Laxman Bokati, Olga Kosheleva, Vladik Kreinovich, and Nguyen Ngoc Thach

Behavioral economics has shown that in many situations, people's behavior differs from what is predicted by simple traditional utility-maximization economic models. It is therefore desirable to be able to accurately describe people's actual behavior. In some cases, the difference from the traditional models is caused by bounded rationality -- our limited ability to process information and to come up with truly optimal solutions. In such cases, predicting people's behavior is difficult. In other cases, however, people actually optimize -- but the actual expression for utility is more complicated than in the traditional models. In such cases, it is, in principle, possible to predict people's behavior. In this paper, we show that two phenomena -- reciprocity and temptation -- can be explained by optimizing a complex utility expression. We hope that this explanation will eventually lead to accurate prediction of these phenomena.

File in pdf


Technical Report UTEP-CS-20-42, May 2020
Towards Fast and Understandable Computations: Which "And"- and "Or"-Operations Can Be Represented by the Fastest (i.e., 1-Layer) Neural Networks? Which Activation Functions Allow Such Representations?
Kevin Alvarez, Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Gyorgy Eigner, and Vladik Kreinovich

We want computations to be fast, and we want them to be understandable. As we show, the need for computations to be fast naturally leads to neural networks, with 1-layer networks being the fastest, and the need to be understandable naturally leads to fuzzy logic and to the corresponding "and"- and "or"-operations. Since we want our computations to be both fast and understandable, a natural question is: which "and"- and "or"-operations of fuzzy logic can be represented by the fastest (i.e., 1-layer) neural networks? And a related question is: which activation functions allow such a representation? In this paper, we provide an answer to both questions: the only "and"- and "or"-operations that can be thus represented are max(0, a + b - 1) and min(a + b, 1), and the only activation functions allowing such a representation are equivalent to the rectified linear function -- the one used in deep learning. This result provides an additional explanation of why rectified linear neurons are so successful. We also show that with full 2-layer networks, we can compute practically any "and"- and "or"-operation.
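
A minimal Python sketch of the representation result follows; the weights and biases are one natural choice consistent with the formulas above:

    import numpy as np

    def relu(z):                        # rectified linear activation
        return np.maximum(z, 0.0)

    def and_op(a, b):                   # one neuron: weights (1, 1), bias -1
        return relu(a + b - 1.0)        # equals max(0, a + b - 1)

    def or_op(a, b):                    # one neuron plus an output offset
        return 1.0 - relu(1.0 - a - b)  # equals min(a + b, 1)

    a, b = 0.7, 0.6
    print(and_op(a, b), max(0.0, a + b - 1.0))  # 0.3 0.3
    print(or_op(a, b), min(a + b, 1.0))         # 1.0 1.0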

File in pdf


Technical Report UTEP-CS-20-41, May 2020
How the Proportion of People Who Agree to Perform a Task Depends on the Stimulus: A Theoretical Explanation of the Empirical Formula
Laxman Bokati, Vladik Kreinovich, and Doan Thanh Ha

For each task, the larger the stimulus, the larger the proportion of people who agree to perform this task. In many economic situations, it is important to know how much stimulus we need to offer so that a sufficient proportion of people will agree to perform the needed task. There is an empirical formula describing how this proportion increases as we increase the amount of stimulus. However, this empirical formula lacks a convincing theoretical explanation, as a result of which practitioners are somewhat reluctant to use it. In this paper, we provide a theoretical explanation for this empirical formula, thus making it more reliable -- and hence, more usable.

File in pdf


Technical Report UTEP-CS-20-40, May 2020
Reward for Good Performance Works Better Than Punishment for Mistakes: Economic Explanation
Olga Kosheleva, Julio Urenda, and Vladik Kreinovich

How should we stimulate people to make them perform better? How should we stimulate students to make them study better? Many experiments have shown that reward for good performance works better than punishment for mistakes. In this paper, we provide a possible theoretical explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-20-39, May 2020
Commonsense Explanations of Sparsity, Zipf Law, and Nash's Bargaining Solution
Olga Kosheleva, Vladik Kreinovich, and Kittawit Autchariyapanikul

As econometric models become more and more accurate and more and more mathematically complex, they also become less and less intuitively clear and convincing. To make these models more convincing, it is desirable to supplement the corresponding mathematics with commonsense explanations. In this paper, we provide such explanation for three economics-related concepts: sparsity (as in LASSO), Zipf's Law, and Nash's bargaining solution.

File in pdf


Technical Report UTEP-CS-20-38, April 2020
A Recent Result about Random Metrics Explains Why All of Us Have Similar Learning Potential
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

In the same class, after the same lesson, the amount of learned material often differs drastically, by a factor of ten. Does this mean that people have that different learning abilities? Not really: experiments show that among different students, learning abilities differ by no more than a factor of two. This fact has been successfully used in designing innovative teaching techniques -- techniques that help students realize their full learning potential. In this paper, we deal with a different question: how to explain the above experimental result. It turns out that this result about learning abilities -- which are, due to genetics, randomly distributed among the human population -- can be naturally explained by a recent mathematical result about random metrics.

File in pdf


Technical Report UTEP-CS-20-37, April 2020
How to Explain the Anchoring Formula in Behavioral Economics
Laxman Bokati, Vladik Kreinovich, and Chon Van Le

According to traditional economics, the price that a person is willing to pay for an item should be uniquely determined by the value that this person will get from this item; it should not depend, e.g., on the asking price proposed by the seller. In reality, the price that a person is willing to pay does depend on the asking price; this is known as the anchoring effect. In this paper, we provide a natural justification for the empirical formula that describes this effect.

File in pdf


Technical Report UTEP-CS-20-36, April 2020
A "Fuzzy" Like Button Can Decrease Echo Chamber Effect
Olga Kosheleva and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

One of the big problems of US political life is the echo chamber effect -- in spite of the abundance of materials on the web, many people only read materials confirming their own opinions. The resulting polarization often deadlocks the political situation and prevents politicians from reaching the compromises needed to make necessary changes. In this paper, we show, on a simplified model, that the echo chamber effect can be decreased if we simply replace the currently prevalent binary (yes-no) Like button on webpages with a more gradual ("fuzzy") one -- a button that captures the relative degree of liking.

File in pdf


Technical Report UTEP-CS-20-35, April 2020
Why Geometric Progression in Selecting the LASSO Parameter: A Theoretical Explanation
William Kubin, Yi Xie, Laxman Bokati, Vladik Kreinovich, and Kittawit Autchariyapanitkul

In situations when we know which inputs are relevant, the least squares method is often the best way to solve linear regression problems. However, in many practical situations, we do not know beforehand which inputs are relevant and which are not. In such situations, a 1-parameter modification of the least squares method known as LASSO leads to more adequate results. To use LASSO, we need to determine the value of the LASSO parameter that best fits the given data. In practice, this parameter is determined by trying all the values from some discrete set. It has been empirically shown that this selection works the best if we try values from a geometric progression. In this paper, we provide a theoretical explanation for this empirical fact.
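
A sketch of this practice in Python: we try LASSO parameters forming a geometric progression and keep the best-scoring one. The grid endpoints and the data are illustrative choices, not values from the report:

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(80, 15))
    y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=80)

    # Geometric progression of candidate values of the LASSO parameter.
    grid = np.geomspace(1e-4, 1.0, num=20)
    scores = [cross_val_score(Lasso(alpha=a), X, y, cv=5).mean() for a in grid]
    print("best parameter on the geometric grid:", grid[int(np.argmax(scores))])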

File in pdf


Technical Report UTEP-CS-20-34, April 2020
Why There Are Only Four Fundamental Forces: A Possible Explanation
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 5, No. 4, pp. 151-153.

It is known that there are exactly four fundamental forces of nature: gravity, forces corresponding to weak interactions, electromagnetic forces, and forces corresponding to strong interactions. In this paper, we provide a possible explanation of why there are exactly four fundamental forces: namely, we relate this number to the dimension of physical space-time.

File in pdf


Technical Report UTEP-CS-20-33, April 2020
Scale-Invariance Ideas Explain the Empirical Soil-Water Characteristic Curve
Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

The prediction of a road's properties under the influence of water infiltration is important for pavement design and management. Traditionally, this prediction heavily relied on expert estimates. In the last decades, complex empirical formulas have been proposed to capture the experts' intuition in estimating the effect of water infiltration on the stiffness of the pavement's layers. Of special importance is the effect of water intrusion on the pavement's foundation -- known as subgrade soil. In this paper, we show that natural scale-invariance ideas lead to a theoretical explanation for an empirical formula describing the dependence between soil suction and water content; formulas describing this dependence are known as soil-water characteristic curves.

File in pdf


Technical Report UTEP-CS-20-32, April 2020
Optimal Search under Constraints
Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

To appear in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

In general, if we know values a and b at which a continuous function f(x) has different signs -- and the function is given as a black box -- the fastest possible way to find a root x for which f(x) = 0 is bisection (also known as binary search). In some applications, however -- e.g., when finding the optimal dose of a medicine -- we cannot use this algorithm, since, to avoid negative side effects, we can only try values which exceed the optimal dose by no more than some small value δ > 0. In this paper, we show how to modify bisection to get an optimal algorithm for search under such a constraint.
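
For reference, here is standard bisection in Python; the report's algorithm modifies this scheme so that no tried value exceeds the root by more than δ:

    def bisection(f, a, b, eps=1e-8):
        """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign."""
        fa = f(a)
        while b - a > eps:
            m = 0.5 * (a + b)
            if fa * f(m) <= 0:
                b = m            # the sign change is in [a, m]
            else:
                a, fa = m, f(m)  # the sign change is in [m, b]
        return 0.5 * (a + b)

    print(bisection(lambda x: x * x - 2.0, 0.0, 2.0))   # ~1.41421356, i.e., sqrt(2)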

File in pdf


Technical Report UTEP-CS-20-31, April 2020
Equations for Which Newton's Method Never Works: Pedagogical Examples
Leobardo Valera, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

To appear in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

One of the most widely used methods for solving equations is the classical Newton's method. While this method often works -- and is used in computers for computations ranging from square root to division -- sometimes, this method does not work. Usual textbook examples describe situations when Newton's method works for some initial values but not for others. A natural question that students often ask is whether there exist functions for which Newton's method never works -- unless, of course, the initial approximation is already the desired solution. In this paper, we provide simple examples of such functions.
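
A classic example of such a function -- not necessarily the one from the report -- is f(x) = x^(1/3), whose only root is x = 0: the Newton step x - f(x)/f'(x) equals x - 3x = -2x, so every nonzero starting point oscillates in sign and doubles in magnitude. A Python sketch:

    import numpy as np

    f = np.cbrt                                        # f(x) = x^(1/3)
    f_prime = lambda x: 1.0 / (3.0 * np.cbrt(x) ** 2)  # f'(x) = (1/3) x^(-2/3)

    x = 0.001
    for step in range(6):
        x = x - f(x) / f_prime(x)   # Newton's iteration: x becomes -2x
        print(step, x)              # -0.002, 0.004, -0.008, ...: diverges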

File in pdf


Technical Report UTEP-CS-20-30, April 2020
Centroids Beyond Defuzzification
Juan Carlos Figueroa–Garcia, Christian Servin, and Vladik Kreinovich

To appear in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

In general, expert rules expressed by imprecise (fuzzy) words of natural language like "small" lead to imprecise (fuzzy) control recommendations. If we want to design an automatic controller, we need, based on these fuzzy recommendations, to generate a single control value. A procedure for such generation is known as defuzzification. The most widely used defuzzification procedure is centroid defuzzification, in which, as the desired control value, we use one of the coordinates of the center of mass ("centroid") of an appropriate 2-D set. A natural question is: what is the meaning of the second coordinate of this center of mass? In this paper, we show that this second coordinate describes the overall measure of fuzziness of the resulting recommendation.
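
A minimal Python sketch of both coordinates of this center of mass, for an illustrative triangular membership function mu(x): the x-coordinate is the usual defuzzified value, and the y-coordinate of the region under mu equals (1/2)·∫mu² dx / ∫mu dx (the interpretation of this second coordinate as a measure of fuzziness is the report's result):

    import numpy as np

    x = np.linspace(0.0, 10.0, 1001)
    mu = np.clip(1.0 - np.abs(x - 4.0) / 2.0, 0.0, 1.0)   # triangular "about 4"
    dx = x[1] - x[0]

    area = mu.sum() * dx
    x_centroid = (x * mu).sum() * dx / area          # defuzzified control value: 4.0
    y_centroid = (0.5 * mu**2).sum() * dx / area     # the "second coordinate"
    print(x_centroid, y_centroid)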

File in pdf


Technical Report UTEP-CS-20-29, April 2020
Which Algorithms Are Feasible and Which Are Not: Fuzzy Techniques Can Help in Formalizing the Notion of Feasibility
Olga Kosheleva and Vladik Kreinovich

To appear in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

Some algorithms are practically feasible, in the sense that for all inputs of reasonable length they provide the result in reasonable time. Other algorithms are not practically feasible, in the sense that they may work well for small-size inputs, but for slightly larger -- but still reasonable-size -- inputs, the computation time becomes astronomical (and not practically possible). How can we describe practical feasibility in precise terms? The usual formalization of the notion of feasibility states that an algorithm is feasible if its computation time is bounded by a polynomial of the size of the input. In most cases, this definition works well, but sometimes, it does not: e.g., according to this definition, every algorithm requiring a constant number of computational steps is feasible, even when this number of steps is larger than the number of particles in the Universe. In this paper, we show that by using fuzzy logic, we can naturally come up with a more adequate description of practical feasibility.

File in pdf


Technical Report UTEP-CS-20-28, April 2020
Is There a Contradiction Between Statistics and Fairness: From Intelligent Control to Explainable AI
Christian Servin and Vladik Kreinovich

To appear in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

At first glance, there seems to be a contradiction between statistics and fairness: statistics-based AI techniques lead to unfair discrimination based on gender, race, and socio-economic status. This is not just a fault of probabilistic techniques: similar problems can happen if we use fuzzy or other techniques for processing uncertainty. To attain fairness, several authors proposed not to rely on statistics and, instead, to explicitly add fairness constraints into decision making. In this paper, we show that the seeming contradiction between statistics and fairness is caused mostly by the fact that the existing systems use simplified models; the contradictions disappear if we replace them with more adequate (and thus more complex) statistical models.

File in pdf


Technical Report UTEP-CS-20-27, April 2020
Why Linear Expressions in Discounting and in Empathy: A Symmetry-Based Explanation
Supanika Leurcharusmee, Laxman Bokati, and Olga Kosheleva

To appear in Soft Computing.

People's preferences depend not only on the decision maker's immediate gain; they are also affected by the decision maker's expectation of future gains. A person's decisions are also affected by possible consequences for others. In decision theory, people's preferences are described by special quantities called utilities. In utility terms, the above phenomena mean that a person's overall utility of an action depends not only on the utility corresponding to the action's immediate consequences for this person; it also depends on utilities corresponding to future consequences and on utilities corresponding to consequences for others. These dependencies reflect the discounting of future consequences in comparison with current ones, and the empathy (or lack thereof) of the person towards others. In general, many formulas involving utility are nonlinear, even formulas describing the dependence of utility on money. However, surprisingly, for discounting and for empathy, linear formulas work very well. In this paper, we show that natural symmetry requirements can explain this linearity.

File in pdf


Technical Report UTEP-CS-20-26, April 2020
Why Class-D Audio Amplifiers Work Well: A Theoretical Explanation
Kevin Alvarez, Julio C. Urenda, and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

Most current high-quality electronic audio systems use class-D audio amplifiers (D-amps, for short), in which a signal is represented by a sequence of pulses of fixed height, pulses whose duration at any given moment of time linearly depends on the amplitude of the input signal at this moment. In this paper, we explain the efficiency of this signal representation by showing that this representation is the least vulnerable to additive noise (which affects the signal itself) and to measurement errors in measuring time.

File in pdf


Technical Report UTEP-CS-20-25, April 2020
Why Black-Scholes Equations Are Effective Beyond Their Usual Assumptions: Symmetry-Based Explanation
Warattaya Chinnakum and Sean R. Aguilar

To appear in International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems

Nobel-Prize-winning Black-Scholes equations are actively used to estimate the price of options and other financial instruments. In practice, they provide a good estimate for the price, but the problem is that their original derivation is based on many simplifying statistical assumptions which are, in general, not valid for financial time series. The fact that these equations are effective well beyond their usual assumptions leads to a natural conclusion that there must be an alternative derivation for these equations, a derivation that does not use the usual too-strong assumptions. In this paper, we provide such a derivation in which the only substantial assumption is a natural symmetry: namely, scale-invariance of the corresponding processes. Scale-invariance also allows us to describe possible generalizations of Black-Scholes equations, generalizations that we hope will lead to even more accurate estimates for the corresponding prices.

File in pdf


Technical Report UTEP-CS-20-24, March 2020
Scale-Invariance and Fuzzy Techniques Explain the Empirical Success of Inverse Distance Weighting and of Dual Inverse Distance Weighting in Geosciences
Laxman Bokati, Aaron Velasco, and Vladik Kreinovich

To appear in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020.

Once we measure the values of a physical quantity at certain spatial locations, we need to interpolate these values to estimate the value of this quantity at other locations x. In geosciences, one of the most widely used interpolation techniques is inverse distance weighting, in which we combine the available measurement results with weights inversely proportional to some power of the distance from x to the measurement location. This empirical formula works well when measurement locations are uniformly distributed, but it leads to biased estimates otherwise. To decrease this bias, researchers recently proposed a more complex dual inverse distance weighting technique. In this paper, we provide a theoretical explanation both for inverse distance weighting and for dual inverse distance weighting. Specifically, we show that if we use general fuzzy ideas to formally describe the desired properties of the interpolation procedure, then the physically natural scale-invariance requirement selects only these two distance weighting techniques.
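
A minimal Python sketch of plain inverse distance weighting, in one dimension for simplicity; the power p = 2 is a common illustrative choice, not a recommendation from the report:

    import numpy as np

    def idw(x, locations, values, p=2.0):
        """Estimate the value at x from measurements at given 1-D locations."""
        d = np.abs(locations - x)
        if np.any(d == 0):                  # x coincides with a measurement site
            return values[np.argmin(d)]
        w = 1.0 / d**p                      # weights inversely proportional to d^p
        return np.sum(w * values) / np.sum(w)

    locations = np.array([0.0, 1.0, 3.0])
    values = np.array([10.0, 12.0, 20.0])
    print(idw(2.0, locations, values))      # weighted estimate at x = 2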

File in pdf


Technical Report UTEP-CS-20-23, March 2020
Decision Making Under Interval Uncertainty: Towards (Somewhat) More Convincing Justifications for Hurwicz Optimism-Pessimism Approach
Warattaya Chinnakum, Laura Berrout Ramos, Olugbenga Iyiola, and Vladik Kreinovich

To appear in Asian Journal of Economics and Banking (AJEB)

In the ideal world, we know the exact consequences of each action. In this case, it is relatively straightforward to compare different possible actions and, as a result of this comparison, to select the best action. In real life, we only know the consequences with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. How can we compare such interval-valued alternatives? A usual way to compare such alternatives is to use the optimism-pessimism criterion developed by Nobelist Leo Hurwicz. In this approach, we maximize a weighted combination of the worst-case and the best-case gains, with the weights reflecting the decision maker's degree of optimism. There exist several justifications for this criterion; however, some of the assumptions behind these justifications are not 100% convincing. In this paper, we propose new, hopefully more convincing justifications for Hurwicz's approach.
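
In Python, the Hurwicz criterion itself is a one-liner; the alternatives and the optimism degree alpha = 0.6 below are purely illustrative:

    def hurwicz(gain_interval, alpha=0.6):
        """alpha * best-case gain + (1 - alpha) * worst-case gain."""
        lo, hi = gain_interval
        return alpha * hi + (1.0 - alpha) * lo

    alternatives = {"a": (0.0, 100.0), "b": (40.0, 50.0)}
    best = max(alternatives, key=lambda k: hurwicz(alternatives[k]))
    print(best)   # "a" here; "b" wins for optimism degrees below about 0.44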

File in pdf


Technical Report UTEP-CS-20-22, March 2020
Which Are the Correct Membership Functions? Correct "And"- and "Or"- Operations? Correct Defuzzification Procedure?
Olga Kosheleva, Vladik Kreinovich, and Shahnaz Shabazova

Even in the 1990s, when many successful examples of fuzzy control appeared all the time, many users were somewhat reluctant to use fuzzy control. One of the main reasons for this reluctance was the perceived subjective character of fuzzy techniques -- for the same natural-language rules, different experts may select somewhat different membership functions and thus get somewhat different control/recommendation strategies. In this paper, we promote the idea that this selection does not have to be subjective. We can always select the "correct" membership functions, i.e., functions for which, in previously tested cases, we got the best possible control. Similarly, we can select the "correct" "and"- and "or"-operations, the correct defuzzification procedure, etc.

File in pdf


Technical Report UTEP-CS-20-21, March 2020
Theoretical Explanation of Recent Empirically Successful Code Quality Metrics
Vladik Kreinovich, Omar A. Masmali, Hoang Phuong Nguyen, and Omar Badreddin

To appear in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII)

Millions of lines of code are written every day, and it is not practically possible to thoroughly test all this code in all possible situations. In practice, we need to be able to separate code which is more likely to contain bugs -- and which thus needs to be tested more thoroughly -- from code which is less likely to contain flaws. Several numerical characteristics -- known as code quality metrics -- have been proposed for this separation. Recently, a new efficient class of code quality metrics has been proposed, based on the idea of assigning consecutive integers to different levels of complexity and vulnerability: we assign 1 to the simplest level, 2 to the next simplest level, etc. The resulting numbers are then combined -- if needed, with appropriate weights. In this paper, we provide a theoretical explanation for the above idea.

File in pdf


Technical Report UTEP-CS-20-20, March 2020
How to Combine (Dis)Utilities of Different Aspects into a Single (Dis)Utility Value, and How This Is Related to Geometric Images of Happiness
Laxman Bokati, Hoang Phuong Nguyen, Olga Kosheleva, and Vladik Kreinovich

To appear in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII)

In many practical situations, a user needs our help in selecting the best out of a large number of alternatives. To be able to help, we need to understand the user's preferences. In decision theory, preferences are described by numerical values known as utilities. It is often not feasible to ask the user to provide utilities of all possible alternatives, so we must be able to estimate these utilities based on the utilities of different aspects of these alternatives. In this paper, we provide a general formula for combining utilities of aspects into a single utility value. The resulting formula turns out to be in good accordance with the known correspondence between geometric images and different degrees of happiness.

File in pdf


Technical Report UTEP-CS-20-19, March 2020
How to Describe Conditions Like 2-out-of-5 in Fuzzy Logic: A Neural Approach
Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

To appear in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII)

In many medical applications, we diagnose a disease and/or apply a certain remedy if, e.g., two out of five conditions are satisfied. In the fuzzy case, i.e., when we only have certain degrees of confidence that each of the n statements is satisfied, how do we estimate the degree of confidence that k out of n conditions are satisfied? In principle, we can get this estimate if we use the usual methodology of applying fuzzy techniques: we represent the desired statement in terms of "and" and "or", and use fuzzy analogues of these logical operations. The problem with this approach is that for large n, it requires too many computations. In this paper, we derive the fastest-to-compute alternative formula. In this derivation, we use ideas from neural networks.
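
To make the computational issue concrete, here is a Python sketch comparing the straightforward min-max estimate (exponentially many terms) with one known fast equivalent, the k-th largest degree; the report derives its formula via neural-network ideas, which we do not reproduce here:

    from itertools import combinations

    def k_out_of_n_naive(degrees, k):
        """'Or' (max) over all k-element subsets of the 'and' (min) of the degrees."""
        return max(min(subset) for subset in combinations(degrees, k))

    def k_out_of_n_fast(degrees, k):
        """Equivalent shortcut for the min-max combination: the k-th largest degree."""
        return sorted(degrees, reverse=True)[k - 1]

    d = [0.9, 0.2, 0.7, 0.4, 0.6]
    print(k_out_of_n_naive(d, 2), k_out_of_n_fast(d, 2))   # 0.7 0.7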

File in pdf


Technical Report UTEP-CS-20-18, March 2020
Quantum (and More General) Models of Research Collaboration
Oscar Galindo, Miroslav Svitek, and Vladik Kreinovich

Published in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 1, pp. 77-86.

In the last decades, several papers have shown that quantum techniques can be successful in describing not only events in the micro-scale physical world -- for which they were originally invented -- but also social phenomena, e.g., different economic processes. In our previous paper, we provided an explanation for this somewhat surprising success. In this paper, we extend this explanation and show that quantum (and more general) techniques can also be used to model research collaboration.

File in pdf


Technical Report UTEP-CS-20-17, March 2020
Towards a Theoretical Explanation of How Pavement Condition Index Deteriorates over Time
Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

To predict how the Pavement Condition Index will change over time, practitioners use a complex empirical formula derived in the 1980s. In this paper, we provide a possible theoretical explanation for this formula, an explanation based on general ideas of invariance. In general, the existence of a theoretical explanation makes a formula more reliable; thus, we hope that our explanation will make predictions of road quality more reliable.

File in pdf


Technical Report UTEP-CS-20-16, March 2020
New (Simplified) Derivation of Nash's Bargaining Solution
Hoang Phuong Nguyen, Laxman Bokati, and Vladik Kreinovich

To appear in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII)

According to Nobel laureate John Nash, if a group of people wants to select one of the alternatives in which all of them get a better deal than in the status quo situation, then they should select the alternative that maximizes the product of their utilities. In this paper, we provide a new (simplified) derivation of this result, a derivation which is not only simpler -- it also does not require that the preference relation between different alternatives be linear.
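
As a quick illustration, the following Python sketch (with made-up alternatives and utility values) selects the alternative that maximizes the product of the participants' utility gains over the status quo:

    # Hypothetical example: utilities of two participants for three alternatives,
    # with status quo utilities d = (1, 1).  Nash's solution maximizes the
    # product of the gains (u_i - d_i) over alternatives where everyone gains.
    alternatives = {"A": (5.0, 2.0), "B": (4.0, 4.0), "C": (2.0, 6.0)}
    status_quo = (1.0, 1.0)

    def nash_product(utilities, d):
        gains = [u - di for u, di in zip(utilities, d)]
        if any(g <= 0 for g in gains):   # must improve on the status quo
            return float("-inf")
        result = 1.0
        for g in gains:
            result *= g
        return result

    best = max(alternatives, key=lambda a: nash_product(alternatives[a], status_quo))
    print(best)  # "B": gains (3, 3), product 9, vs. 4 for "A" and 5 for "C"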

File in pdf


Technical Report UTEP-CS-20-15, March 2020
Towards Making Fuzzy Techniques More Adequate for Combining Knowledge of Several Experts
Hoang Phuong Nguyen and Vladik Kreinovich

To appear in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII)

In medical and other applications, experts often use rules with several conditions, each of which involves a quantity within the domain of expertise of a different expert. In such situations, to estimate the degree of confidence that all these conditions are satisfied, we need to combine opinions of several experts -- i.e., in fuzzy techniques, combine membership functions corresponding to different experts. In each area of expertise, different experts may have somewhat different membership functions describing the same natural-language ("fuzzy") term like small. It is desirable to present the user with all possible conclusions corresponding to all these membership functions. In general, even if, for each area of expertise, we have only a 1-parametric family characterizing different membership functions, then for rules with 3 conditions, we already have a difficult-to-interpret 3-parametric family of possible consequences. It is thus desirable to limit ourselves to the cases when the resulting family is still manageable -- e.g., is 1-parametric. In this paper, we provide a full description of all such families. Interestingly, it turns out that such families are possible only if we allow non-normalized membership functions, i.e., functions for which the maximum may be smaller than 1. We argue that this is the way to go, since normalization loses some of the information that we receive from the experts.
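
A toy Python illustration of the normalization point: if two experts describe the same term "small" by normalized membership functions with slightly different peaks, then even a simple cautious pointwise-minimum combination (used here purely for illustration, not as the paper's combination rule) is no longer normalized:

    import numpy as np

    x = np.linspace(-2, 2, 401)
    expert1 = np.maximum(0, 1 - np.abs(x))        # triangular "small", peak at 0
    expert2 = np.maximum(0, 1 - np.abs(x - 0.5))  # same term, peak shifted to 0.5

    combined = np.minimum(expert1, expert2)       # cautious pointwise combination
    print(round(combined.max(), 3))               # 0.75 < 1: not normalized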

File in pdf


Technical Report UTEP-CS-20-14, March 2020
Why Mean, Variance, Moments, Correlation, Skewness, etc. -- Invariance-Based Explanations
Olga Kosheleva, Laxman Bokati, and Vladik Kreinovich

To appear in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 2.

In principle, we can use many different characteristics of a probability distribution. However, in practice, only a few such characteristics are commonly used: mean, variance, moments, correlation, etc. Why these characteristics and not others? The fact that these characteristics have been successfully used indicates that there must be some reason for their selection. In this paper, we show that the selection of these characteristics can be explained by the fact that these characteristics are invariant with respect to natural transformations -- while other possible characteristics are not invariant.
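
For example (a standard fact, cited here only to illustrate the invariance idea), under a natural change of measuring unit and starting point, the mean and the variance transform covariantly, while the correlation coefficient does not change at all:

    E[aX+b] = a\,E[X] + b, \qquad V[aX+b] = a^2\,V[X], \qquad
    \rho(aX+b,\,cY+d) = \rho(X,Y) \quad \text{for } a, c > 0.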

File in pdf


Technical Report UTEP-CS-20-13, February 2020
Updated version UTEP-CS-20-13d, April 2020
How Quantum Cryptography and Quantum Computing Can Make Cyber-Physical Systems More Secure
Deepak Tosh, Oscar Galindo, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the System of Systems Engineering Conference SoSE'2020, Budapest, Hungary, June 2-4, 2020, pp. 313-320.

For cyber-physical systems, cyber-security is vitally important. There are many cyber-security tools that make communications secure -- e.g., communications between sensors and the computers processing the sensors' data. Most of these tools, however, are based on RSA encryption, and it is known that with quantum computing, this encryption can be broken. It is therefore desirable to use an unbreakable alternative -- quantum cryptography -- for such communications. In this paper, we discuss possible consequences of this option. We also explain how quantum computers can help even more: namely, they can be used to optimize the system's design -- in particular, to maximize its security, and to make sure that we do not waste time on communicating and processing irrelevant information.

Original version UTEP-CS-20-13 in pdf
Updated version UTEP-CS-20-13d in pdf


Technical Report UTEP-CS-20-12, February 2020
Why Squashing Functions in Multi-Layer Neural Networks
Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Olga Kosheleva, Vladik Kreinovich, and Gyorgy Eigner

Most multi-layer neural networks used in deep learning utilize rectified linear neurons. In our previous papers, we showed that if we want to use the exact same activation function for all the neurons, then the rectified linear function is indeed a reasonable choice. However, preliminary analysis shows that for some applications, it is more advantageous to use different activation functions for different neurons -- i.e., select a family of activation functions instead, and select the parameters of activation functions of different neurons during training. Specifically, this was shown for a special family of squashing functions that contain rectified linear neurons as a particular case. In this paper, we explain the empirical success of squashing functions by showing that the formulas describing this family follow from natural symmetry requirements.
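
As a rough illustration of such a family (a sketch only; not necessarily the exact squashing functions studied in the paper), one can smooth the rectified linear function into a one-parametric family that contains it as a limit case:

    import numpy as np

    def soft_relu(x, beta):
        # smooth version of max(0, x); as beta -> infinity, the values
        # tend to the rectified linear function pointwise
        return np.logaddexp(0.0, beta * x) / beta

    x = np.linspace(-2, 2, 5)
    for beta in (1.0, 5.0, 50.0):
        print(beta, np.round(soft_relu(x, beta), 3))
    # for beta = 50, the output is already close to np.maximum(0, x)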

File in pdf


Technical Report UTEP-CS-20-11, February 2020
Predictably (Boundedly) Rational: Examples of Seemingly Irrational Behavior Can Be Quantitatively Explained by Bounded Rationality
Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

Published in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 1, pp. 20-48.

Traditional economics is based on the simplifying assumption that people behave perfectly rationally, that before making any decision, a person thoroughly analyzes all possible situations. In reality, we often do not have enough time to thoroughly analyze all the available information, as a result of which we make decisions of bounded rationality -- bounded by our inability to perform a thorough analysis of the situation. So, to predict human behavior, it is desirable to study how people actually make decisions. The corresponding area of economics is known as behavioral economics. It is known that many examples of seemingly irrational behavior can be explained, on the qualitative level, by this idea of bounded rationality. In this paper, we show that in many cases, this qualitative explanation can be expanded into a quantitative one, one that enables us to explain the numerical characteristics of the corresponding behavior.

File in pdf


Technical Report UTEP-CS-20-10, February 2020
How to Gauge the Quality of a Testing Method When Ground Truth Is Known with Uncertainty
Nicolas Gray, Scott Ferson, and Vladik Kreinovich

The quality of a testing method is usually measured by using sensitivity, specificity, and/or precision. To compute each of these three characteristics, we need to know the ground truth, i.e., we need to know which objects actually have the tested property. In many applications (e.g., in medical diagnostics), the information about the objects comes from experts, and this information comes with uncertainty. In this paper, we show how to take this uncertainty into account when gauging the quality of testing methods.
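
One natural way to implement this idea -- an expected-value sketch, not necessarily the exact method of the paper -- is to replace the 0/1 ground-truth labels by the experts' probabilities and to compute expected counts:

    # Each object: (test_result, p) where p is the expert's probability that
    # the object actually has the tested property (uncertain ground truth).
    data = [(1, 0.9), (1, 0.6), (1, 0.2), (0, 0.8), (0, 0.1), (0, 0.05)]

    TP = sum(p for t, p in data if t == 1)      # expected true positives
    FP = sum(1 - p for t, p in data if t == 1)  # expected false positives
    FN = sum(p for t, p in data if t == 0)      # expected false negatives
    TN = sum(1 - p for t, p in data if t == 0)  # expected true negatives

    sensitivity = TP / (TP + FN)  # expected share of actual positives detected
    specificity = TN / (TN + FP)  # expected share of actual negatives rejected
    precision = TP / (TP + FP)    # expected share of positive results that are correct
    print(round(sensitivity, 2), round(specificity, 2), round(precision, 2))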

File in pdf


Technical Report UTEP-CS-20-09, February 2020
Fusion of Probabilistic Knowledge as Foundation for Sliced-Normal Approach
Michael Beer, Olga Kosheleva, and Vladik Kreinovich

In many practical applications, it turns out to be efficient to use Sliced-Normal multi-D distributions, i.e., distributions for which the logarithm of the probability density function (pdf) is a polynomial -- to be more precise, a sum of squares of several polynomials. This class is a natural extension of normal distributions, i.e., distributions for which the logarithm of the pdf is a quadratic polynomial.

In this paper, we provide a possible theoretical explanation for this empirical success.
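
Schematically, the Sliced-Normal assumption is that the negative logarithm of the pdf is, up to an additive constant, a sum of squares of polynomials:

    \ln f(x) = c - \sum_{j=1}^m p_j(x)^2;

the usual normal distribution corresponds to a single linear polynomial, e.g., in the 1-D case, p_1(x) = (x - \mu)/(\sigma \sqrt{2}).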

File in pdf


Technical Report UTEP-CS-20-08, February 2020
Strength of Lime Stabilized Pavement Materials: Possible Theoretical Explanation of Empirical Dependencies
Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

When building a road, it is often necessary to strengthen the underlying soil layer. This strengthening is usually done by adding lime. There are empirical formulas that describe how the resulting strength depends on the amount of added lime. In this paper, we provide a theoretical explanation for these empirical formulas.

File in pdf


Technical Report UTEP-CS-20-07, February 2020
Why Ellipsoids in Mechanical Analysis of Wood Structures
F. Niklas Schietzold, Julio Urenda, Vladik Kreinovich, Wolfgang Graf, and Michael Kaliske

Wood is a very mechanically anisotropic material. At each point of a wooden beam, both the average values and the fluctuations of the local mechanical properties corresponding to a certain direction depend, e.g., on whether this direction is longitudinal, radial, or tangential with respect to the grain orientation of the original tree. This anisotropy can be described in geometric terms, if we select a point x and form iso-correlation surfaces -- i.e., surfaces formed by points y with the same level of correlation ρ(x,y) between local changes in the vicinities of the points x and y. Empirical analysis shows that for each point x, the corresponding surfaces are well approximated by concentric homothetic ellipsoids. In this paper, we provide a theoretical explanation for this empirical fact.
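
In formulas (a schematic restatement in the notation of this abstract), the ellipsoid approximation means that the correlation depends on the displacement y - x only through a positive definite quadratic form:

    \rho(x,y) \approx F\big((y-x)^T A\,(y-x)\big),

where A is a positive definite matrix and F is a monotonic function; the level sets of such a dependence are exactly concentric homothetic ellipsoids centered at x.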

File in pdf


Technical Report UTEP-CS-20-06, February 2020
Can We Preserve Physically Meaningful "Macro" Analyticity without Requiring Physically Meaningless "Micro" Analyticity?
Olga Kosheleva and Vladik Kreinovich

Physicists working on quantum field theory actively use "macro" analyticity -- e.g., the fact that an integral of an analytical function over a large closed loop is 0 -- but they agree that "micro" analyticity -- the possibility to expand into a Taylor series -- is not physically meaningful on the micro level. Many physicists prefer physical theories with physically meaningful mathematical foundations. So, a natural question is: can we preserve physically meaningful "macro" analyticity without requiring physically meaningless "micro" analyticity? In the 1970s, an attempt to do this was made by using constructive mathematics, in which only objects generated by algorithms are allowed. This did not work out, but, as we show in this paper, the desired separation between "macro" and "micro" analyticity can be achieved if we limit ourselves to feasible algorithms.
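
Here, "macro" analyticity refers to facts like Cauchy's integral theorem: if f(z) is analytic inside and on a closed contour \gamma, then

    \oint_\gamma f(z)\,dz = 0,

while "micro" analyticity refers to the pointwise convergence of the Taylor series of f at each point.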

File in pdf


Technical Report UTEP-CS-20-05, February 2020
Several Years of Practice May Not Be As Good as Comprehensive Training: Zipf's Law Explains Why
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Many professions use certification as a way to establish that a person practicing this profession has reached a certain skill level. At first glance, it may sound like several years of practice should help a person pass the corresponding certification test, but in reality, even after several years of practice, most people are not able to pass the test, while after a few weeks of intensive training, most people pass it successfully. This sounds counterintuitive, since the overall number of problems that a person solves during several years of practice is much larger than the number of problems solved during a few weeks of intensive training. In this paper, we show that Zipf's law explains this seemingly counterintuitive phenomenon.
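
The following toy simulation (our illustration of the general idea, not the paper's exact model) shows the Zipf-law effect: when problem types follow Zipf's law p_i ~ 1/i, random practice keeps re-sampling the same frequent types, so even thousands of practice problems typically leave hundreds of the rarer types unseen -- while a comprehensive course can cover all the types systematically:

    import random

    N = 1000  # number of distinct problem types
    weights = [1.0 / i for i in range(1, N + 1)]  # Zipf's law: p_i ~ 1/i

    def distinct_types_seen(num_problems):
        # types encountered among num_problems randomly drawn practice problems
        seen = set(random.choices(range(N), weights=weights, k=num_problems))
        return len(seen)

    print(distinct_types_seen(5000))  # typically hundreds of types remain unseen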

File in pdf


Technical Report UTEP-CS-20-04, February 2020
A Mystery of Human Biological Development -- Can It Be Used to Speed up Computations?
Olga Kosheleva and Vladik Kreinovich

For many practical problems, the only known algorithms for solving them require non-feasible exponential time. To make computations feasible, we need an exponential speedup. A reasonable way to look for such possible speedup is to search for real-life phenomena where such a speedup can be observed. A natural place to look for such a speedup is to analyze the biological activities of human beings -- since we, after all, solve many complex problems that even modern super-fast computers have trouble solving. Up to now, this search was not successful -- e.g., there are people who compute much faster than others, but it turns out that their speedup is linear, not exponential. In this paper, we want to attract the researchers' attention to the fact that recently, an exponential speedup was indeed found -- namely, it turns out that the biological development of humans is, on average, exponentially faster than the biological development of such smart animals as dogs. We hope that unveiling the processes behind this unexpected speedup can help us achieve a similar speedup in computations.

File in pdf


Technical Report UTEP-CS-20-03, February 2020
Need for Simplicity and Everything Is a Matter of Degree: How Zadeh's Philosophy Is Related to Kolmogorov Complexity, Quantum Physics, and Deep Learning
Vladik Kreinovich, Olga Kosheleva, and Andres Ortiz-Munoz

Many people remember Lotfi Zadeh's mantra -- that everything is a matter of degree. This was one of the main principles behind fuzzy logic. What is somewhat less remembered is that Zadeh also used another important principle -- that there is a need for simplicity. In this paper, we show that together, these two principles can generate the main ideas behind subjects as diverse as Kolmogorov complexity, quantum physics, and deep learning. We also show that these principles can help provide a better understanding of the important notion of space-time causality.

File in pdf


Technical Report UTEP-CS-20-02, January 2020
Physical Randomness Can Help in Computations
Olga Kosheleva and Vladik Kreinovich

Can we use some so-far-unused physical phenomena to compute something that usual computers cannot? Researchers have been proposing many schemes that may lead to such computations. These schemes use different physical phenomena, ranging from quantum effects to gravity to hypothetical time machines. In this paper, we show that, in principle, there is no need to look into state-of-the-art physics to develop such a scheme: computability beyond the usual computations naturally appears if we consider such a basic notion as randomness.

File in pdf


Technical Report UTEP-CS-20-01, January 2020
Why Immediate Repetition Is Good for Short-Term Learning Results but Bad for Long-Term Learning: Explanation Based on Decision Theory
Laxman Bokati, Julio Urenda, Olga Kosheleva, and Vladik Kreinovich

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland.

It is well known that repetition enhances learning; the question is: when is a good time for this repetition? Several experiments have shown that immediate repetition of the topic leads to better performance on the resulting test than a repetition after some time. Recent experiments showed, however, that while immediate repetition leads to better results on the test, it leads to much worse performance in the long term, i.e., several years after the material has been studied. In this paper, we use decision theory to provide a possible explanation for this unexpected phenomenon.

File in pdf