University of Texas at El Paso
Computer Science Department
Abstracts of 2020 Reports


Technical Report UTEP-CS-20-128, December 2020
Students Who Took the Class Help Students Who Are Taking It: What Is the Best Arrangement?
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the 10th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2021, Kazan, Russia, March 22-28, 2021, pp. 99-104.

The more help students get, the better. It is therefore reasonable to ask students who took the class to help students who are currently taking this class. This arrangement also helps the helpers: it is known that the best way to learn the material is to teach it. An important question is: how should we pair the students to get the maximal effect? In this paper, we show that, under reasonable conditions, the best effect is achieved when we match the best performing "older" student with the worst performing "younger" one, the second best with the second worst, etc.
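The matching rule described above can be sketched in a few lines. This is our own minimal illustration (not code from the paper), with hypothetical performance scores: sort the "older" students from best to worst, the "younger" students from worst to best, and pair them up.

```python
# A minimal sketch (not from the paper) of the matching rule described above:
# pair the best-performing "older" student with the worst-performing "younger"
# one, the second best with the second worst, etc.

def pair_students(older_scores, younger_scores):
    """Return (older, younger) score pairs: best older with worst younger."""
    older = sorted(older_scores, reverse=True)   # best first
    younger = sorted(younger_scores)             # worst first
    return list(zip(older, younger))

# hypothetical scores: the best older student (90) helps the weakest
# younger student (55), and so on
pairs = pair_students([90, 75, 60], [55, 80, 70])
print(pairs)  # [(90, 55), (75, 70), (60, 80)]
```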

File in pdf


Technical Report UTEP-CS-20-127, December 2020
Just-In-Time Teaching Adds Motivation but Is Less Efficient
Vladik Kreinovich, Olga Kosheleva, and Christian Servin

Published in Proceedings of the 10th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2021, Kazan, Russia, March 22-28, 2021, pp. 111-116.

For each major, in addition to directly major-related topics, students also need to learn auxiliary topics -- e.g., math topics -- which are needed to understand the more directly major-related ones. To learn these auxiliary topics, students are required to take a prerequisite math class. The problem is that when the students take these classes, they do not fully understand how these classes are related to their major. As a result, they often lack motivation, do not study well, and this hinders their performance in the follow-up major-related classes. One way to enhance the students' motivation is to use just-in-time teaching, when each part of the auxiliary material is taught exactly when the need for this part appears in a major-related course. This idea definitely increases the students' motivation, but is this the most efficient way to increase their motivation? In this paper, we use a simple mathematical model to show that the traditional approach is more efficient -- and thus, other ways of raising students' motivation are desirable.

File in pdf


Technical Report UTEP-CS-20-126, December 2020
So How Were the Tents of Israel Placed? A Bible-Inspired Geometric Problem
Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

Published in International Mathematical Forum, 2021, Vol. 16, No. 1, pp. 11-18.

In one of the Biblical stories, prophet Balaam blesses the tents of Israel for being good. But what can be so good about the tents? A traditional Rabbinical interpretation is that the placement of the tents provided full privacy: from each entrance, one could not see what is happening at any other entrance. This motivates a natural geometric question: how exactly were these tents placed? In this paper, we provide an answer to this question.

File in pdf


Technical Report UTEP-CS-20-125, December 2020
Building Postsecondary Pathways for Latinx Students in Computing: Lessons from Hispanic-Serving Institutions
Anne-Marie Nunez, David S. Knight, and Sanga Kim

While the COVID-19 pandemic has transformed the use of technology in education and the workforce, a shortage of computer scientists continues, and computing remains one of the least diverse STEM disciplines. Efforts to diversify the computing industry often focus on the most selective postsecondary institutions, which are predominantly White. We highlight the role of Hispanic-Serving Institutions (HSIs) in graduating large numbers of STEM graduates of color, particularly Latinx students. HSIs are uniquely positioned to leverage asset-based approaches that value students' cultural background. We describe the practices educators use in the Computing Alliance for Hispanic-Serving Institutions, a network of 40 HSIs that work together to improve postsecondary educational experiences for students in computing fields. We conclude with recommendations for federal, state, and local education leaders.

File in pdf


Technical Report UTEP-CS-20-124, December 2020
Why Physical Processes Are Smooth Or Almost Smooth: A Possible Physical Explanation Based on Intuitive Ideas Behind Energy Conservation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 59, pp. 58-63.

While there are some non-smooth (and even discontinuous) processes in nature, most processes are smooth or almost smooth. This smoothness helps estimate physical quantities, but a natural question is: why are physical processes smooth or almost smooth? Are there any fundamental reasons for this ubiquitous smoothness? In this paper, we provide a possible physical explanation for this empirical smoothness: namely, we show that smoothness naturally follows from intuitive ideas behind energy conservation.

File in pdf


Technical Report UTEP-CS-20-123, December 2020
Need for Shift-Invariant Fractional Differentiation Explains the Appearance of Complex Numbers in Physics
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 59, pp. 53-57.

Complex numbers are ubiquitous in physics: they lead to a natural description of different physical processes and to efficient algorithms for solving the corresponding problems. But why is this seemingly counterintuitive mathematical construction so natural here? In this paper, we provide a possible explanation of this phenomenon: namely, we show that complex numbers appear if we take into account that some physical systems are described by derivatives of fractional order and that a physically meaningful analysis of such derivatives naturally leads to complex numbers.

File in pdf


Technical Report UTEP-CS-20-122, December 2020
Updated version UTEP-CS-20-122a, June 2021
Invariance-Based Approach: General Methods and Pavement Engineering Case Study
Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, and Olga Kosheleva

Published in International Journal of General Systems, 2021, Vol. 50, No. 6, pp. 672-702.

In many application areas such as pavement engineering, the phenomena are complex, and as a result, we do not have first-principle models describing the corresponding dependencies. Luckily, in many such areas, there is a lot of empirical data and, based on this data, many useful empirical dependencies have been found. The problem is that since many of these dependencies do not have a theoretical explanation, practitioners are often hesitant to use them: there have been many cases when an empirical formula stopped being valid when circumstances changed. To make the corresponding empirical formulas more reliable, it is therefore desirable to look for theoretical foundations of these formulas. In this paper, we show that many such dependencies can be naturally explained by using invariances. We illustrate this approach on the example of pavement engineering, but the approach is very general, and can be applied to other systems as well.

Original file UTEP-CS-20-122 in pdf
Updated file UTEP-CS-20-122a in pdf


Technical Report UTEP-CS-20-121, December 2020
Why Was Nicholson's Theory So Successful: An Explanation of a Mysterious Episode in 20th Century Atomic Physics
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 60, pp. 39-43.

In the early 1910s, John Nicholson suggested that all atoms are formed by four basic elementary particles. This theory had a spectacular match with observations: it explained, with an unbelievable accuracy of 0.1, the atomic weights of all 92 elements known at that time. Specifically, it was shown that every atomic weight can be represented, with this accuracy, as an integer combination of four basic atomic weights. However, in a few years, this theory turned out to be completely wrong: atoms consist of protons, neutrons, and electrons, not of Nicholson's particles. This mysterious episode seems to contradict the usual development of science, when an experimental confirmation means that the corresponding theory is true. In this paper, we explain this mystery by showing that, in fact, there was no experimental confirmation. Namely, we prove that any real number larger than 3.03 can be represented, with accuracy 0.1, as a linear combination of Nicholson's four basic weights. So, this past "experimental confirmation" has nothing to do with atomic weights or any experimental data at all -- it is simply an easy-to-prove general mathematical result.

File in pdf


Technical Report UTEP-CS-20-120, November 2020
Updated version UTEP-CS-20-120b, March 2021
A Natural Formalization of Changing-One's-Mind Leads to Square Root of "Not" and to Complex-Valued Fuzzy Logic
Olga Kosheleva and Vladik Kreinovich

Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 190-195

We show that a natural formalization of the process of changing one's mind leads to such seemingly non-intuitive ideas as square root of "not" and complex-valued fuzzy degrees.

Original file UTEP-CS-20-120 in pdf
Updated version UTEP-CS-20-120b in pdf


Technical Report UTEP-CS-20-119, November 2020
Yet Another Possible Explanation of Egyptian Fractions: Motivated by Fairness
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 19, pp. 919-924.

Ancient Egyptians represented fractions as sums of inverses of natural numbers, and they made sure that all these natural numbers are different. The representation as a sum of inverses makes some sense: it is known to lead to an optimal solution to the problem of dividing bread between workers, a problem often described in the Egyptian papyri. However, this does not explain why the corresponding natural numbers should all be different: some representations with the same natural number repeated several times lead to the same smallest number of cuts as the representations that the ancient Egyptians actually used. In this paper, we provide yet another possible explanation of Egyptian fractions -- based on fairness; this idea also explains why all the natural numbers should be different.

File in pdf


Technical Report UTEP-CS-20-118, November 2020
Being Active in Research Makes a Person a Better Teacher and Even Helps When Working for a Company
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 8, pp. 417-420.

At first glance, it looks like being active in research is not necessarily related to a person's success in being a teacher or being a productive company employee -- moreover, it looks like research distracts from other tasks. Somewhat surprisingly, however, in practice, the best teachers and the best employees are actually the ones who are active in research. In this paper, we provide an explanation for this seemingly counter-intuitive phenomenon.

File in pdf


Technical Report UTEP-CS-20-117, November 2020
Why Quantiles Are a Good Description of Volatility in Economics: A Pedagogical Explanation
Sean Aguilar, Vladik Kreinovich, and Uyen Pham

Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 3-6.

To make investment decisions, we need to know, for each financial instrument, not only its expected return -- but also how the actual return may deviate from its expected value. A numerical measure of such deviations is known as volatility. Originally, volatility was measured by the standard deviation from the expected price, but it turned out that this measure does not always adequately describe our perception of volatility. Empirically, it turned out that quantiles are a more adequate description of volatility. In this paper, we provide an explanation of this empirical phenomenon.

File in pdf


Technical Report UTEP-CS-20-116, November 2020
Updated version UTEP-CS-20-116a, July 2024
Why Min, Max, Opening, and Closing Stock Prices Are Empirically Most Appropriate for Predictions, and Why Their Linear Combination Provides the Best Estimate for Beta
Somsak Chanaim, Olga Kosheleva, and Vladik Kreinovich

To appear in: Vladik Kreinovich, Woraphon Yamaka, and Supanika Leurcharusmee (eds.), Data Science for Econometrics and Related Topics, Springer, Cham, Switzerland, to appear.

While we have moment-by-moment prices of each stock, we cannot use all this information to predict the future stock prices; we need to combine it into a few characteristics of the daily stock price. Empirically, it turns out that the best characteristics are the lowest daily price, the highest daily price, the opening price, and the closing price. In this paper, we provide a theoretical explanation for this empirical phenomenon. We also explain why, empirically, the best way to find the stock's beta coefficient is to consider a convex combination of the above four characteristics.

Original file UTEP-CS-20-116 in pdf
Updated version UTEP-CS-20-116a in pdf


Technical Report UTEP-CS-20-115, November 2020
Should Fighting Corruption Always Be One of the Main Pre-Requisites for Economic Help?
Sean Aguilar and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 8, pp. 407-415.

In general, corruption is bad. In many cases, it makes sense to make fighting corruption one of the main pre-requisites for getting financial help: we do not want this money to line the pockets of corrupted officials, we want to help the people. In this paper, we argue, however, that in some cases -- of over-regulated and/or oppressive regimes -- too much emphasis on fighting corruption may be counter-productive: instead of helping people, it may hurt them.

File in pdf


Technical Report UTEP-CS-20-114, November 2020
A Possible (Qualitative) Explanation of the Hierarchy Problem in Theoretical Physics
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 58, pp. 48-52.

One of the important open problems in theoretical physics is the hierarchy problem: how to explain that some physical constants are many orders of magnitude larger than others. In this paper, we provide a possible qualitative explanation for this phenomenon.

File in pdf


Technical Report UTEP-CS-20-113, November 2020
Why Strings, Why Quark Confinement: A Simple Qualitative Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 57, pp. 60-63.

In this pedagogical article, we recall the infinities problem of modern physics, and we show that the natural way to overcome this problem naturally leads to strings and to quark confinement.

File in pdf


Technical Report UTEP-CS-20-112, November 2020
Possibility to Algorithmically Check: Yet Another Reason Why Current Definitions Have Been Selected in Elementary Mathematics
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 18, pp. 881-886.

At first glance, many definitions in mathematics -- especially in elementary mathematics -- seem arbitrary. Why is 1 not considered a prime number? Why is a square considered to be a particular case of a parallelogram? (In some old textbooks, a parallelogram was defined in such a way as to exclude the square.) In his 2018 article, Art Duval explained many such definitions by a natural requirement to make the corresponding results (theorems) as simple as possible. However, elementary mathematics is not just about theorems and proofs, it is also about computations. In this paper, we show that from the computational viewpoint, it is also preferable, e.g., to view a square as a particular case of a parallelogram.

File in pdf


Technical Report UTEP-CS-20-111, November 2020
What If You Are Late on Several (Relatively Small) Tasks?
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 8, pp. 401-406.

In practice, we sometimes end up in a situation when we are late on several relatively small tasks. We cannot finish them all, so which ones should we do first? We show that in general, this is an NP-complete problem. In the typical situation when all the tasks are of approximately the same importance and require approximately the same time to finish, we can have an explicit solution to this problem. In a nutshell, the resulting (somewhat counterintuitive) recommendation is to start with things which are not yet late or only a few days late. Actually, this recommendation makes sense: on a task on which we are already 30 days late, making it 31 days late will not change much, but if a task is due today, we want to finish it today, to avoid the late penalty.
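One simple reading of the recommendation above -- for equally important, equally sized tasks -- is to order tasks by how late they already are, least late first. This is our own toy sketch, not the paper's model; task lateness values are hypothetical.

```python
# A toy sketch (ours, not the paper's model) of the recommendation above:
# among equally important, equally sized tasks, do the least-late ones first,
# since extra delay hurts a nearly-on-time task the most.

def order_tasks(days_late):
    """Return task indices ordered from least late (including not yet due,
    i.e., negative values) to most late."""
    return sorted(range(len(days_late)), key=lambda i: days_late[i])

# hypothetical tasks: due tomorrow (-1), 30 days late, due today (0), 5 days late
print(order_tasks([-1, 30, 0, 5]))  # [0, 2, 3, 1]
```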

File in pdf


Technical Report UTEP-CS-20-110, November 2020
How to Find the Dependence Based on Measurements with Unknown Accuracy: Towards a Theoretical Justification for Midpoint and Convex-Combination Interval Techniques and Their Generalizations
Somsak Chanaim and Vladik Kreinovich

Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Financial Econometrics: Bayesian Analysis, Quantum Uncertainty, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 21-26.

In practice, we often need to find regression parameters in situations when for some of the values, we have several results of measuring this same value. If we know the accuracy of each of these measurements, then we can use the usual statistical techniques to combine the measurement results into a single estimate for the corresponding value. In some cases, however, we do not know these accuracies, so what can we do? In this paper, we describe two natural approaches to solving this problem. In addition to describing general techniques, our results also provide a theoretical explanation for several semi-heuristic ideas proposed for solving an important particular case of this problem -- the case when we deal with interval uncertainty.

File in pdf


Technical Report UTEP-CS-20-109, November 2020
Why Ancient Egyptians Preferred Some Sum-of-Inverses Representations of Fractions?
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 18, pp. 859-865.

Ancient Egyptians represented a fraction as a sum of inverses of natural numbers, with the smallest possible number of terms. In our previous paper, we explained that this representation makes sense since it leads to the optimal way of solving a problem frequently mentioned in the Egyptian papyri: dividing bread between workers. However, this does not explain why ancient Egyptians preferred some representations with the same number of terms but not others. For example, to represent 2/3, they used the sum 1/2 + 1/6 but not the sum 1/3 + 1/3 with the same number of terms. In this paper, we use a more detailed analysis of the same dividing-bread problem to explain this preference. Namely, in our previous explanation, we assumed that each cut requires the same amount of time. If we take into account that in practice, each subsequent cut of the same loaf -- just like any other repetitive action -- takes a little less time, we get the desired explanation of why ancient Egyptians preferred, e.g., 1/2 + 1/6.
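Sums of distinct unit fractions of the kind discussed above can be computed mechanically. The following sketch uses the classical greedy (Fibonacci-Sylvester) method, which always yields distinct unit fractions; it is shown only to illustrate such representations, not as the ancient Egyptians' actual rule or the paper's method.

```python
from fractions import Fraction
from math import ceil

# Classical greedy (Fibonacci-Sylvester) decomposition into distinct unit
# fractions -- an illustration of sum-of-inverses representations, not the
# ancient Egyptians' actual procedure.

def egyptian(frac):
    """Decompose a fraction in (0, 1] into denominators of distinct unit fractions."""
    denominators = []
    while frac > 0:
        n = ceil(1 / frac)          # smallest n with 1/n <= frac
        denominators.append(n)
        frac -= Fraction(1, n)
    return denominators

# 2/3 = 1/2 + 1/6, the representation the Egyptians actually preferred:
print(egyptian(Fraction(2, 3)))  # [2, 6]
```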

File in pdf


Technical Report UTEP-CS-20-108, November 2020
We Need Fuzzy Techniques to Design Successful Human-Like Robots
Vladik Kreinovich, Olga Kosheleva, and Laxman Bokati

Published in: Cengiz Kahraman and Eda Bolturk (Eds.), Toward Humanoid Robots: The Role of Fuzzy Sets, Springer, Cham, Switzerland, 2021, pp. 121-131.

In this chapter, we argue that to make sure that human-like robots exhibit human-like behavior, we need to use fuzzy techniques -- and we also provide details of this usage. The chapter is intended both for researchers and practitioners who are very familiar with fuzzy techniques and also for researchers and practitioners who do not know these techniques -- but who are interested in designing human-like robots.

File in pdf


Technical Report UTEP-CS-20-107, October 2020
Data Analytics Beyond Traditional Probabilistic Approach to Uncertainty
Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 58, pp. 112-131.

Data for processing mostly comes from measurements, and measurements are never absolutely accurate: there is always the "measurement error" -- the difference between the measurement result and the actual (unknown) value of the measured quantity. In many applications, it is important to find out how these measurement errors affect the accuracy of the result of data processing. Traditional data processing techniques implicitly assume that we know the probability distributions. In many practical situations, however, we only have partial information about these distributions. In some cases, all we know is the upper bound on the absolute value of the measurement error. In other cases, data comes not from measurements but from expert estimates. In this paper, we explain how to estimate the accuracy of the results of data processing in all these situations. We tried to explain not only what methods can be used, but also why these methods have been proposed and have been successfully used. We hope that this overview will be helpful both to users solving practical problems and to researchers interested in extending and improving the existing techniques.

File in pdf


Technical Report UTEP-CS-20-106, October 2020
Coding Overhead of Mobile Apps
Yoonsik Cheon

A mobile app runs on small devices such as smartphones and tablets. Perhaps because of this, there is a common misconception that writing a mobile app is simpler than writing a desktop application. In this paper, we show that this is indeed a misconception, and that it is in fact the other way around. We performed a small experiment to measure the source code sizes of a desktop application and an equivalent mobile app written in the same language. We found that the mobile version is 19% bigger than the desktop version in terms of source lines of code, and the mobile code is much more involved and complicated, with code tangling and scattering. This coding overhead of the mobile version is mostly due to the additional requirements and constraints specific to mobile platforms, such as diversity and mobility.

File in pdf


Technical Report UTEP-CS-20-105, October 2020
White- and Black-Box Computing and Measurements under Limited Resources: Cloud, High Performance, and Quantum Computing, and Two Case Studies -- Robotic Boat and Hierarchical Covid Testing
Vladik Kreinovich, Martine Ceberio, and Olga Kosheleva

Published in Proceedings of the 2nd Information-Communication Technologies & Embedded Systems Workshop ICT&ES-2020, Mykolaiv, Ukraine, November 12, 2020

In many practical problems, it is important to take into account that our computational and measuring resources are limited. In this paper, we overview the main resource limitations for different types of computers, and we provide two case studies explaining how to best take these resource limitations into account.

File in pdf


Technical Report UTEP-CS-20-104, October 2020
Egyptian Fractions as Approximators
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 57, pp. 46-59.

In ancient Egypt, fractions were represented as the sum of inverses to natural numbers. Processing fractions in this representation is computationally complicated. Because of this complexity, traditionally, Egyptian fractions used to be considered an early inefficient approach. In our previous papers, we showed, however, that the Egyptian fractions actually provide an optimal solution to problems important for ancient Egypt -- such as the more efficient distribution of food between workers. In these papers, we assumed, for simplicity, that we know the exact amount of food needed for each worker -- and that this value must be maintained with absolute accuracy. In this paper, we show that the corresponding food distribution can become even more efficient if we make the setting more realistic by allowing "almost exact" (approximate) representations.

File in pdf


Technical Report UTEP-CS-20-103, October 2020
Why Significant Wave Height And Rogue Waves Are So Defined: A Possible Explanation
Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2021, Vol. 57, pp. 96-100.

Data analysis has shown that if we want to describe the wave pattern by a single characteristic, the best characteristic is the average height of the highest one third of the waves; this characteristic is called significant wave height. Once we know the value of this characteristic, a natural next question is: what is the highest wave that we should normally observe -- so that waves higher than this amount would be rare ("rogue")? Empirically, it has been shown that rogue waves are best defined as the ones which are at least twice as high as the significant wave height. In this paper, we provide a possible theoretical explanation for these two empirical facts.

File in pdf


Technical Report UTEP-CS-20-102, October 2020
How to Explain the Relation Between Different Empirical Covid-19 Self-Isolation Periods
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 17, pp. 811-815.

Empirical data implies that, to avoid infecting others, an asymptomatic carrier of Covid-19 should self-isolate for a period of 10 days, a patient who experiences symptoms should self-isolate for 20 days, and a person who was in contact with a Covid-19 patient should self-isolate for 14 days. In this paper, we use Laplace's Principle of Insufficient Reason to provide a simple explanation for the relation between these three self-isolation periods.

File in pdf


Technical Report UTEP-CS-20-101, October 2020
Updated version UTEP-CS-20-101a, November 2020
How to Separate Absolute and Relative Error Components: Interval Case
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in: Franco Pavese, Alistair B. Forbes, Nien Fan Zhang, and Anna G. Chunovkina (eds.), Advanced Mathematical and Computational Tools in Metrology and Testing XII, World Scientific, Singapore, 2021, pp. 390-405.

Usually, measurement errors contain both absolute and relative components. To correctly gauge the amount of measurement error for all possible values of the measured quantity, it is important to separate these two error components. For probabilistic uncertainty, this separation can be obtained by using traditional probabilistic techniques. The problem is that in many practical situations, we do not know the probability distribution; we only know the upper bound on the measurement error. In such situations of interval uncertainty, separation of absolute and relative error components is not easy. In this paper, we propose a technique for such a separation based on the maximum entropy approach, and we provide feasible algorithms -- both sequential and parallel -- for the resulting separation.

Original file UTEP-CS-20-101 in pdf
Updated version UTEP-CS-20-101a in pdf


Technical Report UTEP-CS-20-100, October 2020
Updated version UTEP-CS-20-100a, November 2020
How to Describe Measurement Errors: A Natural Generalization of the Central Limit Theorem Beyond Normal (and Other Infinitely Divisible) Distributions
Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

Published in: Franco Pavese, Alistair B. Forbes, Nien Fan Zhang, and Anna G. Chunovkina (eds.), Advanced Mathematical and Computational Tools in Metrology and Testing XII, World Scientific, Singapore, 2021, pp. 418-428.

When precise measurement instruments are designed, designers try their best to decrease the effect of the main factors leading to measurement errors. As a result of this decrease, the remaining measurement error is the joint result of a large number of relatively small independent error components. According to the Central Limit Theorem, under reasonable conditions, when the number of components increases, the resulting distribution tends to Gaussian (normal). Thus, in practice, when the number of components is large, the distribution is close to normal -- and normal distributions are indeed ubiquitous in measurements. However, in some practical situations, the distribution is different from Gaussian. How can we describe such distributions? In general, the more parameters we use, the more accurately we can describe a distribution. The class of Gaussian distributions is 2-dimensional, in the sense that each distribution from this family can be uniquely determined by 2 parameters: e.g., mean and standard deviation. Thus, when the approximation of the measurement error by a normal distribution is insufficiently accurate, a natural idea is to consider families with more parameters. What are 3-, 4-, 5-, n-dimensional limit families of this type? Researchers have considered 3-dimensional classes of distributions, which can -- under weaker assumptions -- describe similar limit cases; distributions from these families are known as infinitely divisible ones. A natural next question is to describe all possible n-dimensional families for all n. Such a description is provided in this paper.
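The Central Limit Theorem mechanism mentioned above is easy to see numerically. The following is a generic illustration (not code from the paper): summing many small independent error components, each uniform on a small interval, yields a total error whose mean is near 0 and whose spread matches the theoretical standard deviation sqrt(n * (w^2 / 12)) for width w.

```python
import random
import statistics

# Generic CLT illustration: the sum of many small independent error
# components is approximately normally distributed.

random.seed(0)  # for reproducibility

def total_error(n_components=100):
    """Sum of n small independent components, each uniform on [-0.01, 0.01]."""
    return sum(random.uniform(-0.01, 0.01) for _ in range(n_components))

errors = [total_error() for _ in range(10_000)]
# Theory: mean ~ 0, standard deviation ~ sqrt(100 * (0.02**2 / 12)) ~ 0.00577 * 10
print(statistics.mean(errors), statistics.stdev(errors))
```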

Original file UTEP-CS-20-100 in pdf
Updated version UTEP-CS-20-100a in pdf


Technical Report UTEP-CS-20-99, October 2020
Updated version UTEP-CS-20-99a, November 2020
What If We Use Almost-Linear Functions Instead of Linear Ones as a First Approximation in Interval Computations
Martine Ceberio, Olga Kosheleva, Vladik Kreinovich

Published in: Franco Pavese, Alistair B. Forbes, Nien Fan Zhang, and Anna G. Chunovkina (eds.), Advanced Mathematical and Computational Tools in Metrology and Testing XII, World Scientific, Singapore, 2021, pp. 149-166.

In many practical situations, the only information that we have about measurement errors is the upper bound on their absolute values. In such situations, the only information that we have after the measurement about the actual (unknown) value of the corresponding quantity is that this value belongs to the corresponding interval: e.g., if the measurement result is 1.0, and the upper bound is 0.1, then this interval is [1.0 - 0.1, 1.0 + 0.1] = [0.9, 1.1]. An important practical question is: what is the resulting interval uncertainty of indirect measurements? In other words, how does interval uncertainty propagate through data processing? There exist feasible algorithms for solving this problem when data processing is linear, but for quadratic data processing techniques, the problem is, in general, NP-hard. This means that (unless P=NP) we cannot have a feasible algorithm that always computes the exact range; we can only find good approximations for the desired interval. In this paper, we propose two new metrologically motivated approaches (and algorithms) for computing such approximations.
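The basic setting above can be illustrated with textbook interval arithmetic. This sketch is a generic illustration, not one of the paper's algorithms: from a measurement result 1.0 with error bound 0.1 we get the interval [0.9, 1.1], and such intervals propagate through arithmetic operations.

```python
# Textbook interval arithmetic (a generic illustration, not the paper's
# algorithms): each quantity is represented as an interval (lower, upper).

def i_add(a, b):
    """Range of x + y when x is in a and y is in b."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Range of x * y: min and max over the four endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

x = (1.0 - 0.1, 1.0 + 0.1)      # measurement 1.0, error bound 0.1 -> [0.9, 1.1]
y = (2.0 - 0.2, 2.0 + 0.2)      # measurement 2.0, error bound 0.2 -> [1.8, 2.2]
print(i_add(x, y))              # range of x + y, approximately [2.7, 3.3]
print(i_mul(x, y))              # range of x * y, approximately [1.62, 2.42]
```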

Original file UTEP-CS-20-99 in pdf
Updated version UTEP-CS-20-99a in pdf


Technical Report UTEP-CS-20-98, October 2020
Why Number of Color Difference Works Better In Detecting Melanoma Than Number of Colors: A Possible Fractal-Based Explanation
Julio Urenda, Olga Kosheleva, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 17, pp. 817-821.

At present, the best way to detect melanoma based on an image of a skin spot is to count the number of different colors in this image. A recent paper has shown that the detection can improve if instead of the number of colors, we use the difference between numbers of colors computed by using different thresholds. In this paper, we provide a possible fractal-based explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-20-97, September 2020
Rosenzweig, Equality, and Assignment
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 7, pp. 339-342.

In his seminal book "The Star of Redemption", the renowned philosopher Franz Rosenzweig illustrated his ideas by the intuitive difference between the mathematical statements A=B and B=A. Of course, from the purely mathematical viewpoint, these two statements are always equivalent, so to a person trained in mathematics -- even in simple school mathematics -- this illustration is puzzling. What we show is that from the viewpoint of common folks, there is indeed a subtle difference between how people understand these two equalities. To us, understanding this difference helped us better understand Rosenzweig's ideas. But we believe that this difference has applications reaching well beyond Rosenzweig's philosophy: namely, it makes sense to take this subtle difference into account when teaching mathematics in school.

File in pdf


Technical Report UTEP-CS-20-96, September 2020
Does Transition to Democracy Lead to Chaos: A Theorem
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 15, No. 7, pp. 765-769.

When a country transitions to democracy, at first, many political parties appear. A natural question is whether the number of such parties is feasible and reasonable -- or whether this is complete chaos. In this paper, we formulate a simplified version of this question in precise terms and show that the number of parties will be feasible -- i.e., that transition to democracy does not lead to chaos.

File in pdf


Technical Report UTEP-CS-20-95, September 2020
Creating Flutter Apps from Native Android Apps
Yoonsik Cheon and Carlos Chavez

Flutter is a development framework for building applications for mobile, web, and desktop platforms from a single codebase. Since its first official release by Google less than two years ago, it has gained so much popularity among mobile application developers that it is even being regarded as a game-changer. There are, however, millions of existing native apps in use that meet the requirements of a particular operating system by using its SDK. Thus, one natural question is whether an existing native app can be rewritten in Flutter. In this paper, we look at the technical side of this question by considering Android apps written in Java. We create a Flutter version of our existing Android app written in Java, supporting both Android and iOS by rewriting the entire app in Flutter. We share our development experience by discussing the technical issues, problems, and challenges associated with such a rewriting effort, and we describe our approach as well as the lessons we learned.

File in pdf


Technical Report UTEP-CS-20-94, September 2020
Updated version UTEP-CS-20-94b, May 2021
How to Extend Interval Arithmetic So That Inverse and Division Are Always Defined
Tahea Hossain, Jonathan Rivera, Yash Sharma, and Vladik Kreinovich

Published in Reliable Computing, 2021, Vol. 28, pp. 10-23.

In many real-life data processing situations, we only know the values of the inputs with interval uncertainty. In such situations, it is necessary to take this interval uncertainty into account when processing data. Most existing methods for dealing with interval uncertainty are based on interval arithmetic, i.e., on the formulas that describe the range of possible values of the result of an arithmetic operation when the inputs are known with interval uncertainty. For most arithmetic operations, this range is also an interval, but for division, the range is sometimes a disjoint union of two semi-infinite intervals. It is therefore desirable to extend the formulas of interval arithmetic to the case when one or both inputs are such unions. The corresponding extension is described in this paper.
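
The division case mentioned above can be sketched as follows -- an illustration based on the standard Kahan-style extended division, not necessarily the exact formulas of the paper:

```python
# Sketch of extended interval division: when the divisor [y1, y2] contains 0,
# the range of p/q is no longer a single interval -- it can be the union of
# two semi-infinite intervals. (Illustration only, not the paper's formulas.)
import math

INF = math.inf

def interval_divide(x, y):
    """Range of p/q for p in x = [x1, x2], q in y = [y1, y2], as a list of intervals."""
    x1, x2 = x
    y1, y2 = y
    if y1 > 0 or y2 < 0:                       # 0 not in y: ordinary division
        qs = [x1 / y1, x1 / y2, x2 / y1, x2 / y2]
        return [(min(qs), max(qs))]
    if x1 <= 0 <= x2:                          # 0 in both x and y: whole real line
        return [(-INF, INF)]
    if x2 < 0:                                 # x entirely negative
        if y1 == 0:
            return [(-INF, x2 / y2)]
        if y2 == 0:
            return [(x2 / y1, INF)]
        return [(-INF, x2 / y2), (x2 / y1, INF)]   # the two-piece case
    # x entirely positive: mirror of the previous case
    if y1 == 0:
        return [(x1 / y2, INF)]
    if y2 == 0:
        return [(-INF, x1 / y1)]
    return [(-INF, x1 / y1), (x1 / y2, INF)]

print(interval_divide((1.0, 2.0), (-1.0, 2.0)))  # two semi-infinite pieces
```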

Original file UTEP-CS-20-94 in pdf
Updated version UTEP-CS-20-94b in pdf


Technical Report UTEP-CS-20-93, August 2020
The Similarity Between Earth's and Mars's Core-Mantle Boundary Seems to Be Statistically Significant
Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 15, pp. 711-715.

The latest, most accurate measurements of the depth of Mars's core-mantle boundary show that the ratio between this depth and Mars's radius is the same as for the Earth -- and with the new measurements, this coincidence has become statistically significant. This coincidence seems to confirm a simple scale-invariant model in which, for planets of Earth-Mars type, this depth is proportional to the planet's radius. Of course, we need more observations to confirm this model, but the fact that, for the first time, we got a statistically significant confirmation is encouraging: it makes us believe that this coincidence is not accidental.

Original file UTEP-CS-20-93 in pdf


Technical Report UTEP-CS-20-92, August 2020
Under Limited Resources, Lottery-Based Tutoring Is the Most Efficient
Olga Kosheleva, Christian Servin, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 15, pp. 705-710.

In the ideal world, every student who needs tutoring should receive intensive one-on-one tutoring. In practice, schools' resources are limited, so students get only a portion of the needed tutoring. This would not be so bad if, e.g., half-time tutoring were half as efficient as the intensive one. However, research shows that half-time tutoring is, on average, 15 times less efficient -- and, e.g., for math tutoring, 20 times less efficient. To increase the efficiency, we propose to randomly divide the students who need tutoring into equal-size groups, and each year (or each semester) provide intensive tutoring to only one of these groups. This will drastically increase the efficiency of tutoring. Even larger efficiency can be attained if we determine, for each student, the optimal number of tutoring sessions per week -- the formulas for this determination are provided -- and distribute the resources accordingly.

Original file UTEP-CS-20-92 in pdf


Technical Report UTEP-CS-20-91, August 2020
Updated version UTEP-CS-20-91b, August 2020
How the Amount of Cracks and Potholes Grows with Time: Symmetry-Based Explanation of Empirical Dependencies
Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the International Conference on Smart Sustainable Materials and Technologies ICSSMT'2020, Madurai, India, August 12-13, 2020, American Institute of Physics (AIP) Conference Proceedings, 2020, Vol. 2297, Paper 020034.

Empirical double-exponential formulas are known that describe how the amount of cracks and potholes in a pavement grows with time. In this paper, we show that these formulas can be explained based on natural symmetries (invariances) -- such as invariance with respect to changing the measuring unit or invariance with respect to changing a starting point for measuring time.

Original file UTEP-CS-20-91 in pdf
Updated version UTEP-CS-20-91b in pdf


Technical Report UTEP-CS-20-90, August 2020
Two Runners in the Time of Social Distancing, Speedboats in the Gulf of Finland: How to Best Pass?
Julio Urenda, Olga Kosheleva, and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 7, pp. 317-323.

If two runners follow the same running path, what is the best trajectory for the faster runner to pass the slower one, taking into account that they should always maintain a prescribed social distance? If a speedboat wants to pass a slower ship following a special canal in the Gulf of Finland, what is the best trajectory? In this paper, we provide answers to both questions.

File in pdf


Technical Report UTEP-CS-20-89, August 2020
Why 3D Fragmentation Usually Leads to Cuboids: A Simple Geometric Explanation
Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 6, pp. 277-281.

It has been empirically observed that the average shape of natural fragmentation results -- such as natural rock fragments -- is a distorted cube (known as a cuboid). Recently, a complex explanation was provided for this empirical fact. In this paper, we propose a simple geometry-based physical explanation for the ubiquity of cuboid fragments.

File in pdf


Technical Report UTEP-CS-20-88, August 2020
Why Cutting Trajectories Into Small Pieces Helps to Learn Dynamical Systems Better: A Seemingly Counterintuitive Empirical Result Explained
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 13, pp. 653-658.

In general, the more information we use in machine learning, the more accurate predictions we get. However, recently, it was observed that for prediction of the behavior of dynamical systems, the opposite effect happens: when we replace the original trajectories with shorter pieces -- thus ignoring the information about the system's long-term behavior -- the accuracy of machine learning predictions actually increases. In this paper, we provide an explanation for this seemingly counterintuitive result.

File in pdf


Technical Report UTEP-CS-20-87, August 2020
Need for Diversity in Elected Decision-Making Bodies: Economics-Related Analysis
Nguyen Ngoc Thach, Olga Kosheleva, and Vladik Kreinovich

Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Soft Computing: Biomedical and Related Applications, Springer, Cham, Switzerland, 2021, pp. 227-231.

On a qualitative level, everyone understands the need to have diversity in elected decision-making bodies, so that the viewpoint of each group is properly taken into account. However, when only the usual economic criteria are used in such elections -- e.g., in the election of a company's board -- the resulting bodies often under-represent some groups (e.g., women). A frequent way to remedy this situation is to artificially enforce diversity instead of strictly following purely economic criteria. In this paper, we show that the current seeming contradiction between economics and diversity is caused by the imperfection of the economic models currently in use: in an accurate economics-related decision-making model, optimization directly implies diversity.

File in pdf


Technical Report UTEP-CS-20-86, July 2020
Grading Homeworks, Verifying Code: How Thorough Should the Feedback Be?
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Intelligent Decision Science IDS'2020, Istanbul, Turkey, August 7-8, 2020.

In the ideal world, we should assign many homeworks and give thorough feedback on each of them. However, in reality, the instructor's time is limited, so we can either assign few homeworks and give detailed feedback on all of them, or assign many homeworks but give less thorough feedback. What is the optimal thoroughness? A similar question can be raised for code verification: what is the optimal amount of feedback that should be provided to each programmer? In this paper, we provide answers to these questions.

File in pdf


Technical Report UTEP-CS-20-85, July 2020
It Is Important to Take All Available Information into Account When Making a Decision: Case of the Two Envelopes Problem
Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Intelligent Decision Science IDS'2020, Istanbul, Turkey, August 7-8, 2020.

In situations when we know the probabilities of all possible consequences, traditional decision theory recommends selecting the action that maximizes expected utility. In many practical situations, however, we only have partial information about the corresponding probabilities. In this case, for different possible probability distributions, we get different values of expected utility. In general, possible values of expected utility form an interval. One way to approach this situation is to use the optimism-pessimism approach proposed by Nobelist Leo Hurwicz. Another approach is to select one of the possible probability distributions -- e.g., the one that has the largest possible entropy. Both approaches have led to many good practical applications. Usually, we get reasonable conclusions even when we ignore some of the available information -- e.g., because this information is too vague to be easily formalized. In this paper, we show, on the example of the two envelopes problem, that ignoring available information can lead to counter-intuitive recommendations.

File in pdf


Technical Report UTEP-CS-20-84, July 2020
How to Make Sure That Robot's Behavior Is Human-Like
Vladik Kreinovich, Olga Kosheleva, and Laxman Bokati

Published in: Bin Wei (ed.), Brain and Cognitive Intelligence -- Control in Robotics, CRC Press, Boca Raton, Florida, 2022, pp. 70-80.

In many applications -- e.g., in health care -- it is desirable to make robots behave human-like. This means, in particular, that robotic control should not be optimal; it should be similar to human (suboptimal) behavior. People's decisions are based on bounded rationality: since we cannot compute an optimal solution for all possible situations, we divide situations into groups and come up with a solution appropriate for each group. What is optimal here is the division into groups. It is therefore desirable to implement a similar algorithm for robots. To help design such algorithms, we provide techniques that help optimally divide situations into groups.

File in pdf


Technical Report UTEP-CS-20-83, July 2020
Updated version UTEP-CS-20-83b, May 2021
Euclidean Distance Between Intervals Is the Only Representation-Invariant One
Olga Kosheleva and Vladik Kreinovich

Published in Reliable Computing, 2021, Vol. 28, pp. 4-9.

An interval can be represented as a point in a plane, e.g., as a point with its endpoints as coordinates. We can thus define distance between intervals as the Euclidean distance between the corresponding points. Alternatively, we can describe an interval by its center and radius, which leads to a different definition of distance. Interestingly, these two definitions lead, in effect, to the same distance -- to be more precise, these two distances differ by a multiplicative constant. In principle, we can have more general distances on the plane. In this paper, we show that only for Euclidean distance, the two representations lead to the same distance between intervals.
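
The multiplicative constant mentioned above can be checked directly. In the sketch below (our illustration), the endpoint-based distance always comes out exactly sqrt(2) times the center-radius-based one:

```python
# Sketch: the two Euclidean distances between intervals -- one from the
# endpoint representation (a, b), one from the center-radius representation
# (c, r) -- differ by the constant factor sqrt(2). (Our illustration.)
import math

def d_endpoints(i1, i2):
    (a1, b1), (a2, b2) = i1, i2
    return math.hypot(a1 - a2, b1 - b2)

def d_center_radius(i1, i2):
    (a1, b1), (a2, b2) = i1, i2
    c1, r1 = (a1 + b1) / 2, (b1 - a1) / 2
    c2, r2 = (a2 + b2) / 2, (b2 - a2) / 2
    return math.hypot(c1 - c2, r1 - r2)

i1, i2 = (0.9, 1.1), (1.5, 2.5)
print(d_endpoints(i1, i2) / d_center_radius(i1, i2))  # sqrt(2) = 1.414...
```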

Original file UTEP-CS-20-83 in pdf
Updated version UTEP-CS-20-83b in pdf


Technical Report UTEP-CS-20-82, July 2020
Revised version UTEP-CS-20-82b, August 2020
How to Decide Which Cracks Should be Repaired First: Theoretical Explanation of Empirical Formulas
Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, and Vladik Kreinovich

Published in: Lourdes Martinez-Villasenor, Oscar Herrera-Alcantara, Hiram Ponce, and Felix A. Castro-Espinoza (eds.), Advances in Soft Computing. Proceedings of the 19th Mexican International Conference on Artificial Intelligence MICAI'2020, Mexico City, Mexico, October 12-17, 2020, Springer Lecture Notes in Computer Science, Vol. 12468, pp. 402-410.

Due to stress, cracks appear in constructions: in buildings, bridges, and pavements, among other structures. In the long run, cracks need to be repaired. However, our resources are limited, so we need to decide which cracks are more dangerous. To make this decision, we need to be able to predict how different cracks will grow. There are several empirical formulas describing crack growth. In this paper, we show that by using scale invariance, we can provide a theoretical explanation for these empirical formulas. The existence of such an explanation makes us confident that the existing empirical formulas can (and should) be used in the design of the corresponding automatic decision systems.

Original file UTEP-CS-20-82 in pdf
Updated version UTEP-CS-20-82b in pdf


Technical Report UTEP-CS-20-81, July 2020
Two Pens In a Pocket Must Be Different: A Nerd-Oriented Lesson From Statistics
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 6, pp. 251-254.

Some people always carry a pen with them, so that if an idea comes to mind, they will always be able to write it down. Pens sometimes run out of ink. So, just in case, people carry two pens. The problem is that often, when one carries two identical pens, they seem to run out of ink at about the same time -- which defeats the whole purpose of carrying two pens. In this paper, we provide a simple statistics-based explanation of this phenomenon, and show that a seemingly natural idea of carrying three pens will not help. The only way to avoid the situation when all the pens stop working at about the same time is to carry pens of different types.
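
The effect described above can be seen in a small random-walk simulation (our illustration, not from the paper): with two identical pens picked at random, the second pen survives the first by only about sqrt(capacity) uses -- a tiny fraction of their joint lifetime.

```python
# Simulation sketch (our illustration): two identical pens, each good for
# CAPACITY uses; on every writing occasion we grab one of the pens at random
# (and, once one pen is empty, we are forced to use the other one).
import random

random.seed(0)
CAPACITY = 5000
ink = [CAPACITY, CAPACITY]
first_empty = None
for use in range(1, 2 * CAPACITY + 1):
    usable = [i for i in (0, 1) if ink[i] > 0]
    pen = random.choice(usable)
    ink[pen] -= 1
    if ink[pen] == 0 and first_empty is None:
        first_empty = use

# Both pens are gone after exactly 2*CAPACITY uses; the gap between the two
# failures is on the order of sqrt(CAPACITY): they die at "about the same time".
gap = 2 * CAPACITY - first_empty
print(gap, gap / (2 * CAPACITY))
```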

File in pdf


Technical Report UTEP-CS-20-80, July 2020
Why Majority Rule Does Not Work in Quantum Computing: A Pedagogical Explanation
Oscar Galindo, Olga Kosheleva, and Vladik Kreinovich

Published in: Lourdes Martinez-Villasenor, Oscar Herrera-Alcantara, Hiram Ponce, and Felix A. Castro-Espinoza (eds.), Advances in Soft Computing. Proceedings of the 19th Mexican International Conference on Artificial Intelligence MICAI'2020, Mexico City, Mexico, October 12-17, 2020, Springer Lecture Notes in Computer Science, Vol. 12468, pp. 396-401.

To increase the reliability of computation results, a natural idea is to use duplication: we let several computers independently perform the same computations and then, if their results differ, we select the majority's result. Reliability is an important issue for quantum computing as well, since in quantum physics, all the processes are probabilistic, so there is always a probability that the result will be wrong. It thus seems natural to use the same majority rule for quantum computing as well. However, it is known that for general quantum computing, this scheme does not work. In this paper, we provide a simplified explanation of this impossibility.

File in pdf


Technical Report UTEP-CS-20-79, July 2020
COVID-19 Peak Immunity Values Seem to Follow Lognormal Distribution
Julio Urenda, Olga Kosheleva, Vladik Kreinovich, and Tonghui Wang

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 12, pp. 599-606.

For the current pandemic, an important open problem is immunity: do people who had this disease become immune against further infections? In immunity studies, it is important to know how frequent different levels of immunity are, i.e., what the probability distribution of immunity levels is. Different people have different rates of immunity dynamics: for some, immunity reaches its peak level faster, for others this process is slower. Similarly, in some patients, immunity stays longer; in others, it decreases faster. In view of this, an important characteristic is peak immunity. A recent study provides some statistics on peak immunity. There is not enough data to provide a statistically guaranteed selection of a probability distribution, but we can already make some preliminary conclusions. Specifically, based on the available data, we argue that the COVID-19 peak immunity values follow a lognormal distribution.

File in pdf


Technical Report UTEP-CS-20-78, July 2020
Updated version UTEP-CS-20-78a, July 2020
Adversarial Teaching Approach to Cybersecurity: A Mathematical Model Explains Why It Works Well
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 24th International Conference on Information Visualisation IV'2020, Vienna and Melbourne, September 7-11, 2020, pp. 313-316.

Teaching cybersecurity means teaching all possible ways in which software can be attacked -- and how to fight such attacks. From the usual pedagogical viewpoint, a natural idea seems to be to teach all these ways one by one. Surprisingly, a completely different approach works even better: when the class is divided into sparring mini-teams that try their best to attack each other and defend from each other. In spite of the lack of thoroughness, this approach generates good specialists -- but why? In this paper, by analyzing a simple mathematical model of this situation, we explain why this approach works -- and, moreover, we show that it is optimal in some reasonable sense.

Original file UTEP-CS-20-78 in pdf
Updated version UTEP-CS-20-78a in pdf


Technical Report UTEP-CS-20-78b, April 2021
Updated version UTEP-CS-20-78c, July 2021
Geometric Analysis Leads to Adversarial Teaching of Cybersecurity
Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in: Boris Kovalerchuk, Kawa Nazemi, Razvan Andonie, Nuno Datia, and Ebad Banissi (eds.), "Integrating AI and Visualisation for Visual Knowledge Discovery", Springer, Cham, Switzerland, 2022, pp. 613-629.

As time goes on, our civilization becomes more and more dependent on computers and, therefore, more and more vulnerable to cyberattacks. Because of this threat, it is very important to make sure that computer science students -- tomorrow's computer professionals -- are sufficiently skilled in cybersecurity. In this paper, we analyze the need for teaching cybersecurity from the geometric viewpoint. We show that the corresponding geometric analysis leads to adversarial teaching -- an empirically effective but theoretically not yet well understood approach, in which the class is divided into sparring mini-teams that try their best to attack each other and defend from each other. Thus, our analysis provides a theoretical explanation for the empirical efficiency of this approach.

Original file UTEP-CS-20-78b in pdf
Updated version UTEP-CS-20-78c in pdf


Technical Report UTEP-CS-20-77, July 2020
The Less We Love a Woman, the More She Likes Us: Pushkin's Observation Explained
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2020, Vol. 15, No. 5, pp. 245-250.

Alexander Pushkin, the most famous Russian poet, made this observation in "Eugene Onegin", his novel in verse, best known to non-Russian readers via Tchaikovsky's opera. This observation may not be an absolute truth -- there are counterexamples -- but the fact that it is still widely cited shows that there is some truth in this statement. In this paper, we recall the usual utility-based explanation for a similar statement, and propose a new explanation, which is even more fundamental -- it is on the biological level.

File in pdf


Technical Report UTEP-CS-20-76, July 2020
Updated version UTEP-CS-20-76b, July 2020
Let Us Use Negative Examples in Regression-Type Problems Too
Jonatan Contreras, Francisco Zapata, Olga Kosheleva, Vladik Kreinovich, and Martine Ceberio

Published in Proceedings of the 24th International Conference on Information Visualisation IV'2020, Vienna and Melbourne, September 7-11, 2020, pp. 296-300.

In many practical situations, we need to reconstruct the dependence between quantities x and y based on several situations in which we know both x and y values. Such problems are known as regression problems. Usually, this reconstruction is based on positive examples, when we know y -- at least, with some accuracy. However, in addition, we often also know some examples in which we have negative information about y -- e.g., we know that y does not belong to a certain interval. In this paper, we show how such negative examples can be used to make the solution to a regression problem more accurate.

Original file UTEP-CS-20-76 in pdf
Updated version UTEP-CS-20-76b in pdf


Technical Report UTEP-CS-20-76c, April 2021
Updated version UTEP-CS-20-76d, July 2021
"Negative" Results -- When the Measured Quantity Is Outside the Sensor's Range -- Can Help Data Processing
Jonatan Contreras, Francisco Zapata, Olga Kosheleva, Vladik Kreinovich, and Martine Ceberio

Published in: Boris Kovalerchuk, Kawa Nazemi, Razvan Andonie, Nuno Datia, and Ebad Banissi (eds.), "Integrating AI and Visualisation for Visual Knowledge Discovery", Springer, Cham, Switzerland, 2022, pp. 197-211.

In many real-life situations, we know the general form of the dependence y = f(x, c1, ..., cm) between physical quantities, but the values of the parameters c1, ..., cm need to be determined experimentally, based on the results of measuring x and y. In some cases, we do not get any result of measuring y, since the actual value is outside the range of the measuring instrument. Usually, such cases are ignored. In this paper, we show that taking these cases into account can help data processing -- by improving the accuracy of our estimates of the ci and thus, the accuracy of the resulting predictions of y.

Original version UTEP-CS-20-76c in pdf
Updated version UTEP-CS-20-76d in pdf


Technical Report UTEP-CS-20-75, July 2020
Why Quadratic Log-Log Dependence Is Ubiquitous And What Next
Sean R. Aguilar, Vladik Kreinovich, and Uyen Pham

Published in Asian Journal of Economics and Banking (AJEB), 2021, Vol. 5, No. 1, pp. 25-31.

In many real-life situations ranging from financial to volcanic data, growth is described either by a power law -- which is linear in the log-log scale -- or by a quadratic dependence in the log-log scale. In this paper, we use a natural scale-invariance requirement to explain the ubiquity of such dependencies. We also explain what a reasonable choice of the next model should be if the quadratic one turns out to be not accurate enough: it turns out that under scale invariance, the next class of models consists of cubic dependencies in the log-log scale, then fourth-order dependencies, etc.

File in pdf


Technical Report UTEP-CS-20-74, July 2020
How Can Econometrics Help Fight the COVID-19 Pandemic?
Kevin Alvarez and Vladik Kreinovich

Published in Asian Journal of Economics and Banking (AJEB), 2020, Vol. 4, No. 3, pp. 29-36.

The current pandemic is difficult to model -- and thus, difficult to control. In contrast to previous epidemics, whose dynamics were smooth and well described by the existing models, the statistics of the current pandemic are highly oscillating. In this paper, we show that these oscillations can be explained if we take into account the disease's long incubation period -- as a result of which our control measures are determined by outdated data, showing the number of infected people as of two weeks earlier. To better control the pandemic, we propose to use the experience of economics, where the effects of different measures can also be observed only after some time. In the past, this led to wild oscillations of the economy, with rapid growth periods followed by devastating crises. In time, economists learned how to smooth these cycles and thus drastically decrease the corresponding negative effects. We hope that this experience can help fight the pandemic.

File in pdf


Technical Report UTEP-CS-20-73, July 2020
Updated version UTEP-CS-20-73b, August 2024
Gifted and Talented: With Others? Separately? Mathematical Analysis of the Problem
Olga Kosheleva and Vladik Kreinovich

To appear in Proceedings of the 25th Annual Conference on Information Technology Education SIGITE'24, El Paso, Texas, October 10-12, 2024.

Crudely speaking, there are two main suggestions about teaching gifted and talented students: we can move them to a separate class section, or we can mix them with other students. Both options have pluses and minuses. In this paper, we formulate this problem in precise terms, we solve the corresponding mathematical optimization problem, and we come up with a somewhat unexpected optimal solution: mixing, but with an unusual twist.

File in pdf


Technical Report UTEP-CS-20-72, June 2020
A Fully Lexicographic Extension of Min or Max Operation Cannot Be Associative
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2020, Vol. 14, No. 11, pp. 499-504.

In many applications of fuzzy logic, to estimate the degree of confidence in a statement A&B, we take the minimum min(a,b) of the expert's degrees of confidence in the two statements A and B. When a < b, an increase in b does not change this estimate, while from the commonsense viewpoint, our degree of confidence in A&B should increase. To take this commonsense idea into account, Ildar Batyrshin and colleagues proposed to extend the original order on the interval [0,1] to a lexicographic order on a larger set. This idea works for expressions of the type A&B, so can we extend it to more general expressions? In this paper, we show that such an extension, while theoretically possible, would violate another commonsense requirement -- associativity of the "and"-operation. A similar negative result is proven for lexicographic extensions of the maximum operation -- which estimates the expert's degree of confidence in a statement A\/B.

File in pdf


Technical Report UTEP-CS-20-71, June 2020
Updated version UTEP-CS-20-71b, September 2020
What Is the Optimal Annealing Schedule in Quantum Annealing
Oscar Galindo and Vladik Kreinovich

Published in the Proceedings of the IEEE Series of Symposia on Computational Intelligence SSCI'2020, Canberra, Australia, December 1-4, 2020.

In many real-life situations in engineering (and in other disciplines), we need to solve an optimization problem: we want an optimal design, we want an optimal control, etc. One of the main problems in optimization is avoiding local maxima (or minima). One of the techniques that helps solve this problem is annealing: whenever we find ourselves in a possibly local maximum, we jump out with some probability and continue the search for the true optimum. A natural way to organize such a probabilistic perturbation of the deterministic optimization is to use quantum effects. It turns out that often, quantum annealing works much better than the non-quantum one. Quantum annealing is the main technique behind the only commercially available computational devices that use quantum effects -- D-Wave computers. The efficiency of quantum annealing depends on the proper selection of the annealing schedule, i.e., the schedule that describes how the perturbations decrease with time. Empirically, it has been found that two schedules work best: the power-law and the exponential one. In this paper, we provide a theoretical explanation for these empirical successes, by proving that these two schedules are indeed optimal (in some reasonable sense).
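
For concreteness, the two schedules discussed above can be written down as follows; the parameter values here (T0, alpha, beta) are made up for illustration, not taken from the paper:

```python
# Sketch: the two empirically best annealing schedules named in the text --
# power-law decay and exponential decay of the perturbation strength.
# T0, alpha, and beta are illustrative values, not from the paper.
import math

def power_law_schedule(t, T0=1.0, alpha=1.0):
    """T(t) = T0 / (1 + t)^alpha: perturbations decrease as a power law."""
    return T0 / (1.0 + t) ** alpha

def exponential_schedule(t, T0=1.0, beta=0.1):
    """T(t) = T0 * exp(-beta * t): perturbations decrease exponentially."""
    return T0 * math.exp(-beta * t)

for t in (0, 10, 100):
    print(t, power_law_schedule(t), exponential_schedule(t))
```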

Original file UTEP-CS-20-71 in pdf
Updated version UTEP-CS-20-71b in pdf


Technical Report UTEP-CS-20-70, June 2020
Lexicographic-Type Extension of Min-Max Logic Is Not Uniquely Determined
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2020, Vol. 55, pp. 55-60.

Since in a computer, "true" is usually represented as 1 and "false" as 0, it is natural to represent intermediate degrees of confidence by numbers intermediate between 0 and 1; this is one of the main ideas behind fuzzy logic -- a technique that has led to many useful applications. In many such applications, the degree of confidence in A & B is estimated as the minimum of the degrees of confidence corresponding to A and B, and the degree of confidence in A \/ B is estimated as the maximum; for example, 0.5 \/ 0.3 = 0.5. It is intuitively OK that, e.g., 0.5 \/ 0.3 < 0.51 and, more generally, that 0.5 \/ 0.3 < 0.5 + ε for all ε > 0. However, intuitively, an additional argument in favor of a statement should increase our degree of confidence, i.e., we should have 0.5 < 0.5 \/ 0.3. To capture this intuitive idea, we need to extend the min-max logic from the interval [0,1] to a lexicographic-type order on a larger set. Such an extension has been proposed -- and successfully used in applications -- for some propositional formulas. A natural question is: can this construction be uniquely extended to all "and"-"or" formulas? In this paper, we show that, in general, such an extension is not unique.
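
The behavior that motivates the lexicographic refinement is easy to demonstrate -- a minimal illustration of the standard min/max operations themselves, not of the extension:

```python
# Minimal illustration: standard fuzzy "and" = min, "or" = max.
f_and = min
f_or = max

# The example from the text: 0.5 \/ 0.3 = 0.5.
print(f_or(0.5, 0.3))  # 0.5

# The motivating problem: a stronger additional argument (0.4 instead of 0.3)
# intuitively should increase our confidence, but max() cannot see it:
print(f_or(0.5, 0.3) == f_or(0.5, 0.4))  # True
```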

File in pdf


Technical Report UTEP-CS-20-69, June 2020
How to Train A-to-B and B-to-A Neural Networks So That the Resulting Transformations Are (Almost) Exact Inverses
Paravee Maneejuk, Torben Peters, Claus Brenner, and Vladik Kreinovich

Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 203-209.

In many practical situations, there exist several representations, each of which is convenient for some operations, and many data processing algorithms involve transforming back and forth between these representations. Many such transformations are computationally time-consuming when performed exactly. So, taking into account that input data is usually only 1-10% accurate anyway, it makes sense to replace time-consuming exact transformations with faster approximate ones. One of the natural ways to get a fast-computing approximation to a transformation is to train the corresponding neural network. The problem is that if we train A-to-B and B-to-A networks separately, the resulting approximate transformations are only approximately inverse to each other. As a result, each time we transform back and forth, we add new approximation error -- and the accumulated error may become significant. In this paper, we show how we can avoid this accumulation. Specifically, we show how to train A-to-B and B-to-A neural networks so that the resulting transformations are (almost) exact inverses.
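A minimal numerical sketch of the idea (our own toy setup, not the paper's networks or training procedure): train both directions jointly, adding cycle-consistency terms g(f(a)) ≈ a and f(g(b)) ≈ b to the usual fit losses. Here each "network" is a single-parameter linear map:

```python
import numpy as np

# Toy data: the exact transformation is b = 2a (plus tiny noise), so the
# exact inverse is a = b/2.  f(a) = w*a and g(b) = v*b are trained jointly.
rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 2.0 * a + 0.01 * rng.normal(size=200)

w, v, lr, lam = 0.1, 0.1, 0.01, 1.0
for _ in range(2000):
    # gradients of: |w*a-b|^2 + |v*b-a|^2 + lam*(|v*w*a-a|^2 + |w*v*b-b|^2)
    gw = 2*np.mean((w*a - b)*a) + lam*2*np.mean((v*w*a - a)*v*a) \
         + lam*2*np.mean((w*v*b - b)*v*b)
    gv = 2*np.mean((v*b - a)*b) + lam*2*np.mean((v*w*a - a)*w*a) \
         + lam*2*np.mean((w*v*b - b)*w*b)
    w, v = w - lr*gw, v - lr*gv

print(w, v, w * v)   # w ≈ 2, v ≈ 0.5, so w*v ≈ 1: (almost) exact inverses
```

The cycle terms (weighted by `lam`) are what keep the two learned maps consistent with each other, so repeated back-and-forth transformations do not accumulate error.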

File in pdf


Technical Report UTEP-CS-20-68, June 2020
Why LASSO, Ridge Regression, and EN: Explanation Based on Soft Computing
Woraphon Yamaka, Hamza Alkhatib, Ingo Neumann, and Vladik Kreinovich

Published in: Nguyen Ngoc Thach, Doan Thanh Ha, Nguyen Duc Trung, and Vladik Kreinovich (eds.), Prediction and Causality in Econometrics and Related Topics, Springer, Cham, 2022, pp. 123-130.

In many practical situations, observations and measurement results are consistent with many different models -- i.e., the corresponding problem is ill-posed. In such situations, a reasonable idea is to take into account that the values of the corresponding parameters should not be too large; this idea is known as regularization. Several different regularization techniques have been proposed; empirically, the most successful are the LASSO method, in which we bound the sum of the absolute values of the parameters, the ridge regression method, in which we bound the sum of their squares, and the elastic net (EN) method, in which these two approaches are combined. In this paper, we explain the empirical success of these methods by showing that these methods can be naturally derived from soft computing ideas.

File in pdf
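To make the three penalties concrete, here is a toy sketch (arbitrary data and regularization constants of our choosing; only the ridge solution has a simple closed form):

```python
import numpy as np

# Small ill-posed least-squares problem: one column is duplicated.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
X[:, 4] = X[:, 3]                       # duplicate -> infinitely many solutions
w_true = np.array([1.0, 0.0, 0.0, 2.0, 0.0])
y = X @ w_true + 0.1 * rng.normal(size=20)

def l1(w): return np.sum(np.abs(w))     # LASSO penalty: sum of absolute values
def l2(w): return np.sum(w ** 2)        # ridge penalty: sum of squares
def en(w, alpha=0.5):                   # elastic net: a convex mix of the two
    return alpha * l1(w) + (1 - alpha) * l2(w)

# Ridge: minimize |Xw - y|^2 + lam * l2(w); closed form via normal equations.
lam = 0.5
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
print(w_ridge)   # the duplicated columns 3 and 4 share the weight equally
```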


    Technical Report UTEP-CS-20-67, June 2020
    Updated version UTEP-CS-20-67b, June 2021
    How to Detect Possible Additional Outliers: Case of Interval Uncertainty
    Hani Dbouk, Steffen Schoen, Ingo Neumann, and Vladik Kreinovich

    Published in Reliable Computing, 2021, Vol. 28, pp. 100-106.

    In many practical situations, measurements are characterized by interval uncertainty -- namely, based on each measurement result, the only information that we have about the actual value of the measured quantity is that this value belongs to some interval. If several such intervals -- corresponding to measuring the same quantity -- have an empty intersection, this means that at least one of the corresponding measurement results is an outlier, caused by a malfunction of the measuring instrument. From the purely mathematical viewpoint, if the intersection is non-empty, there is no reason to be suspicious. However, from the practical viewpoint, if the intersection is too narrow -- i.e., almost empty -- then we should also be suspicious, and mark this as a possible additional outlier case. In this paper, we describe a natural way to formalize this idea, and an algorithm for detecting such additional possible outliers.
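The underlying idea can be sketched in a few lines (the threshold rule below is a simple illustration of ours, not the paper's algorithm):

```python
# Flag "suspiciously narrow" intersections of measurement intervals.
def intersection(intervals):
    lo = max(a for a, b in intervals)
    hi = min(b for a, b in intervals)
    return (lo, hi) if lo <= hi else None

def suspicious(intervals, fraction=0.1):
    """Empty, or non-empty but much narrower than the narrowest input."""
    inter = intersection(intervals)
    if inter is None:
        return True                          # a guaranteed outlier
    width = inter[1] - inter[0]
    smallest = min(b - a for a, b in intervals)
    return width < fraction * smallest       # "almost empty" intersection

print(suspicious([(0.0, 1.0), (0.95, 2.0)]))  # True: overlap is only 0.05 wide
print(suspicious([(0.0, 1.0), (0.5, 1.5)]))   # False: healthy overlap
```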

    Original file UTEP-CS-20-67 in pdf
    Updated version UTEP-CS-20-67b in pdf


    Technical Report UTEP-CS-20-66, June 2020
    Updated version UTEP-CS-20-66b, May 2021
    Which Classes of Bi-Intervals Are Closed Under Addition? Under Linear Combination? Under Other Operations?
    Olga Kosheleva, Vladik Kreinovich, and Jonatan Contreras

    Published in Reliable Computing, 2021, Vol. 28, pp. 24-35.

    In many practical situations, the uncertainty with which we know each quantity is described by an interval. Techniques for processing such interval uncertainty use the fact that the sum, difference, and product of two intervals is always an interval. In some cases, the set of all possible values of a quantity is described by a bi-interval -- i.e., by a union of two disjoint intervals. It is known that even the sum of two bi-intervals is not always a bi-interval. In this paper, we describe all the classes of bi-intervals which are closed under addition (i.e., for which the sum of bi-intervals is a bi-interval), closed under linear combination, and closed under other operations.
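The closure failure is easy to see computationally (a minimal sketch; the interval endpoints are arbitrary):

```python
# Minkowski sum of unions of intervals, merging overlapping pieces.
def add(u, v):
    pieces = sorted((a + c, b + d) for a, b in u for c, d in v)
    merged = [pieces[0]]
    for lo, hi in pieces[1:]:
        if lo <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

x = [(0, 1), (10, 11)]       # a bi-interval
y = [(0, 1), (100, 101)]     # another bi-interval
print(add(x, y))             # 4 disjoint pieces -> not a bi-interval
```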

    Original file UTEP-CS-20-66 in pdf
    Updated version UTEP-CS-20-66b in pdf


    Technical Report UTEP-CS-20-65, June 2020
    Preference for Boys Does Not Necessarily Lead to a Gender Disbalance: A Realistic Example
    Olga Kosheleva and Vladik Kreinovich

    Published in International Mathematical Forum, 2020, Vol. 15, No. 5, pp. 211-214.

    Intuitively, it seems that cultural preference for boys should lead to a gender disbalance -- more boys than girls. This disbalance is indeed often observed, and it is what many models predict. However, in this paper, we show, on a realistic example, that preference for boys does not necessarily lead to a gender disbalance: in our simplified example, boys are clearly preferred, but still there are exactly as many girls as there are boys.
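One classic illustration of such a phenomenon (our choice for illustration; the paper's own example may differ) is the stopping rule in which every family keeps having children until the first boy, then stops -- boys are clearly "preferred", yet the expected numbers of boys and girls coincide:

```python
import random

# Each family has children until the first boy; boys and girls are equally
# likely at each birth.  Despite the boy-favoring stopping rule, the
# resulting population is balanced.
random.seed(42)
boys = girls = 0
for _ in range(100_000):
    while random.random() < 0.5:     # child is a girl with probability 1/2
        girls += 1
    boys += 1                        # ... until the first boy arrives

print(girls / boys)                  # close to 1: no gender disbalance
```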

    File in pdf


    Technical Report UTEP-CS-20-64, June 2020
    Common-Sense-Based Theoretical Explanation for an Empirical Formula Estimating Road Quality
    Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

    Published in International Mathematical Forum, 2020, Vol. 15, No. 5, pp. 207-210.

    The quality of a road is usually gauged by a group of trained raters; the resulting numerical value is known as the Present Serviceability Index (PSI). There are, however, two problems with this approach. First, while it is practical to use trained raters to gauge the quality of major highways, there are also numerous not-so-major roads, and there are not enough trained raters to gauge the quality of all of them. Second, even for skilled raters, their estimates are somewhat subjective: different groups of raters may estimate the quality of the same road segment somewhat differently. Because of these two problems, it is desirable to be able to estimate PSI based on objective measurable characteristics. There exists a formula for such estimation recommended by the current standards. Its limitation is that this formula is purely empirical. In this paper, we provide a common-sense-based theoretical explanation for this formula.

    File in pdf


    Technical Report UTEP-CS-20-63, June 2020
    Healthy Lifestyle Decreases the Risk of Alzheimer Disease: A Possible Partial Explanation of an Empirical Dependence
    Olga Kosheleva and Vladik Kreinovich

    Published in International Mathematical Forum, 2020, Vol. 15, No. 5, pp. 201-205.

    A recent paper showed that for people who follow all five healthy lifestyle recommendations, the risk of Alzheimer disease is only 40% of the risk for those who do not follow any of these recommendations, and that for people who follow two or three of these recommendations, the risk is 63% of the non-followers' risk. In this paper, we show that the relation between the two numbers -- namely, that 0.40 is the square of 0.63 -- can be naturally explained by a simple model.
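The numerical relation itself is easy to check; under a multiplicative (independent-factor) model, following all five recommendations acts like applying the "about half of them" risk factor twice:

```python
# 0.63 squared is 0.3969, i.e., 0.40 after rounding to two digits --
# the relation between the two empirically observed relative risks.
half_risk, full_risk = 0.63, 0.40
print(half_risk ** 2)   # 0.3969 ≈ 0.40
```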

    File in pdf


    Technical Report UTEP-CS-20-62, June 2020
    Updated version UTEP-CS-20-62b, May 2021
    Realistic Intervals of Degrees of Confidence
    Olga Kosheleva and Vladik Kreinovich

    Published in Reliable Computing, 2021, Vol. 28, pp. 36-42.

    One of the applications of intervals is in describing experts' degrees of certainty in their statements. In this application, not all intervals are realistically possible. To describe all realistically possible degrees, we end up with a mathematical question of describing all topologically closed classes of intervals which are closed under the appropriate minimum and maximum operations. In this paper, we provide a full description of all such classes.

    Original file UTEP-CS-20-62 in pdf
    Updated version UTEP-CS-20-62b in pdf


    Technical Report UTEP-CS-20-61, June 2020
    Decision Making Under Interval Uncertainty Revisited
    Olga Kosheleva, Vladik Kreinovich, and Uyen Pham

    Published in Asian Journal of Economics and Banking, 2021, Vol. 5, No. 1, pp. 79-85.

    In many real-life situations, we do not know the exact values of the expected gain corresponding to different possible actions; we only have lower and upper bounds on these gains -- i.e., in effect, intervals of possible gain values. How can we make decisions under such interval uncertainty? In this paper, we show that natural requirements lead to a 2-parametric family of possible decision-making strategies.
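For intuition, here is the classic Hurwicz optimism-pessimism criterion, the best-known one-parameter instance of such interval-based decision strategies (the paper derives a two-parameter family; the example actions below are made up):

```python
# Hurwicz criterion: score an interval of possible gains by a weighted
# combination of its best and worst cases; alpha is the degree of optimism.
def hurwicz(gain_interval, alpha):
    lo, hi = gain_interval
    return alpha * hi + (1 - alpha) * lo

actions = {"risky": (0.0, 10.0), "safe": (4.0, 5.0)}
for alpha in (0.2, 0.8):
    best = max(actions, key=lambda k: hurwicz(actions[k], alpha))
    print(alpha, best)   # pessimists pick "safe", optimists pick "risky"
```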

    File in pdf


    Technical Report UTEP-CS-20-60, June 2020
    Are There Traces of Megacomputing in Our Universe?
    Olga Kosheleva and Vladik Kreinovich

    Published in LINKs, 2021, Special Issue 1, pp. 23-25; detailed version to appear in: Andrew Adamatzky (ed.), Unconventional Computing, Arts, Philosophy, World Scientific.

    The recent successes of quantum computing encouraged many researchers to search for other unconventional physical phenomena that could potentially speed up computations. Several promising schemes have been proposed that will -- hopefully -- lead to faster computations in the future. Some of these schemes -- similarly to quantum computing -- involve using events from the micro-world; others involve using large-scale phenomena. If some civilization used the micro-world for computations, this would be difficult for us to notice; but if it used mega-scale effects, maybe we can notice the resulting phenomena. In this paper, we analyze what possible traces such megacomputing can leave -- and come up with rather surprising conclusions.

    File in pdf


    Technical Report UTEP-CS-20-59, June 2020
    How to Detect Future Einsteins: Towards Systems Approach
    Olga Kosheleva and Vladik Kreinovich

    Published in Exceptional Children: Education and Treatment, 2020, Vol. 2, No. 3, pp. 267-274.

    Talents are rare. It is therefore important to detect and nurture future talents as early as possible. In many disciplines, this is already being done -- via gifted and talented programs, Olympiads, and other ways to select kids with unusually high achievements. However, the current approach is not perfect: some of the kids are selected simply because they are early bloomers, and they do not grow into unusually successful researchers; on the other hand, many of those who later become very successful are not selected, since they are late bloomers. To avoid these problems, we propose to use a systems approach: to find a general formula for the students' growth rate, a formula that would predict the student's future achievements based on their current and previous achievement levels, and then to select students based on the formula's prediction of their future success.

    File in pdf


    Technical Report UTEP-CS-20-58, June 2020
    Natural Invariance Explains Empirical Success of Specific Membership Functions, Hedge Operations, and Negation Operations
    Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Gyorgy Eigner, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 443-456.

    Empirical studies have shown that in many practical problems, out of all symmetric membership functions, special distending functions work best, and out of all hedge operations and negation operations, fractional linear ones work the best. In this paper, we show that these empirical successes can be explained by natural invariance requirements.

    File in pdf


    Technical Report UTEP-CS-20-57, June 2020
    How Mathematics and Computing Can Help Fight the Pandemic: Two Pedagogical Examples
    Julio Urenda, Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 439-442.

    With the 2020 pandemic came unexpected mathematical and computational problems. In this paper, we provide two examples of such problems -- examples that we present in simplified pedagogical form. The problems are related to the need for social distancing and to the need for fast testing. We hope that these examples will help students better understand the importance of mathematical models.

    File in pdf


    Technical Report UTEP-CS-20-56, June 2020
    Updated version UTEP-CS-20-56b, May 2021
    Approximate Version of Interval Computation Is Still NP-Hard
    Vladik Kreinovich and Olga Kosheleva

    Published in Reliable Computing, 2021, Vol. 28, pp. 43-48.

    It is known that, in general, the problem of computing the range of a given polynomial on given intervals is NP-hard. For some NP-hard optimization problems, the approximate version -- e.g., if we want to find the value differing from the maximum by no more than a factor of 2 -- becomes feasible. Thus, a natural question is: what if instead of computing the exact range, we want to compute the enclosure which is, e.g., no more than twice wider than the actual range? In this paper, we show that this approximate version is still NP-hard, whether we want it to be twice wider or k times wider, for any k.
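Why enclosures get wide is easy to see even for straightforward interval arithmetic (a standard textbook example, not the paper's construction): for f(x) = x − x·x on [0, 1], the true range is [0, 0.25], but evaluating the expression interval-wise gives an enclosure that is 8 times wider:

```python
# Naive interval arithmetic: each operation is applied endpoint-wise.
def i_mul(x, y):
    p = [a * b for a in x for b in y]
    return (min(p), max(p))

def i_sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

x = (0.0, 1.0)
enclosure = i_sub(x, i_mul(x, x))     # interval version of x - x*x
print(enclosure)                      # (-1.0, 1.0); true range is [0, 0.25]
```

The enclosure is valid but far from tight, because the two occurrences of x are treated as independent; the paper shows that even guaranteeing a bounded overestimation factor is NP-hard.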

    Original file UTEP-CS-20-56 in pdf
    Updated version UTEP-CS-20-56b in pdf


    Technical Report UTEP-CS-20-55, May 2020
    Online Teaching -- Systems Approach: Questions and Answers
    Olga Kosheleva and Vladik Kreinovich

    Published in Mathematical Structures and Modeling, 2020, Vol. 55, pp. 127-133.

    At a recent International Forum on Teacher Education (Kazan, Russia, May 27-29, 2020), special sessions were devoted to questions related to online teaching -- in view of the recent forced world-wide transition to online-only education. This article summarizes, in a systematic way, issues discussed at these sessions.

    File in pdf


    Technical Report UTEP-CS-20-54, May 2020
    Advice to New Instructors: Systems Approach
    Olga Kosheleva and Vladik Kreinovich

    Published in Mathematical Structures and Modeling, 2020, Vol. 55, pp. 123-126.

    A recent paper provided useful systems-based advice to new school teachers. In this paper, we somewhat modify this advice so that it can be applied to new instructors at all levels, including new instructors at the university level.

    File in pdf


    Technical Report UTEP-CS-20-53, May 2020
    Neural Networks
    Vladik Kreinovich

    Published in: S. Daya Sagar, Qiuming Cheng, Jennifer McKinley, and Frits Agterberg (eds.), Encyclopedia of Mathematical Geosciences, Springer, Cham, Switzerland, 2021.

    "Neural network" is a general term for machine learning tools that emulate how neurons work in our brains.

    Ideally, these tools do what we scientists are supposed to do: we feed them examples of the observed system's behavior, and hopefully, based on these examples, the tool will predict the future behavior of similar systems. Sometimes they do predict -- but in many other cases, the situation is not so simple.

    The goal of this entry is to explain what these tools can and cannot do -- without going into too many technical details.

    File in pdf


    Technical Report UTEP-CS-20-52, May 2020
    Updated version UTEP-CS-20-52b, November 2022
    Updated version UTEP-CS-20-52c, December 2022
    How Measurement-Related Ideas Can Help Us Use Expert Knowledge When Making Decisions: Three Case Studies
    Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, and Vladik Kreinovich

    Published in: Chiranjibe Jana, Madhumangal Pal, G. Muhiuddin, and Peide Liu (eds.), "Fuzzy Optimization, Decision-making and Operations Research -- Theory and Applications", Springer, Cham, Switzerland, 2023, pp. 51-72.

    Ultimately, all our knowledge about the world comes from observations and measurements. An important part of this knowledge comes directly from observations and measurements. For example, when a person becomes sick, we can measure this person's body temperature, blood pressure, etc. -- and thus, usually get a good understanding of the problem. In addition, a significant part of our knowledge comes from experts who -- inspired by previous observations and measurements -- supplement the measurement results with their estimates. For example, a skilled medical doctor can supplement the measurement results with his/her experience-based intuition. Measurements have existed for several millennia, and many effective techniques have been developed for processing measurement results. In contrast, processing expert opinions is a reasonably new field, with many open problems. A natural idea is thus to see if measurement-related ideas can help us use expert knowledge as well. In this paper, we provide three case studies where such help turned out to be possible.

    Original file UTEP-CS-20-52 in pdf
    Updated version UTEP-CS-20-52b in pdf
    Updated version UTEP-CS-20-52c in pdf


    Technical Report UTEP-CS-20-51, May 2020
    Formal Concept Analysis Techniques Can Help in Intelligent Control, Deep Learning, etc.
    Vladik Kreinovich

    Published in Proceedings of the 15th International Conference on Concept Lattices and Their Applications CLA'2020, Tallinn, Estonia, June 29 - July 1, 2020.

    In this paper, we show that formal concept analysis is a particular case of a more general problem that includes deriving rules for intelligent control, finding appropriate properties for deep learning algorithms, etc. Because of this, we believe that formal concept analysis techniques can be (and need to be) extended to these application areas as well. To show that such an extension is possible, we explain how these techniques can be applied to intelligent control.

    File in pdf


    Technical Report UTEP-CS-20-50, May 2020
    Why Most Empirical Distributions Are Few-Modal
    Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

    Published in: Nguyen Ngoc Thach, Doan Thanh Ha, Nguyen Duc Trung, and Vladik Kreinovich (eds.), Prediction and Causality in Econometrics and Related Topics, Springer, Cham, Switzerland, 2022, pp. 89-96.

    In principle, any non-negative function can serve as a probability density function -- provided that it adds up to 1. All kinds of processes are possible, so it seems reasonable to expect that observed probability density functions are random with respect to some appropriate probability measure on the set of all such functions -- and for all such measures, similarly to the simplest case of random walk, almost all functions have infinitely many local maxima and minima. However, in practice, most empirical distributions have only a few local maxima and minima -- often one (unimodal distribution), sometimes two (bimodal), and, in general, they are few-modal. From this viewpoint, econometrics is no exception: empirical distributions of economics-related quantities are also usually few-modal. In this paper, we provide a theoretical explanation for this empirical fact.

    File in pdf


    Technical Report UTEP-CS-20-49, May 2020
    How to Estimate the Stiffness of the Multi-Layer Road Based on Properties of Layers: Symmetry-Based Explanation for Odemark's Equation
    Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

    Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Soft Computing: Biomedical and Related Applications, Springer, Cham, Switzerland, 2021, pp. 219-225.

    When we design a road, we would like to check that the current design provides the pavement with sufficient stiffness to withstand traffic loads and climatic conditions. For this purpose, we need to estimate the stiffness of the road based on stiffness and thickness of its different layers. There exists a semi-empirical formula for this estimation. In this paper, we show that this formula can be explained by natural scale-invariance requirements.
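For reference, the semi-empirical formula in question is usually quoted in the pavement-engineering literature in the following standard form (notation is ours): a layer of thickness h_1 and stiffness modulus E_1 is replaced by an equivalent thickness h_e of the reference material with modulus E_2:

```latex
% Odemark's equivalent-thickness transformation (standard textbook form)
h_e = h_1 \left( \frac{E_1}{E_2} \right)^{1/3}
```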

    File in pdf


    Technical Report UTEP-CS-20-48, May 2020
    Why It Is Sufficient to Have Real-Valued Amplitudes in Quantum Computing
    Isaac Bautista, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

    Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Soft Computing: Biomedical and Related Applications, Springer, Cham, Switzerland, 2021, pp. 131-136.

    In the last decades, a lot of attention has been placed on quantum algorithms -- algorithms that will run on future quantum computers. In principle, quantum systems can use any complex-valued amplitudes. However, in practice, quantum algorithms only use real-valued amplitudes. In this paper, we provide a simple explanation for this empirical fact.

    File in pdf


    Technical Report UTEP-CS-20-47, May 2020
    Why Some Power Laws Are Possible and Some Are Not
    Edgar Daniel Rodriguez Velasquez, Vladik Kreinovich, Olga Kosheleva, and Hoang Phuong Nguyen

    Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Soft Computing: Biomedical and Related Applications, Springer, Cham, Switzerland, 2021, pp. 213-218.

    Many dependencies between quantities are described by power laws, in which y is proportional to x raised to some power a. In some application areas, in different situations, we observe all possible pairs (A,a) of the coefficient of proportionality A and of the exponent a. In other application areas, however, not all combinations (A,a) are possible: once we fix the coefficient A, it uniquely determines the exponent a. In such cases, the dependence of a on A is usually described by an empirical logarithmic formula. In this paper, we show that natural scale-invariance ideas lead to a theoretical explanation for this empirical formula.
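A quick sketch of how the pair (A, a) is read off from data in practice, using the fact that log y = log A + a log x (synthetic noise-free data, for illustration only):

```python
import numpy as np

# Fit a power law y = A * x**a by a straight line in log-log coordinates.
A, a = 3.0, 1.7
x = np.linspace(1.0, 10.0, 50)
y = A * x ** a

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(slope, np.exp(intercept))   # recovers a = 1.7 and A = 3.0
```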

    File in pdf


    Technical Report UTEP-CS-20-46, May 2020
    Optimization under Fuzzy Constraints: Need to Go Beyond Bellman-Zadeh Approach and How It Is Related to Skewed Distributions
    Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

    Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Soft Computing: Biomedical and Related Applications, Springer, Cham, Switzerland, 2021, pp. 175-182.

    In many practical situations, we need to optimize the objective function under fuzzy constraints. Formulas for such optimization have been known since the 1970s paper by Richard Bellman and Lotfi Zadeh, but these formulas have a limitation: small changes in the corresponding degrees can lead to a drastic change in the resulting selection. In this paper, we propose a natural modification of this formula, a modification that no longer has this limitation. Interestingly, this formula turns out to be related to formulas for skewed (asymmetric) generalizations of the normal distribution.
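The Bellman-Zadeh recipe itself fits in a few lines (the membership functions below are toy choices of ours; the paper's modification is not shown):

```python
import numpy as np

# Bellman-Zadeh: pick the alternative with the largest value of
# min(membership in "good objective", membership in "feasible").
x = np.linspace(0.0, 1.0, 101)
mu_objective  = x                     # "the larger x, the better"
mu_constraint = 1.0 - x ** 2          # "x should not be too large"

scores = np.minimum(mu_objective, mu_constraint)
x_best = x[np.argmax(scores)]
print(x_best)   # close to the exact optimum (sqrt(5) - 1)/2 ≈ 0.618
```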

    File in pdf


    Technical Report UTEP-CS-20-45, May 2020
    Absence of Remotely Triggered Large Earthquakes: A Geometric Explanation
    Laxman Bokati, Aaron Velasco, and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 37-41.

    It is known that seismic waves from a large earthquake can trigger earthquakes in distant locations. Some of the triggered earthquakes are strong themselves. Interestingly, strong triggered earthquakes only happen within a reasonably small distance (less than 1000 km) from the original earthquake. Even catastrophic earthquakes do not trigger any strong earthquakes beyond this distance. In this paper, we provide a possible geometric explanation for this phenomenon.

    File in pdf


    Technical Report UTEP-CS-20-44, May 2020
    How to Efficiently Store Intermediate Results in Quantum Computing: Theoretical Explanation of the Current Algorithm
    Oscar Galindo, Olga Kosheleva, and Vladik Kreinovich

    Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 51-64.

    In complex time-consuming computations, we rarely have uninterrupted access to a high performance computer: usually, in the process of computation, some interruptions happen, so we need to store intermediate results until computations resume. To decrease the probability of a mistake, it is often necessary to run several identical computations in parallel, in which case several identical intermediate results need to be stored. In particular, for quantum computing, we need to store several independent identical copies of the corresponding qubits -- quantum versions of bits. Storing qubit states is not easy, but it is possible to compress the corresponding multi-qubit states: for example, it is possible to store the resulting 3-qubit state by using only two qubits. In principle, there are many different ways to store the state of 3 independent identical qubits by using two qubits. In this paper, we show that the current algorithm for such storage is uniquely determined by the natural symmetry requirements.

    File in pdf


    Technical Report UTEP-CS-20-43, May 2020
    Economics of Reciprocity and Temptation
    Laxman Bokati, Olga Kosheleva, Vladik Kreinovich, and Nguyen Ngoc Thach

    Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 31-38.

    Behavioral economics has shown that in many situations, people's behavior differs from what is predicted by simple traditional utility-maximization economic models. It is therefore desirable to be able to accurately describe people's actual behavior. In some cases, the difference from the traditional models is caused by bounded rationality -- our limited ability to process information and to come up with truly optimal solutions. In such cases, predicting people's behavior is difficult. In other cases, however, people actually optimize -- but the actual expression for utility is more complicated than in the traditional models. In such cases, it is, in principle, possible to predict people's behavior. In this paper, we show that two phenomena -- reciprocity and temptation -- can be explained by optimizing a complex utility expression. We hope that this explanation will eventually lead to accurate prediction of these phenomena.

    File in pdf


    Technical Report UTEP-CS-20-42, May 2020
    Updated version UTEP-CS-20-42b, November 2020
    Towards Fast and Understandable Computations: Which "And"- and "Or"-Operations Can Be Represented by the Fastest (i.e., 1-Layer) Neural Networks? Which Activation Functions Allow Such Representations?
    Kevin Alvarez, Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Gyorgy Eigner, and Vladik Kreinovich

    Published in Acta Polytechnica Hungarica, 2021, Vol. 18, No. 1, pp. 27-45.

    We want computations to be fast, and we want them to be understandable. As we show, the need for computations to be fast naturally leads to neural networks, with 1-layer networks being the fastest, and the need to be understandable naturally leads to fuzzy logic and to the corresponding "and"- and "or"-operations. Since we want our computations to be both fast and understandable, a natural question is: which "and"- and "or"-operations of fuzzy logic can be represented by the fastest (i.e., 1-layer) neural network? And a related question is: which activation functions allow such a representation? In this paper, we provide an answer to both questions: the only "and"- and "or"-operations that can be thus represented are max(0, a + b − 1) and min(a + b, 1), and the only activation functions allowing such a representation are equivalent to the rectified linear function -- the one used in deep learning. This result provides an additional explanation of why rectified linear neurons are so successful. We also show that with full 2-layer networks, we can compute practically any "and"- and "or"-operation.
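The two representable operations can be checked directly: max(0, a + b − 1) is a single rectified-linear neuron, and min(a + b, 1) is one such neuron followed by an affine output transformation, since min(a + b, 1) = 1 − max(0, 1 − a − b):

```python
def relu(z):
    return max(0.0, z)

def and_(a, b):
    return relu(a + b - 1)        # Lukasiewicz "and": max(0, a + b - 1)

def or_(a, b):
    return 1 - relu(1 - a - b)    # Lukasiewicz "or":  min(a + b, 1)

print(and_(0.7, 0.8), or_(0.7, 0.8))
```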

    Original file UTEP-CS-20-42 in pdf
    Updated version UTEP-CS-20-42b in pdf


    Technical Report UTEP-CS-20-41, May 2020
    How the Proportion of People Who Agree to Perform a Task Depends on the Stimulus: A Theoretical Explanation of the Empirical Formula
    Laxman Bokati, Vladik Kreinovich, and Doan Thanh Ha

    Published in: Nguyen Ngoc Thach, Doan Thanh Ha, Nguyen Duc Trung, and Vladik Kreinovich (eds.), Prediction and Causality in Econometrics and Related Topics, Springer, Cham, Switzerland, 2022, pp. 22-27.

    For each task, the larger the stimulus, the larger the proportion of people who agree to perform this task. In many economic situations, it is important to know how much stimulus we need to offer so that a sufficient proportion of the people will agree to perform the needed task. There is an empirical formula describing how this proportion increases as we increase the amount of stimulus. However, this empirical formula lacks a convincing theoretical explanation, as a result of which practitioners are somewhat reluctant to use it. In this paper, we provide a theoretical explanation for this empirical formula, thus making it more reliable -- and hence, more usable.

    File in pdf


    Technical Report UTEP-CS-20-40, May 2020
    Reward for Good Performance Works Better Than Punishment for Mistakes: Economic Explanation
    Olga Kosheleva, Julio Urenda, and Vladik Kreinovich

    Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 121-126.

    How should we stimulate people to make them perform better? How should we stimulate students to make them study better? Many experiments have shown that reward for good performance works better than punishment for mistakes. In this paper, we provide a possible theoretical explanation for this empirical fact.

    File in pdf


    Technical Report UTEP-CS-20-39, May 2020
    Commonsense Explanations of Sparsity, Zipf Law, and Nash's Bargaining Solution
    Olga Kosheleva, Vladik Kreinovich, and Kittawit Autchariyapanikul

    Published in: Nguyen Ngoc Thach, Doan Thanh Ha, Nguyen Duc Trung, and Vladik Kreinovich (eds.), Prediction and Causality in Econometrics and Related Topics, Springer, Cham, Switzerland, 2022, pp. 67-74.

    As econometric models become more and more accurate and more and more mathematically complex, they also become less and less intuitively clear and convincing. To make these models more convincing, it is desirable to supplement the corresponding mathematics with commonsense explanations. In this paper, we provide such explanation for three economics-related concepts: sparsity (as in LASSO), Zipf's Law, and Nash's bargaining solution.

    File in pdf


    Technical Report UTEP-CS-20-38, April 2020
    A Recent Result about Random Metrics Explains Why All of Us Have Similar Learning Potential
    Christian Servin, Olga Kosheleva, and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 129-132.

    In the same class, after the same lesson, the amount of learned material often differs drastically, by a factor of ten. Does this mean that people have that different learning abilities? Not really: experiments show that among different students, learning abilities differ by no more than a factor of two. This fact has been successfully used in designing innovative teaching techniques, techniques that help students realize their full learning potential. In this paper, we deal with a different question: how to explain the above experimental result. It turns out that this result about learning abilities -- which are, due to genetics, randomly distributed among the human population -- can be naturally explained by a recent mathematical result about random metrics.

    File in pdf


    Technical Report UTEP-CS-20-37, April 2020
    How to Explain the Anchoring Formula in Behavioral Economics
    Laxman Bokati, Vladik Kreinovich, and Chon Van Le

    Published in: Nguyen Ngoc Thach, Doan Thanh Ha, Nguyen Duc Trung, and Vladik Kreinovich (eds.), Prediction and Causality in Econometrics and Related Topics, Springer, Cham, Switzerland, 2022, pp. 28-34.

    According to traditional economics, the price that a person is willing to pay for an item should be uniquely determined by the value that this person will get from this item; it should not depend, e.g., on the asking price proposed by the seller. In reality, the price that a person is willing to pay does depend on the asking price; this is known as the anchoring effect. In this paper, we provide a natural justification for the empirical formula that describes this effect.

    File in pdf


    Technical Report UTEP-CS-20-36, April 2020
    A "Fuzzy" Like Button Can Decrease Echo Chamber Effect
    Olga Kosheleva and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 57-61.

    One of the big problems of US political life is the echo chamber effect -- in spite of the abundance of materials on the web, many people only read materials confirming their own opinions. The resulting polarization often deadlocks the political situation and prevents politicians from reaching the compromises needed to make necessary changes. In this paper, we show, on a simplified model, that the echo chamber effect can be decreased if we simply replace the currently prevalent binary (yes-no) Like button on webpages with a more gradual ("fuzzy") one -- a button that captures the degree to which the user likes the material.

    File in pdf


    Technical Report UTEP-CS-20-35, April 2020
    Why Geometric Progression in Selecting the LASSO Parameter: A Theoretical Explanation
    William Kubin, Yi Xie, Laxman Bokati, Vladik Kreinovich, and Kittawit Autchariyapanitkul

    Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 195-202.

    In situations when we know which inputs are relevant, the least squares method is often the best way to solve linear regression problems. However, in many practical situations, we do not know beforehand which inputs are relevant and which are not. In such situations, a 1-parameter modification of the least squares method known as LASSO leads to more adequate results. To use LASSO, we need to determine the value of the LASSO parameter that best fits the given data. In practice, this parameter is determined by trying all the values from some discrete set. It has been empirically shown that this selection works the best if we try values from a geometric progression. In this paper, we provide a theoretical explanation for this empirical fact.
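    The selection procedure described above can be sketched as follows. This is an illustrative sketch, not the paper's own code: the tiny coordinate-descent LASSO solver, the range of candidate parameters, and the synthetic data are all assumptions made for the example; only the use of a geometric progression of candidate values comes from the abstract.

    ```python
    import numpy as np

    def lasso_cd(X, y, lam, n_iter=200):
        """Minimal coordinate-descent LASSO solver (illustrative only)."""
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(n_iter):
            for j in range(p):
                # residual with feature j's own contribution removed
                r = y - X @ w + X[:, j] * w[j]
                rho = X[:, j] @ r / n
                z = X[:, j] @ X[:, j] / n
                # soft-thresholding update for coordinate j
                w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
        return w

    # candidate LASSO parameters taken from a geometric progression,
    # following the empirical recommendation discussed in the abstract
    lams = np.geomspace(1e-3, 1.0, num=20)

    # hypothetical synthetic data, just to make the sketch runnable
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5))
    y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(50)
    X_tr, X_va, y_tr, y_va = X[:40], X[40:], y[:40], y[40:]

    # pick the candidate with the smallest held-out error
    errors = [np.mean((y_va - X_va @ lasso_cd(X_tr, y_tr, lam)) ** 2)
              for lam in lams]
    best_lam = lams[int(np.argmin(errors))]
    ```

    Note that `np.geomspace` keeps the ratio between consecutive candidates constant, which is exactly the "geometric progression" property the paper explains.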

    File in pdf


    Technical Report UTEP-CS-20-34, April 2020
    Why There Are Only Four Fundamental Forces: A Possible Explanation
    Olga Kosheleva and Vladik Kreinovich

    Published in International Mathematical Forum, 2020, Vol. 5, No. 4, pp. 151-153.

    It is known that there are exactly four fundamental forces of nature: gravity forces, forces corresponding to weak interactions, electromagnetic forces, and forces corresponding to strong interactions. In this paper, we provide a possible explanation of why there are exactly four fundamental forces: namely, we relate this number with the dimension of physical space-time.

    File in pdf


    Technical Report UTEP-CS-20-33, April 2020
    Updated version UTEP-CS-20-33b, September 2020
    Scale-Invariance Ideas Explain the Empirical Soil-Water Characteristic Curve
    Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

    Published in the Proceedings of the IEEE Series of Symposia on Computational Intelligence SSCI'2020, Canberra, Australia, December 1-4, 2020.

    The prediction of the road's properties under the influence of water infiltration is important for pavement design and management. Traditionally, this prediction heavily relied on expert estimates. In the last decades, complex empirical formulas have been proposed to capture the expert's intuition in estimating the effect of water infiltration on the stiffness of the pavement's layers. Of special importance is the effect of water intrusion on the pavement's foundation -- known as subgrade soil. In this paper, we show that natural scale-invariance ideas lead to a theoretical explanation for an empirical formula describing the dependence between soil suction and water content; formulas describing this dependence are known as soil-water characteristic curves.

    Original file UTEP-CS-20-33 in pdf
    Updated version UTEP-CS-20-33b in pdf


    Technical Report UTEP-CS-20-32, April 2020
    Optimal Search under Constraints
    Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 421-426.

    In general, if we know values a and b at which a continuous function f has different signs -- and the function is given as a black box -- the fastest possible way to find the root x for which f(x) = 0 is by using bisection (also known as binary search). In some applications, however -- e.g., in finding the optimal dose of a medicine -- we sometimes cannot use this algorithm, since, to avoid negative side effects, we can only try values which exceed the optimal dose by no more than some small value δ > 0. In this paper, we show how to modify bisection to get an optimal algorithm for search under such a constraint.
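    The classical bisection baseline that the paper modifies can be sketched as below. This is only the unconstrained starting point; the paper's δ-constrained variant is not reproduced here.

    ```python
    def bisect(f, a, b, tol=1e-8):
        """Classical bisection (binary search) for a root of f on [a, b].

        Assumes f is continuous and f(a), f(b) have opposite signs.
        """
        fa = f(a)
        while b - a > tol:
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0.0:
                return m
            if (fa < 0) == (fm < 0):
                a, fa = m, fm   # sign change is in the right half
            else:
                b = m           # sign change is in the left half
        return 0.5 * (a + b)
    ```

    For example, `bisect(lambda x: x * x - 2, 0.0, 2.0)` converges to the square root of 2; each step halves the interval, which is what makes plain bisection optimal when probes above the root are allowed.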

    File in pdf


    Technical Report UTEP-CS-20-31, April 2020
    Equations for Which Newton's Method Never Works: Pedagogical Examples
    Leobardo Valera, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 413-419.

    One of the most widely used methods for solving equations is the classical Newton's method. While this method often works -- and is used in computers for computations ranging from square root to division -- sometimes, this method does not work. Usual textbook examples describe situations when Newton's method works for some initial values but not for others. A natural question that students often ask is whether there exist functions for which Newton's method never works -- unless, of course, the initial approximation is already the desired solution. In this paper, we provide simple examples of such functions.

    File in pdf


    Technical Report UTEP-CS-20-30, April 2020
    Centroids Beyond Defuzzification
    Juan Carlos Figueroa-Garcia, Christian Servin, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 407-412.

    In general, expert rules expressed by imprecise (fuzzy) words of natural language like "small" lead to imprecise (fuzzy) control recommendations. If we want to design an automatic controller, we need, based on these fuzzy recommendations, to generate a single control value. A procedure for such generation is known as defuzzification. The most widely used defuzzification procedure is centroid defuzzification, in which, as the desired control value, we use one of the coordinates of the center of mass ("centroid") of an appropriate 2-D set. A natural question is: what is the meaning of the second coordinate of this center of mass? In this paper, we show that this second coordinate describes the overall measure of fuzziness of the resulting recommendation.
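    Both coordinates of the centroid can be computed directly from the membership function, since the relevant 2-D set is the region between 0 and μ(x). This is a minimal numerical sketch under that standard interpretation; the uniform grid and the triangular membership function are assumptions made for the example.

    ```python
    def centroid_2d(xs, mu):
        """Both coordinates of the center of mass of the region under mu(x).

        xs: uniformly spaced sample points; mu: membership degrees at xs.
        """
        dx = xs[1] - xs[0]
        area = sum(mu) * dx
        # first coordinate: the usual centroid-defuzzified control value
        cx = sum(x * m for x, m in zip(xs, mu)) * dx / area
        # second coordinate: average of mu/2 over the region, i.e., the
        # quantity the paper interprets as an overall measure of fuzziness
        cy = 0.5 * sum(m * m for m in mu) * dx / area
        return cx, cy

    # hypothetical symmetric triangular membership function on [0, 2], peak at 1
    n = 2001
    xs = [2.0 * i / (n - 1) for i in range(n)]
    mu = [1.0 - abs(x - 1.0) for x in xs]
    cx, cy = centroid_2d(xs, mu)
    ```

    For this symmetric triangle, the first coordinate is 1 (the peak), while the second coordinate is 1/3; a crisper (taller, narrower) membership function would push the second coordinate toward 1/2, and a flatter one toward 0, which matches the fuzziness interpretation.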

    File in pdf


    Technical Report UTEP-CS-20-29, April 2020
    Which Algorithms Are Feasible and Which Are Not: Fuzzy Techniques Can Help in Formalizing the Notion of Feasibility
    Olga Kosheleva and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 401-406.

    Some algorithms are practically feasible, in the sense that for all inputs of reasonable length they provide the result in reasonable time. Other algorithms are not practically feasible, in the sense that they may work well for small-size inputs, but for slightly larger -- but still reasonable-size -- inputs, the computation time becomes astronomical (and not practically possible). How can we describe practical feasibility in precise terms? The usual formalization of the notion of feasibility states that an algorithm is feasible if its computation time is bounded by a polynomial of the size of the input. In most cases, this definition works well, but sometimes, it does not: e.g., according to this definition, every algorithm requiring a constant number of computational steps is feasible, even when this number of steps is larger than the number of particles in the Universe. In this paper, we show that by using fuzzy logic, we can naturally come up with a more adequate description of practical feasibility.

    File in pdf


    Technical Report UTEP-CS-20-28, April 2020
    Is There a Contradiction Between Statistics and Fairness: From Intelligent Control to Explainable AI
    Christian Servin and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 391-400.

    At first glance, there seems to be a contradiction between statistics and fairness: statistics-based AI techniques lead to unfair discrimination based on gender, race, and socio-economic status. This is not just a fault of probability techniques: similar problems can happen if we use fuzzy or other techniques for processing uncertainty. To attain fairness, several authors proposed not to rely on statistics and, instead, to explicitly add fairness constraints into decision making. In this paper, we show that the seeming contradiction between statistics and fairness is caused mostly by the fact that the existing systems use simplified models; contradictions disappear if we replace them with more adequate (and thus more complex) statistical models.

    File in pdf


    Technical Report UTEP-CS-20-27, April 2020
    Why Linear Expressions in Discounting and in Empathy: A Symmetry-Based Explanation
    Supanika Leurcharusmee, Laxman Bokati, and Olga Kosheleva

    Published in Soft Computing, 2021, Vol. 25, No. 12, pp. 7753-7760.

    People's preferences depend not only on the decision maker's immediate gain, they are also affected by the decision maker's expectation of future gains. A person's decisions are also affected by possible consequences for others. In decision theory, people's preferences are described by special quantities called utilities. In utility terms, the above phenomena mean that the person's overall utility of an action depends not only on the utility corresponding to the action's immediate consequences for this person, it also depends on utilities corresponding to future consequences and on utilities corresponding to consequences for others. These dependencies reflect discounting of future consequences in comparison with current ones and the person's empathy (or lack thereof) towards others. In general, many formulas involving utility are nonlinear, even formulas describing the dependence of utility on money. However, surprisingly, for discounting and for empathy, linear formulas work very well. In this paper, we show that natural symmetry requirements can explain this linearity.

    File in pdf


    Technical Report UTEP-CS-20-26, April 2020
    Why Class-D Audio Amplifiers Work Well: A Theoretical Explanation
    Kevin Alvarez, Julio C. Urenda, and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 15-20.

    Most current high-quality electronic audio systems use class-D audio amplifiers (D-amps, for short), in which a signal is represented by a sequence of pulses of fixed height, pulses whose duration at any given moment of time linearly depends on the amplitude of the input signal at this moment of time. In this paper, we explain the efficiency of this signal representation by showing that this representation is the least vulnerable to additive noise (which affects the measurement of the signal itself) and to measurement errors corresponding to measuring time.

    File in pdf


    Technical Report UTEP-CS-20-25, April 2020
    Why Black-Scholes Equations Are Effective Beyond Their Usual Assumptions: Symmetry-Based Explanation
    Warattaya Chinnakum and Sean R. Aguilar

    Published in International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems, 2020, Vol. 28, Supplement 1, pp. 1-10.

    Nobel-Prize-winning Black-Scholes equations are actively used to estimate the price of options and other financial instruments. In practice, they provide a good estimate for the price, but the problem is that their original derivation is based on many simplifying statistical assumptions which are, in general, not valid for financial time series. The fact that these equations are effective way beyond their usual assumptions leads to a natural conclusion that there must be an alternative derivation for these equations, a derivation that does not use the usual too-strong assumptions. In this paper, we provide such a derivation in which the only substantial assumption is a natural symmetry: namely, scale-invariance of the corresponding processes. Scale-invariance also allows us to describe possible generalizations of Black-Scholes equations, generalizations that we hope will lead to even more accurate estimates for the corresponding prices.

    File in pdf


    Technical Report UTEP-CS-20-24, March 2020
    Scale-Invariance and Fuzzy Techniques Explain the Empirical Success of Inverse Distance Weighting and of Dual Inverse Distance Weighting in Geosciences
    Laxman Bokati, Aaron Velasco, and Vladik Kreinovich

    Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 379-390.

    Once we measure the values of a physical quantity at certain spatial locations, we need to interpolate these values to estimate the value of this quantity at other locations x. In geosciences, one of the most widely used interpolation techniques is inverse distance weighting, when we combine the available measurement results with weights inversely proportional to some power of the distance from x to the measurement location. This empirical formula works well when measurement locations are uniformly distributed, but it leads to biased estimates otherwise. To decrease this bias, researchers recently proposed a more complex dual inverse distance weighting technique. In this paper, we provide a theoretical explanation both for the inverse distance weighting and for the dual inverse distance weighting. Specifically, we show that if we use general fuzzy ideas to formally describe the desired properties of the interpolation procedure, then physically natural scale-invariance requirements select exactly these two distance weighting techniques.
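    The basic inverse distance weighting scheme described above can be sketched as follows; the 2-D example points and the choice of power p = 2 are assumptions for illustration (the dual variant from the paper is not reproduced here).

    ```python
    import numpy as np

    def idw(x_query, x_obs, y_obs, p=2.0):
        """Inverse distance weighting: weights proportional to 1 / distance^p."""
        d = np.linalg.norm(x_obs - x_query, axis=1)
        if np.any(d == 0):
            # query point coincides with a measurement location
            return float(y_obs[int(np.argmin(d))])
        w = 1.0 / d ** p
        return float(np.sum(w * y_obs) / np.sum(w))

    # two hypothetical measurements in the plane
    x_obs = np.array([[0.0, 0.0], [1.0, 0.0]])
    y_obs = np.array([0.0, 10.0])
    # the midpoint is equidistant from both, so it gets the plain average
    estimate = idw(np.array([0.5, 0.0]), x_obs, y_obs)
    ```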

    File in pdf


    Technical Report UTEP-CS-20-23, March 2020
    Decision Making Under Interval Uncertainty: Towards (Somewhat) More Convincing Justifications for Hurwicz Optimism-Pessimism Approach
    Warattaya Chinnakum, Laura Berrout Ramos, Olugbenga Iyiola, and Vladik Kreinovich

    Published in Asian Journal of Economics and Banking (AJEB), 2021, Vol. 5, No. 1, pp. 32-45.

    In the ideal world, we know the exact consequences of each action. In this case, it is relatively straightforward to compare different possible actions and, as a result of this comparison, to select the best action. In real life, we only know the consequences with some uncertainty. A typical example is interval uncertainty, when we only know the lower and upper bounds on the expected gain. How can we compare such interval-valued alternatives? A usual way to compare such alternatives is to use the optimism-pessimism criterion developed by Nobelist Leo Hurwicz. In this approach, we maximize a weighted combination of the worst-case and the best-case gains, with the weights reflecting the decision maker's degree of optimism. There exist several justifications for this criterion; however, some of the assumptions behind these justifications are not 100% convincing. In this paper, we propose new, hopefully more convincing justifications for Hurwicz's approach.
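    The Hurwicz criterion itself is a one-line formula; the sketch below shows how it ranks interval-valued alternatives. The example gain intervals are hypothetical.

    ```python
    def hurwicz(lo, hi, alpha):
        """Hurwicz optimism-pessimism value of a gain interval [lo, hi].

        alpha = 1 is a pure optimist (best case only),
        alpha = 0 is a pure pessimist (worst case only).
        """
        return alpha * hi + (1.0 - alpha) * lo

    def best_alternative(intervals, alpha):
        """Index of the interval-valued alternative with the largest Hurwicz value."""
        return max(range(len(intervals)),
                   key=lambda i: hurwicz(intervals[i][0], intervals[i][1], alpha))

    # hypothetical gain intervals: a risky option and a safe one
    gains = [(0.0, 10.0), (4.0, 5.0)]
    ```

    A pessimist (alpha = 0) prefers the safe interval (4, 5), while an optimist (alpha = 1) prefers the risky interval (0, 10); intermediate alpha values interpolate between these choices.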

    File in pdf


    Technical Report UTEP-CS-20-22, March 2020
    Which Are the Correct Membership Functions? Correct "And"- and "Or"- Operations? Correct Defuzzification Procedure?
    Olga Kosheleva, Vladik Kreinovich, and Shahnaz Shahbazova

    Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 1, pp. 193-201.

    Even in the 1990s, when many successful examples of fuzzy control appeared all the time, many users were somewhat reluctant to use fuzzy control. One of the main reasons for this reluctance was the perceived subjective character of fuzzy techniques -- for the same natural-language rules, different experts may select somewhat different membership functions and thus get somewhat different control/recommendation strategies. In this paper, we promote the idea that this selection does not have to be subjective. We can always select the "correct" membership functions, i.e., functions for which, on previously tested cases, we got the best possible control. Similarly, we can select the "correct" and- and or-operations, the correct defuzzification procedure, etc.

    File in pdf


    Technical Report UTEP-CS-20-21, March 2020
    Theoretical Explanation of Recent Empirically Successful Code Quality Metrics
    Vladik Kreinovich, Omar A. Masmali, Hoang Phuong Nguyen, and Omar Badreddin

    Published in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), 2020, Vol. 24, No. 5, pp. 604-608.

    Millions of lines of code are written every day, and it is not practically possible to thoroughly test all this code in all possible situations. In practice, we need to be able to separate code which is more likely to contain bugs -- and which thus needs to be tested more thoroughly -- from code which is less likely to contain flaws. Several numerical characteristics -- known as code quality metrics -- have been proposed for this separation. Recently, a new efficient class of code quality metrics has been proposed, based on the idea of assigning consecutive integers to different levels of complexity and vulnerability: we assign 1 to the simplest level, 2 to the next simplest level, etc. The resulting numbers are then combined -- if needed, with appropriate weights. In this paper, we provide a theoretical explanation for the above idea.
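    The integer-assignment idea can be sketched as below. The level names are hypothetical; the abstract only fixes the assignment of consecutive integers 1, 2, 3, ... to levels and the optional weighted combination.

    ```python
    # hypothetical complexity/vulnerability levels mapped to consecutive integers
    LEVELS = {"simple": 1, "moderate": 2, "complex": 3}

    def quality_metric(aspect_levels, weights=None):
        """Combine per-aspect level codes into a single metric value.

        With no weights, the codes are simply summed; otherwise a
        weighted sum is used, as the abstract describes.
        """
        vals = [LEVELS[level] for level in aspect_levels]
        if weights is None:
            return sum(vals)
        return sum(w * v for w, v in zip(weights, vals))
    ```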

    File in pdf


    Technical Report UTEP-CS-20-20, March 2020
    How to Combine (Dis)Utilities of Different Aspects into a Single (Dis)Utility Value, and How This Is Related to Geometric Images of Happiness
    Laxman Bokati, Hoang Phuong Nguyen, Olga Kosheleva, and Vladik Kreinovich

    Published in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), 2020, Vol. 24, No. 5, pp. 599-603.

    In many practical situations, a user needs our help in selecting the best out of a large number of alternatives. To be able to help, we need to understand the user's preferences. In decision theory, preferences are described by numerical values known as utilities. It is often not feasible to ask the user to provide utilities of all possible alternatives, so we must be able to estimate these utilities based on utilities of different aspects of these alternatives. In this paper, we provide a general formula for combining utilities of aspects into a single utility value. The resulting formula turns out to be in good accordance with the known correspondence between geometric images and different degrees of happiness.

    File in pdf


    Technical Report UTEP-CS-20-19, March 2020
    How to Describe Conditions Like 2-out-of-5 in Fuzzy Logic: a Neural Approach
    Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

    Published in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), 2020, Vol. 24, No. 5, pp. 593-598.

    In many medical applications, we diagnose a disease and/or apply a certain remedy if, e.g., two out of five conditions are satisfied. In the fuzzy case, i.e., when we only have certain degrees of confidence that each of n statements is satisfied, how do we estimate the degree of confidence that k out of n conditions are satisfied? In principle, we can get this estimate if we use the usual methodology of applying fuzzy techniques: we represent the desired statement in terms of "and" and "or", and use fuzzy analogues of these logical operations. The problem with this approach is that for large n, it requires too many computations. In this paper, we derive the fastest-to-compute alternative formula. In this derivation, we use ideas from neural networks.
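    One well-known shortcut (not necessarily the formula derived in the paper, which uses neural-network ideas): with "and" = min and "or" = max, the max-over-all-k-subsets-of-min expression for "at least k out of n" collapses to the k-th largest degree, so a single sort replaces the exponential enumeration.

    ```python
    def at_least_k(degrees, k):
        """Degree that at least k of n fuzzy conditions hold, for "and"=min, "or"=max.

        The best size-k subset in the max-of-min expression is the k largest
        degrees, whose minimum is the k-th largest degree overall.
        """
        return sorted(degrees, reverse=True)[k - 1]
    ```

    For example, with degrees 0.9, 0.2, 0.7, 0.4, 0.1 for five conditions, the degree that at least two hold is 0.7, the second-largest degree.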

    File in pdf


    Technical Report UTEP-CS-20-18, March 2020
    Quantum (and More General) Models of Research Collaboration
    Oscar Galindo, Miroslav Svitek, and Vladik Kreinovich

    Published in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 1, pp. 77-86.

    In the last decades, several papers have shown that quantum techniques can be successful in describing not only events in the micro-scale physical world -- for which they were originally invented -- but also in describing social phenomena, e.g., different economic processes. In our previous paper, we provided an explanation for these somewhat surprising successes. In this paper, we extend this explanation and show that quantum (and more general) techniques can also be used to model research collaboration.

    File in pdf


    Technical Report UTEP-CS-20-17, March 2020
    Towards a Theoretical Explanation of How Pavement Condition Index Deteriorates over Time
    Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 121-127.

    To predict how the Pavement Condition Index will change over time, practitioners use a complex empirical formula derived in the 1980s. In this paper, we provide a possible theoretical explanation for this formula, an explanation based on general ideas of invariance. In general, the existence of a theoretical explanation makes a formula more reliable; thus, we hope that our explanation will make predictions of road quality more reliable.

    File in pdf


    Technical Report UTEP-CS-20-16, March 2020
    New (Simplified) Derivation of Nash's Bargaining Solution
    Hoang Phuong Nguyen, Laxman Bokati, and Vladik Kreinovich

    Published in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), 2020, Vol. 24, No. 5, pp. 589-592.

    According to the Nobelist John Nash, if a group of people wants to select one of the alternatives in which all of them get a better deal than in the status quo situation, then they should select the alternative that maximizes the product of their utilities. In this paper, we provide a new (simplified) derivation of this result, a derivation which is not only simpler -- it also does not require that the preference relation between different alternatives be linear.
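    Nash's bargaining rule itself is easy to state in code. A minimal sketch, with hypothetical utility tuples; following the standard formulation, the product is taken over utility gains relative to the status quo:

    ```python
    def nash_choice(alternatives, status_quo):
        """Nash's bargaining solution over a finite set of alternatives.

        Keep only alternatives in which everyone does better than in the
        status quo, then pick the one maximizing the product of utility gains.
        """
        feasible = [a for a in alternatives
                    if all(u > s for u, s in zip(a, status_quo))]

        def gain_product(a):
            prod = 1.0
            for u, s in zip(a, status_quo):
                prod *= (u - s)
            return prod

        return max(feasible, key=gain_product)
    ```

    For instance, with status quo utilities (0, 0), the alternative (3, 3) (gain product 9) beats the lopsided (1, 4) (gain product 4): the product criterion favors balanced improvements.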

    File in pdf


    Technical Report UTEP-CS-20-15, March 2020
    Towards Making Fuzzy Techniques More Adequate for Combining Knowledge of Several Experts
    Hoang Phuong Nguyen and Vladik Kreinovich

    Published in Journal of Advanced Computational Intelligence and Intelligent Informatics (JACIII), 2020, Vol. 24, No. 5, pp. 583-588.

    In medical and other applications, experts often use rules with several conditions, each of which involves a quantity within the domain of expertise of a different expert. In such situations, to estimate the degree of confidence that all these conditions are satisfied, we need to combine opinions of several experts -- i.e., in fuzzy techniques, combine membership functions corresponding to different experts. In each area of expertise, different experts may have somewhat different membership functions describing the same natural-language ("fuzzy") term like "small". It is desirable to present the user with all possible conclusions corresponding to all these membership functions. In general, even if, for each area of expertise, we have only a 1-parametric family characterizing different membership functions, then for rules with 3 conditions, we already have a difficult-to-interpret 3-parametric family of possible consequences. It is thus desirable to limit ourselves to the cases when the resulting family is still manageable -- e.g., is 1-parametric. In this paper, we provide a full description of all such families. Interestingly, it turns out that such families are possible only if we allow non-normalized membership functions, i.e., functions for which the maximum may be smaller than 1. We argue that this is a way to go, since normalization loses some information that we receive from the experts.

    File in pdf


    Technical Report UTEP-CS-20-14, March 2020
    Why Mean, Variance, Moments, Correlation, Skewness etc. – Invariance-Based Explanations
    Olga Kosheleva, Laxman Bokati, and Vladik Kreinovich

    Published in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 2, pp. 61-76.

    In principle, we can use many different characteristics of a probability distribution. However, in practice, a few of such characteristics are mostly used: mean, variance, moments, correlation, etc. Why these characteristics and not others? The fact that these characteristics have been successfully used indicates that there must be some reason for their selection. In this paper, we show that the selection of these characteristics can be explained by the fact that these characteristics are invariant with respect to natural transformations -- while other possible characteristics are not invariant.

    File in pdf


    Technical Report UTEP-CS-20-13, February 2020
    Updated version UTEP-CS-20-13d, April 2020
    How Quantum Cryptography and Quantum Computing Can Make Cyber-Physical Systems More Secure
    Deepak Tosh, Oscar Galindo, Vladik Kreinovich, and Olga Kosheleva

    Published in Proceedings of the System of Systems Engineering Conference SoSE'2020, Budapest, Hungary, June 2-4, 2020, pp. 313-320.

    For cyber-physical systems, cyber-security is vitally important. There are many cyber-security tools that make communications secure -- e.g., communications between sensors and the computers processing the sensors' data. Most of these tools, however, are based on RSA encryption, and it is known that with quantum computing, this encryption can be broken. It is therefore desirable to use an unbreakable alternative -- quantum cryptography -- for such communications. In this paper, we discuss possible consequences of this option. We also explain how quantum computers can help even more: namely, they can be used to optimize the system's design -- in particular, to maximize its security, and to make sure that we do not waste time on communicating and processing irrelevant information.

    Original version UTEP-CS-20-13 in pdf
    Updated version UTEP-CS-20-13d in pdf


    Technical Report UTEP-CS-20-12, February 2020
    Updated version UTEP-CS-20-12a, August 2020
    Why Squashing Functions in Multi-Layer Neural Networks
    Julio C. Urenda, Orsolya Csiszar, Gabor Csiszar, Jozsef Dombi, Olga Kosheleva, Vladik Kreinovich, and Gyorgy Eigner

    Published in Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics SMC'2020, Toronto, Canada, October 11-14, 2020, pp. 1705-1711.

    Most multi-layer neural networks used in deep learning utilize rectified linear neurons. In our previous papers, we showed that if we want to use the exact same activation function for all the neurons, then the rectified linear function is indeed a reasonable choice. However, preliminary analysis shows that for some applications, it is more advantageous to use different activation functions for different neurons -- i.e., select a family of activation functions instead, and select the parameters of activation functions of different neurons during training. Specifically, this was shown for a special family of squashing functions that contain rectified linear neurons as a particular case. In this paper, we explain the empirical success of squashing functions by showing that the formulas describing this family follow from natural symmetry requirements.

    Original version UTEP-CS-20-12 in pdf
    Updated version UTEP-CS-20-12a in pdf


    Technical Report UTEP-CS-20-11, February 2020
    Predictably (Boundedly) Rational: Examples of Seemingly Irrational Behavior Can Be Quantitatively Explained by Bounded Rationality
    Laxman Bokati, Olga Kosheleva, and Vladik Kreinovich

    Published in Asian Journal of Economics and Banking, 2020, Vol. 4, No. 1, pp. 20-48.

    Traditional economics is based on the simplifying assumption that people behave perfectly rationally, that before making any decision, a person thoroughly analyzes all possible situations. In reality, we often do not have enough time to thoroughly analyze all the available information, as a result of which we make decisions of bounded rationality -- bounded by our inability to perform a thorough analysis of the situation. So, to predict human behavior, it is desirable to study how people actually make decisions. The corresponding area of economics is known as behavioral economics. It is known that many examples of seemingly irrational behavior can be explained, on the qualitative level, by this idea of bounded rationality. In this paper, we show that in many cases, this qualitative explanation can be expanded into a quantitative one, one that enables us to explain the numerical characteristics of the corresponding behavior.

    File in pdf


    Technical Report UTEP-CS-20-10, February 2020
    Revised version UTEP-CS-20-10a, February 2021
    How to Gauge the Quality of a Testing Method When Ground Truth Is Known with Uncertainty
    Nicolas Gray, Scott Ferson, and Vladik Kreinovich

    Published in Proceedings of the 9th International Workshop on Reliable Engineering Computing REC'2021, Taormina, Italy, May 16-20, 2021, pp. 265-278.

    The quality of a testing method is usually measured by using sensitivity, specificity, and/or precision. To compute each of these three characteristics, we need to know the ground truth, i.e., we need to know which objects actually have the tested property. In many applications (e.g., in medical diagnostics), the information about the objects comes from experts, and this information comes with uncertainty. In this paper, we show how to take this uncertainty into account when gauging the quality of testing methods.
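    One simple way to account for such uncertainty (a sketch under the assumption that each expert judgment comes as a probability that the object has the property -- the paper's actual treatment may differ) is to replace the crisp TP/FP/TN/FN counts with their expected values:

```python
def expected_quality(probs, predictions):
    """Sensitivity, specificity, and precision under uncertain ground truth.

    probs[i] is the expert-assigned probability that object i actually
    has the tested property; predictions[i] is 1 if the testing method
    flags object i, else 0.  The usual TP/FP/TN/FN counts are replaced
    by their expected values under these probabilities.
    """
    e_tp = sum(p for p, y in zip(probs, predictions) if y == 1)
    e_fp = sum(1 - p for p, y in zip(probs, predictions) if y == 1)
    e_tn = sum(1 - p for p, y in zip(probs, predictions) if y == 0)
    e_fn = sum(p for p, y in zip(probs, predictions) if y == 0)
    sensitivity = e_tp / (e_tp + e_fn)  # expected share of positives found
    specificity = e_tn / (e_tn + e_fp)  # expected share of negatives cleared
    precision = e_tp / (e_tp + e_fp)    # expected share of correct flags
    return sensitivity, specificity, precision
```

    When all probabilities are exactly 0 or 1 (i.e., the ground truth is certain), this reduces to the usual definitions of the three characteristics.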

    Original file UTEP-CS-20-10 in pdf
    Revised version UTEP-CS-20-10a in pdf


    Technical Report UTEP-CS-20-09, February 2020
    Revised version UTEP-CS-20-09a, February 2021
    Fusion of Probabilistic Knowledge as Foundation for Sliced-Normal Approach
    Michael Beer, Olga Kosheleva, and Vladik Kreinovich

    Published in Proceedings of the 9th International Workshop on Reliable Engineering Computing REC'2021, Taormina, Italy, May 16-20, 2021, pp. 408-418.

    In many practical applications, it turns out to be efficient to use Sliced-Normal multi-D distributions, i.e., distributions for which the logarithm of the probability density function (pdf) is a polynomial -- to be more precise, it is a sum of squares of several polynomials. This class is a natural extension of normal distributions, i.e., distributions for which the logarithm of the pdf is a quadratic polynomial.

    In this paper, we provide a possible theoretical explanation for this empirical success.
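    A minimal sketch of the idea (unnormalized densities only, with made-up example polynomials; the actual Sliced-Normal construction has further details): the log-density is minus a sum of squares of polynomials, and degree-1 polynomials recover the familiar normal case:

```python
import math

def sliced_normal_unnormalized(x, y, polynomials):
    """Unnormalized Sliced-Normal-style density in 2-D.

    polynomials is a list of functions p_j(x, y); the log-density is
    -(p_1(x,y)^2 + ... + p_m(x,y)^2), i.e., minus a sum of squares of
    polynomials -- so the log-density itself is a polynomial in (x, y).
    """
    return math.exp(-sum(p(x, y) ** 2 for p in polynomials))

# Two degree-1 polynomials recover an (unnormalized) standard normal:
gaussian = [lambda x, y: x / math.sqrt(2), lambda x, y: y / math.sqrt(2)]

# A higher-degree polynomial gives a non-Gaussian shape while keeping
# the log-density polynomial -- here, a density peaked on the unit circle:
ring = [lambda x, y: x * x + y * y - 1.0]
```

    The "ring" example shows why this class is strictly richer than the normal family: its density is not even unimodal, yet its log-density is still a polynomial.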

    Original file UTEP-CS-20-09 in pdf
    Revised version UTEP-CS-20-09a in pdf


    Technical Report UTEP-CS-20-08, February 2020
    Strength of Lime Stabilized Pavement Materials: Possible Theoretical Explanation of Empirical Dependencies
    Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 115-119.

    When building a road, it is often necessary to strengthen the underlying soil layer. This strengthening is usually done by adding lime. There are empirical formulas that describe how the resulting strength depends on the amount of added lime. In this paper, we provide a theoretical explanation for these empirical formulas.

    File in pdf


    Technical Report UTEP-CS-20-07, February 2020
    Revised version UTEP-CS-20-07a, December 2020
    Why Ellipsoids in Mechanical Analysis of Wood Structures
    F. Niklas Schietzold, Julio Urenda, Vladik Kreinovich, Wolfgang Graf, and Michael Kaliske

    Published in Proceedings of the 9th International Workshop on Reliable Engineering Computing REC'2021, Taormina, Italy, May 16-20, 2021, pp. 604-614.

    Wood is a mechanically very anisotropic material. At each point of a wooden beam, both the average values and the fluctuations of the local mechanical properties corresponding to a certain direction depend, e.g., on whether this direction is longitudinal, radial, or tangential with respect to the grain orientation of the original tree. This anisotropy can be described in geometric terms, if we select a point x and form iso-correlation surfaces -- i.e., surfaces formed by points y with the same level of correlation ρ(x,y) between local changes in the vicinities of the points x and y. Empirical analysis shows that for each point x, the corresponding surfaces are well approximated by concentric homothetic ellipsoids. In this paper, we provide a theoretical explanation for this empirical fact.
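    A toy numerical sketch of the geometric picture (a hypothetical correlation model with made-up decay coefficients, not the paper's data): if the correlation depends on the displacement d = y − x only through a positive-definite quadratic form, ρ = exp(−dᵀAd), then each iso-correlation surface is an ellipsoid dᵀAd = const, and different correlation levels give concentric homothetic ellipsoids:

```python
import math

# Hypothetical diagonal matrix A: how fast correlation decays in the
# longitudinal, radial, and tangential directions (made-up values).
A = (4.0, 1.0, 0.25)

def rho(dx, dy, dz):
    """Correlation between points separated by displacement (dx, dy, dz)."""
    q = A[0] * dx * dx + A[1] * dy * dy + A[2] * dz * dz  # d^T A d
    return math.exp(-q)
```

    Displacements lying on the same ellipsoid dᵀAd = c yield the same correlation value, which is exactly the iso-correlation-surface picture described above.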

    Original file UTEP-CS-20-07 in pdf
    Updated version UTEP-CS-20-07a in pdf


    Technical Report UTEP-CS-20-06, February 2020
    Can We Preserve Physically Meaningful "Macro" Analyticity without Requiring Physically Meaningless "Micro" Analyticity?
    Olga Kosheleva and Vladik Kreinovich

    Published in Mathematical Structures and Modeling, 2020, Vol. 54, pp. 95-99.

    Physicists working on quantum field theory actively use "macro" analyticity -- e.g., that an integral of an analytical function over a large closed loop is 0 -- but they agree that "micro" analyticity -- the possibility to expand into Taylor series -- is not physically meaningful on the micro level. Many physicists prefer physical theories with physically meaningful mathematical foundations. So, a natural question is: can we preserve physically meaningful "macro" analyticity without requiring physically meaningless "micro" analyticity? In the 1970s, an attempt to do this was made by using constructive mathematics, in which only objects generated by algorithms are allowed. This did not work out, but, as we show in this paper, the desired separation between "macro" and "micro" analyticity can be achieved if we limit ourselves to feasible algorithms.

    File in pdf


    Technical Report UTEP-CS-20-05, February 2020
    Several Years of Practice May Not Be As Good as Comprehensive Training: Zipf's Law Explains Why
    Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

    Published in Mathematical Structures and Modeling, 2020, Vol. 54, pp. 145-148.

    Many professions use certification as a way to establish that a person practicing this profession has reached a certain skill level. At first glance, it may sound like several years of practice should help a person pass the corresponding certification test, but in reality, even after several years of practice, most people are not able to pass the test, while after a few weeks of intensive training, most people pass it successfully. This sounds counterintuitive, since the overall number of problems that a person solves during several years of practice is much larger than the number of problems solved during a few weeks of intensive training. In this paper, we show that Zipf's law explains this seemingly counterintuitive phenomenon.
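    A toy numerical illustration of this argument (our own sketch with made-up parameters, not the paper's model): if problem types occur in practice with Zipf frequencies p_k proportional to 1/k, then even thousands of practice problems leave a large share of the rarer types never encountered, whereas comprehensive training covers every type by design:

```python
def zipf_coverage(num_types, num_problems):
    """Expected fraction of problem types encountered at least once,
    when num_problems problems are drawn independently with Zipf
    probabilities p_k proportional to 1/k (k = 1, ..., num_types)."""
    harmonic = sum(1.0 / k for k in range(1, num_types + 1))
    covered = 0.0
    for k in range(1, num_types + 1):
        p_k = (1.0 / k) / harmonic
        covered += 1.0 - (1.0 - p_k) ** num_problems  # P(type k seen at least once)
    return covered / num_types

# Several years of practice: many problems, but Zipf-distributed.
practice_coverage = zipf_coverage(num_types=1000, num_problems=2000)
# Comprehensive training: one problem of each type, so coverage is 1.0.
training_coverage = 1.0
```

    With these (made-up) numbers, 2000 Zipf-distributed practice problems still miss a substantial fraction of the 1000 problem types, while a systematic course covering one problem per type needs only 1000 problems for full coverage.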

    File in pdf


    Technical Report UTEP-CS-20-04, February 2020
    A Mystery of Human Biological Development -- Can It Be Used to Speed up Computations?
    Olga Kosheleva and Vladik Kreinovich

    Published in: Andrew Adamatzky (ed.), Handbook on Unconventional Computing, World Scientific, 2021, pp. 399-403.

    For many practical problems, the only known algorithms for solving them require non-feasible exponential time. To make computations feasible, we need an exponential speedup. A reasonable way to look for such possible speedup is to search for real-life phenomena where such a speedup can be observed. A natural place to look for such a speedup is to analyze the biological activities of human beings -- since we, after all, solve many complex problems that even modern super-fast computers have trouble solving. So far, this search has not been successful -- e.g., there are people who compute much faster than others, but it turns out that their speedup is linear, not exponential. In this paper, we want to attract the researchers' attention to the fact that recently, an exponential speedup was indeed found -- namely, it turns out that the biological development of humans is, on average, exponentially faster than the biological development of such smart animals as dogs. We hope that unveiling the processes behind this unexpected speedup can help us achieve a similar speedup in computations.

    File in pdf


    Technical Report UTEP-CS-20-03, February 2020
    Need for Simplicity and Everything Is a Matter of Degree: How Zadeh's Philosophy is Related to Kolmogorov Complexity, Quantum Physics, and Deep Learning
    Vladik Kreinovich, Olga Kosheleva, and Andres Ortiz-Munoz

    Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 1, pp. 203-215.

    Many people remember Lotfi Zadeh's mantra -- that everything is a matter of degree. This was one of the main principles behind fuzzy logic. What is somewhat less remembered is that Zadeh also used another important principle -- that there is a need for simplicity. In this paper, we show that together, these two principles can generate the main ideas behind subjects as diverse as Kolmogorov complexity, quantum physics, and deep learning. We also show that these principles can help provide a better understanding of the important notion of space-time causality.

    File in pdf


    Technical Report UTEP-CS-20-02, January 2020
    Physical Randomness Can Help in Computations
    Olga Kosheleva and Vladik Kreinovich

    Published in: Andrew Adamatzky (ed.), Handbook on Unconventional Computing, World Scientific, 2021, pp. 363-373.

    Can we use some so-far-unused physical phenomena to compute something that usual computers cannot? Researchers have been proposing many schemes that may lead to such computations. These schemes use different physical phenomena ranging from quantum-related to gravity-related to using hypothetical time machines. In this paper, we show that, in principle, there is no need to look into state-of-the-art physics to develop such a scheme: computability beyond the usual computations naturally appears if we consider such a basic notion as randomness.

    File in pdf


    Technical Report UTEP-CS-20-01, January 2020
    Why Immediate Repetition Is Good for Short-Term Learning Results but Bad For Long-Term Learning: Explanation Based on Decision Theory
    Laxman Bokati, Julio Urenda, Olga Kosheleva, and Vladik Kreinovich

    Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 27-35.

    It is well known that repetition enhances learning; the question is: when is a good time for this repetition? Several experiments have shown that immediate repetition of the topic leads to better performance on the resulting test than a repetition after some time. Recent experiments showed, however, that while immediate repetition leads to better results on the test, it leads to much worse performance in the long term, i.e., several years after the material has been studied. In this paper, we use decision theory to provide a possible explanation for this unexpected phenomenon.

    File in pdf