Computer Science Department

Abstracts of 2023 Reports

Technical Report UTEP-CS-23-71, December 2023

Why Sigmoid Transformation Helps Incorporate Logic Into Deep Learning: A Theoretical Explanation

Chitta Baral and Vladik Kreinovich

Traditional neural networks start from the data; they cannot easily handle prior knowledge -- this is one of the reasons why they often take very long to train. It is therefore desirable to incorporate prior knowledge into deep learning. For the case when this knowledge consists of propositional statements, a successful way to incorporate it was proposed in a recent paper by van Krieken et al. That paper uses the fact that a neural network does not directly return a truth value: it returns a real value -- in effect, a degree of confidence in the corresponding statement -- from which we extract the truth value by fixing a threshold. Thus, the authors used formulas for transforming degrees of confidence in individual statements into a reasonable estimate for the degree of confidence in their logical combinations -- formulas developed and studied under the name of fuzzy logic. However, it turns out that the direct use of these formulas often leads to very slow training. That paper showed that we can get effective training if, instead of directly using the resulting degree of confidence, we first apply a sigmoid-related transformation. In our paper, we provide a theoretical explanation of this semi-empirical idea: specifically, we show that under reasonable conditions, the optimal nonlinear transformation is either a sigmoid, or an (arc)tangent, or an appropriate combination of sigmoids, (arc)tangents, and their limit cases (such as linear functions).
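The general idea can be illustrated by a minimal Python sketch: combine degrees of confidence with a fuzzy "and" (here the product t-norm), then pass the result through a sigmoid-related transformation. The specific steepness and threshold values are our illustrative assumptions, not the formulas from the van Krieken et al. paper:

```python
import math

def product_and(a, b):
    # fuzzy "and": product t-norm on degrees of confidence
    return a * b

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def transform(d, steepness=10.0, threshold=0.5):
    # hypothetical sigmoid-related transformation of a degree d:
    # sharpens degrees away from the decision threshold
    return sigmoid(steepness * (d - threshold))

a, b = 0.9, 0.8
raw = product_and(a, b)      # degree of confidence in "A and B"
smoothed = transform(raw)    # pushed toward 1, since raw > 0.5
```
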

Technical Report UTEP-CS-23-70, December 2023

If We Add Axiom of Choice to Constructive Analysis, We Get Classical Arithmetic: An Exercise in Reverse Constructive Mathematics

Olga Kosheleva and Vladik Kreinovich

A recent paper in the Bulletin of Symbolic Logic recalled that the Axiom of Choice is, in general, false in constructive analysis. This result is an immediate consequence of a theorem -- first proved by Tseytin -- that every computable function is continuous. In this paper, we strengthen the result about the Axiom of Choice by proving that this axiom is as non-constructive as possible: namely, if we add this axiom to constructive analysis, we get full classical arithmetic.

Technical Report UTEP-CS-23-69, December 2023

When Is a Single "And"-Condition Enough?

Olga Kosheleva and Vladik Kreinovich

In many practical situations, there are several possible decisions.
Any general recommendation means specifying, for each possible
decision, conditions under which this decision is recommended. In
some cases, a single "and"-condition is sufficient: e.g., a
condition under which a patient is recommended to take aspirin is
that "the patient has a fever *and* the patient does not have
stomach trouble". In other cases, conditions are more complicated.
A natural question is: when is a single "and"-condition enough?
In this paper, we provide an answer to this question.

Technical Report UTEP-CS-23-68, December 2023

Smooth Non-Additive Integrals and Measures and Their Potential Applications

Olga Kosheleva and Vladik Kreinovich

In this paper, we explain why non-additive integrals and measures are needed, how non-additive integrals and measures are related, how to use them in decision making, and how they can help in fundamental physics. These four topics are covered, correspondingly, in Sections 2-5 of this paper.

Technical Report UTEP-CS-23-67, December 2023

Every ReLU-Based Neural Network Can Be Described by a System of Takagi-Sugeno Fuzzy Rules: A Theorem

Barnabas Bede, Olga Kosheleva, and Vladik Kreinovich

While modern deep-learning neural networks are very successful, sometimes they make mistakes, and since their results are "black boxes" -- no explanation is provided -- it is difficult to determine which recommendations are erroneous. It is therefore desirable to make the resulting computations explainable, i.e., to describe their results by using commonsense rules. In this paper, we use "fuzzy" techniques -- techniques developed by Lotfi Zadeh to deal with commonsense rules formulated by using imprecise ("fuzzy") words from natural language -- to show that such a rule-based representation is always possible. Our result does not yet provide the desired explainability, since it requires two rules for each neuron, and thus millions and billions of rules for a network with millions and billions of neurons. However, we believe that this is a useful first step towards the desired explainability.

Technical Report UTEP-CS-23-66, November 2023

Updated version UTEP-CS-23-66a, February 2024

Uncertainty Quantification for Results of AI-Based Data Processing: Towards More Feasible Algorithms

Christoph Q. Lauter, Martine Ceberio, Vladik Kreinovich, and Olga M. Kosheleva

AI techniques have been actively and successfully used in data processing. This tendency started with fuzzy techniques; now neural network techniques are also actively used. With each new technique comes the need for the corresponding uncertainty quantification (UQ). In principle, for both fuzzy and neural techniques, we can use the usual UQ methods -- however, these methods often require an unrealistic amount of computation time. In this paper, we show that in both cases, we can use specific features of the corresponding techniques to drastically speed up the corresponding computations.

Original file UTEP-CS-23-66 in pdf

Updated version UTEP-CS-23-66a in pdf

Technical Report UTEP-CS-23-65, November 2023

How to Efficiently Propagate P-Box Uncertainty

Olga Kosheleva and Vladik Kreinovich

In many practical situations, to get the desired estimate or prediction, we need to process existing data. This data usually comes from measurements, and measurements are never 100% accurate. Because we only know the input values with uncertainty, the results of processing this data also come with uncertainty. To make an appropriate decision, we need to know how accurate the resulting estimate is, i.e., how the input uncertainty "propagates" through the data processing algorithm. In the ideal case, when we know the probability distribution of each measurement error, we can, in principle, use Monte-Carlo simulations to describe the uncertainty of the data processing result. In practice, however, we often only have partial information about the measurement uncertainty: for example, instead of the exact values of the cumulative distribution function F(x), we only know bounds on F(x). Such information is known as a probability box (p-box, for short). In this paper, we provide feasible algorithms for propagating p-box uncertainty.
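In the simplest special case -- a strictly increasing function y = g(x) -- a p-box propagates without any loss: since P(Y <= g(x)) = P(X <= x), the CDF bounds simply move to the transformed grid. A small Python sketch of this special case (the grid-based representation of a p-box is our illustrative assumption; the algorithms in the paper handle more general situations):

```python
def propagate_increasing(xs, F_lo, F_hi, g):
    # xs: grid of x-values (increasing); F_lo[i] <= F(xs[i]) <= F_hi[i]
    # For Y = g(X) with g strictly increasing, P(Y <= g(x)) = P(X <= x),
    # so the CDF bounds at g(x) are the same as the bounds at x.
    ys = [g(x) for x in xs]
    return ys, list(F_lo), list(F_hi)

xs = [0.0, 1.0, 2.0, 3.0]
F_lo = [0.0, 0.2, 0.6, 0.9]   # lower bound on the CDF of X
F_hi = [0.1, 0.4, 0.8, 1.0]   # upper bound on the CDF of X
ys, G_lo, G_hi = propagate_increasing(xs, F_lo, F_hi, lambda x: x ** 2)
```
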

Technical Report UTEP-CS-23-64, November 2023

Which Random-Set Representation of a Fuzzy Set Is the Simplest?

Vladik Kreinovich, Olga Kosheleva, and Hung T. Nguyen

One of the ways to elicit membership degrees is by polling. For example, we ask a group of ten people whether they believe that 30 C is hot. If 8 out of 10 say that it is hot, we assign the degree 8/10 to the statement "30 C is hot". In precise mathematical terms, polling can be described via so-called random sets. It is known that every fuzzy set can be obtained this way, i.e., that every fuzzy set can be represented by an appropriate random set. Moreover, it is known that for many fuzzy sets, there are several different random-set representations. From the computational viewpoint, it is desirable to use the random sets which are the simplest, i.e., which contain the smallest possible number of elements. So, the natural questions are: what is the simplest random-set representation of a given fuzzy set? and is such a simplest representation unique, or are there several different random-set representations with the same number of elements? In this paper, we answer both questions: we show that for almost all fuzzy sets (in some reasonable sense), there are several different simplest random-set representations, and that the known α-cut representation (where probabilities are assigned to α-cuts of the fuzzy set) is one of them.
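The α-cut representation mentioned in the conclusion can be illustrated in Python for a normalized fuzzy set with finitely many membership degrees: each α-cut {x : μ(x) >= α} gets the probability equal to the gap between consecutive degrees, so that the membership degree of each element is recovered as the total probability of the random sets containing it (a sketch under the assumption that the largest degree is 1):

```python
def alpha_cut_representation(mu):
    # mu: dict mapping element -> membership degree; max degree assumed to be 1
    # returns a list of (alpha-cut, probability); probabilities sum to 1
    rep = []
    prev = 0.0
    for d in sorted(set(mu.values())):        # distinct degrees, ascending
        cut = frozenset(x for x, m in mu.items() if m >= d)
        rep.append((cut, d - prev))           # probability = gap between degrees
        prev = d
    return rep

def reconstruct(rep, x):
    # membership degree = total probability of the random sets containing x
    return sum(p for cut, p in rep if x in cut)

mu = {"a": 1.0, "b": 0.7, "c": 0.3}
rep = alpha_cut_representation(mu)
```
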

Technical Report UTEP-CS-23-63, November 2023

Giant Footprints of Buddha and Generalized Limits

Julio C. Urenda and Vladik Kreinovich

In many places in Asia, there are footprints claimed to be left by Buddha. Many of them are much larger than the usual size of human feet, up to 150 cm and more in length. In this paper, we provide a possible mathematical explanation for such unusual sizes.

Technical Report UTEP-CS-23-62, November 2023

From Type-2 Fuzzy to Type-2 Intervals and Type-2 Probabilities

Vladik Kreinovich, Olga Kosheleva, and Luc Longpre

Our knowledge comes from observations, measurements, and expert opinions. Measurements and observations are never 100% accurate; there is always a difference between the measurement result and the actual value of the corresponding quantity. We gauge the resulting uncertainty either by an interval of possible values, or by a probability distribution on the set of possible values, or by a membership function that describes to what extent different values are possible. The information about uncertainty also comes either from measurements or from expert estimates and is, therefore, also uncertain. It is important to take such "type-2" uncertainty into account. This is a known idea in fuzzy logic, where type-2 fuzzy sets are a well-known effective technique. In this paper, we explain how a similar approach can be applied to type-2 intervals and type-2 probabilities.

Technical Report UTEP-CS-23-61, November 2023

Usually, Either Left and Right Brains Are Equally Active or Only One of Them Is Active: First-Principles Explanation

Julio C. Urenda and Vladik Kreinovich

To appear in: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
Vladik Kreinovich (eds.), *Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

It is known that in most practical situations, either both left and right brains are equally active, or only one of them is active. A recent paper showed that this empirical phenomenon can be explained by a realistic model of the brain effectiveness. In this paper, we show that this conclusion can be made without any specific assumptions about the brain, based on first principles.

Technical Report UTEP-CS-23-60, October 2023

Updated version UTEP-CS-23-60a, February 2024

From Fuzzy to Mobile Fuzzy

Victor L. Timchenko, Yuriy P. Kondratenko, Olga Kosheleva, and Vladik Kreinovich

The main limitation of mobile computing in comparison with regular computing is the need to make sure that the battery lasts as long as possible -- and thus, the number of computational steps should be as small as possible. In this paper, we analyze how this affects fuzzy computations. We show that the need for the fastest computations leads to triangular membership functions and to the simplest "and"- and "or"-operations: min and max. It also leads to the need to limit ourselves to a few-bit description of fuzzy degrees -- which leads to 3-bit descriptions similar to the optical implementation of fuzzy computing.

Original file UTEP-CS-23-60 in pdf

Updated version UTEP-CS-23-60a in pdf

Technical Report UTEP-CS-23-59, October 2023

Updated version UTEP-CS-23-59a, February 2024

Just-in-Accuracy: Mobile Approach to Uncertainty

Martine Ceberio, Christoph Lauter, and Vladik Kreinovich

To make a mobile device last longer, we need to limit computations to a bare minimum. One way to do that, in complex control and decision making problems, is to limit the precision with which we do computations, i.e., to limit the number of bits in the numbers' representation. A problem is that often, we do not know with what precision we should do computations to get the desired accuracy of the result. What we propose is to first do computations with very low precision, then, based on these computations, estimate what precision is needed to achieve the given accuracy, and then perform computations with this precision.

Original file UTEP-CS-23-59 in pdf

Updated version UTEP-CS-23-59a in pdf

Technical Report UTEP-CS-23-58, October 2023

Approximate Stochastic Dominance Revisited

Chon Van Le, Olga Kosheleva, and Vladik Kreinovich

To appear in: Vladik Kreinovich, Woraphon Yamaka, and Supanika
Leurcharusmee (eds.), *Applications of Optimal Transport to
Economics and Related Topics*, Springer, Cham, Switzerland.

According to decision theory, in general, to recommend the best of
possible actions, we need to know, for each possible action, the
probabilities of different outcomes, and we also need to know the
decision maker's utility function -- that describes his/her
preferences. For some pairs of probability distributions, however,
we can make such a recommendation without knowing the exact form of
the utility function -- e.g., in financial applications, we only
need to know that a larger amount is preferable to a smaller one.
Such situations, when we can make decisions based only on the
information about probabilities, are known as *stochastic
dominance*. The usual analysis of such situations is based on the
idealized assumption that any difference in utility, no matter how
small, is important. In reality, very small changes in utility
value are irrelevant. From this viewpoint, if the utility
corresponding to the distribution F_{2}(x) is always either larger
or only slightly smaller than the utility corresponding to
F_{1}(x), then we can still conclude that the second action is
better (or of the same quality) than the first action. In this
paper, we show how to describe such approximate stochastic
dominance in precise terms.

Technical Report UTEP-CS-23-57, October 2023

How to Make Machine Learning Financial Recommendations More Fair: Theoretical Explanation

Tho M. Nguyen, Saeid Tizpaz-Niari, and Vladik Kreinovich

To appear in: Nguyen Ngoc Thach, Nguyen Duc Trung, Doan Thanh Ha,
and Vladik Kreinovich (eds.), *Partial Identification in
Econometrics and Related Topics*, Springer, Cham, Switzerland

Machine learning has been actively and successfully used to make financial decisions. In general, these systems work reasonably well. However, in some cases, these systems show unexpected bias towards minority groups -- a bias that is sometimes much larger than the bias in the data on which they were trained. A recent paper analyzed whether a proper selection of hyperparameters can decrease this bias. It turned out that while the selection of hyperparameters indeed affects the system's fairness, only a few of the hyperparameters lead to a consistent improvement of fairness: the number of features used for training and the number of training iterations. In this paper, we provide a theoretical explanation for these empirical results.

Technical Report UTEP-CS-23-56, October 2023

Local-Global Support for Earth Sciences: Economic Analysis

Uyen Hoang Pham, Aaron Velasco, and Vladik Kreinovich

To appear in: Vladik Kreinovich, Woraphon Yamaka, and Supanika
Leurcharusmee (eds.), *Applications of Optimal Transport to
Economics and Related Topics*, Springer, Cham, Switzerland.

Most funding for science comes from taxpayers. So, it is very important to be able to convince taxpayers that this funding is potentially beneficial for them. This task is easier in Earth sciences, e.g., in meteorology, where there are clear local benefits. The problem is that while many people support local studies focused on their region, they do not always have a good understanding of the fact that effective local benefits also require studying surrounding areas -- and of what the optimal balance between local and (more) global studies should be. In this paper, on a (somewhat) simplified model of the situation, we explain what the appropriate balance is. We hope that the corresponding methodology can (and will) be applied to more realistic -- and thus, more complex -- local-global models as well.

Technical Report UTEP-CS-23-55, October 2023

Why Micro-Funding? Why Small Businesses Are Important? Analysis Based on First Principles

Hien D. Tran, Edwin Tomy George, and Vladik Kreinovich

To appear in: Vladik Kreinovich, Woraphon Yamaka, and Supanika
Leurcharusmee (eds.), *Applications of Optimal Transport to
Economics and Related Topics*, Springer, Cham, Switzerland.

On the one hand, in economics, there is a well-known and well-studied economy of scale: when two smaller companies merge, it lowers their costs and thus makes them more effective and therefore more competitive. At first glance, this advantage of big size would make the economy dominated by big companies -- but in reality, small businesses remain a significant and important economic sector. Similarly, it is well known and well studied that research collaboration enhances researchers' productivity -- but still, a significant portion of important results comes from individual efforts. In several application areas, there are area-specific explanations for this seemingly contradictory phenomenon. In this paper, we provide a general explanation based on first principles. Our reasoning also leads to a new explanation of the ubiquity of Zipf's Law -- a law that describes, e.g., the distribution of companies by size.

Technical Report UTEP-CS-23-54, October 2023

How to Deal with Inconsistent Intervals: Utility-Based Approach Can Overcome the Limitations of the Purely Probability-Based Approach

Kittawit Autchariyapanitkul, Tomoe Entani, Olga Kosheleva, and Vladik Kreinovich

To appear in: Nguyen Ngoc Thach, Nguyen Duc Trung, Doan Thanh Ha,
and Vladik Kreinovich (eds.), *Partial Identification in
Econometrics and Related Topics*, Springer, Cham, Switzerland

In many application areas, we rely on experts to estimate the numerical values of some quantities. Experts can provide not only the estimates themselves, they can also estimate the accuracies of these estimates -- i.e., in effect, they provide an interval of possible values of the quantity of interest. To get a more accurate estimate, it is reasonable to ask several experts -- and to take the intersection of the resulting intervals. In some cases, however, experts overestimate the accuracy of their estimates, and their intervals are too narrow -- so narrow that they are inconsistent: their intersection is empty. In such situations, it is necessary to extend the experts' intervals so that they become consistent. Which extension should we choose? Since we are dealing with uncertainty, it seems reasonable to apply a probability-based approach -- an approach well suited for dealing with uncertainty. From the purely mathematical viewpoint, this application is possible -- however, as we show, even in the simplest situations, it leads to counter-intuitive results. We show that we can make more reasonable recommendations if, instead of only taking into account probabilities, we also take into account our preferences -- which, according to decision theory, can be described by utilities.
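To make the setting concrete, here is a small Python sketch: intersecting expert intervals and, when the intersection turns out to be empty, applying the simplest conceivable fix of a minimal uniform widening. (This uniform widening is our illustrative choice only; the point of the paper is that the choice of extension should instead be guided by utilities.)

```python
def make_consistent(intervals):
    # intervals: list of (lo, hi) expert estimates for the same quantity
    lo = max(l for l, _ in intervals)   # left end of the intersection
    hi = min(u for _, u in intervals)   # right end of the intersection
    if lo <= hi:
        return intervals, (lo, hi)      # already consistent
    eps = (lo - hi) / 2.0               # minimal uniform widening
    widened = [(l - eps, u + eps) for l, u in intervals]
    return widened, (lo - eps, hi + eps)

# two overly confident experts whose intervals do not intersect:
widened, common = make_consistent([(0.0, 1.0), (2.0, 3.0)])
```

After the widening, the two intervals touch at a single point, which becomes the (degenerate) common interval.
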

Technical Report UTEP-CS-23-53, September 2023

Linear Regression under Partial Information

Tho M. Nguyen, Saeid Tizpaz-Niari, and Vladik Kreinovich

To appear in: Nguyen Ngoc Thach, Nguyen Duc Trung, Doan Thanh Ha,
and Vladik Kreinovich (eds.), *Partial Identification in
Econometrics and Related Topics*, Springer, Cham, Switzerland

Often, we need to know how to estimate the value of a
difficult-to-directly estimate quantity y -- e.g., tomorrow's
temperature -- based on the known values of several quantities
x_{1}, ..., x_{n}. In many practical situations, we
know that the relation between y and x_{i} can be
accurately described by a linear function. So, to find this
dependence, we need to estimate the coefficients of this linear
dependence based on the known cases in which we know both y and
x_{i}; this is known as *linear regression*. In the
ideal situation, when in each case we know all the inputs x_{i}, the
computationally efficient and well-justified least squares method
provides a solution to this problem. However, in practice, some of
the inputs are often missing. There are heuristic methods for
dealing with such missing values, but the problem is that different
methods lead to different results. This is the main problem with
which we deal in this paper. To solve this problem, we propose a
new well-justified method that eliminates this undesirable
non-uniqueness. An auxiliary computational problem emerges if after
we get a linear dependence of y on x_{i}, we learn the
values of an additional variable x_{n+1}. In this case, in
principle, we can simply re-apply the least squares method "from
scratch", but this idea, while feasible, is still somewhat
time-consuming, so it is desirable to come up with a faster
algorithm that would utilize the previous regression result. Such
an algorithm is also provided in this paper.
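The auxiliary update discussed at the end of the abstract can be illustrated by the standard block-matrix (Schur-complement) update of the normal equations; the following Python sketch is our illustration of one possible faster algorithm, not necessarily the paper's construction:

```python
import numpy as np

def add_regressor(X, y, beta, Ainv, z):
    # X: m x n design matrix; y: targets; beta: least-squares coefficients;
    # Ainv: inverse of X^T X; z: column of values of the new variable x_{n+1}.
    # Updates the fit without re-solving the full least-squares problem,
    # assuming z is not in the column space of X (so s != 0).
    u = X.T @ z
    s = z @ z - u @ Ainv @ u          # Schur complement of X^T X
    c = (z @ y - u @ beta) / s        # coefficient of the new variable
    beta_new = beta - Ainv @ u * c    # corrected old coefficients
    return np.append(beta_new, c)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
z = rng.standard_normal(20)

Ainv = np.linalg.inv(X.T @ X)
beta = Ainv @ (X.T @ y)
beta_full = add_regressor(X, y, beta, Ainv, z)

# same result as re-running least squares "from scratch":
check = np.linalg.lstsq(np.column_stack([X, z]), y, rcond=None)[0]
```
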

Technical Report UTEP-CS-23-52, September 2023

When Is It Beneficial to Merge Two Companies? When Is it Beneficial to Start a Research Collaboration?

Miroslav Svitek, Olga Kosheleva, and Vladik Kreinovich

Merging two companies or splitting a company into two, teaming of two researchers or two research groups -- or splitting a research group into two -- these are frequent occurrences. Sometimes these actions lead to increased effectiveness, but sometimes, contrary to the optimistic expectations, the overall effectiveness decreases. To minimize the possibility of such failures, it is desirable to replace the current semi-intuitive way of making the corresponding decisions with a more objective approach. In this paper, we propose such an approach.

Technical Report UTEP-CS-23-51, August 2023

If Everything is a Matter of Degree, Why Do Crisp Techniques Often Work Better?

Miroslav Svitek, Olga Kosheleva, and Vladik Kreinovich

Numerous examples from different application domains confirm the statement of Lotfi Zadeh that everything is a matter of degree. Because of this, one would expect that in most -- if not all -- practical situations, taking these degrees into account would lead to more effective control, more effective prediction, etc. In practice, while in many cases this indeed happens, in many other cases, "crisp" methods -- methods that do not take these degrees into account -- work better. In this paper, we provide two possible explanations for this discrepancy: an objective one -- explaining that the optimal (best-fit) model is indeed often the crisp one, and a subjective one -- that we have to use crisp methods because of our limited ability to process information.

Technical Report UTEP-CS-23-50, August 2023

How to Select A Model If We Know Probabilities with Interval Uncertainty

Vladik Kreinovich

Published in *Asian Journal of Economics and Banking*, 2023,
DOI 10.1108/AJEB-08-2023-0078.

*Purpose:* When we know the probability of each model, a
natural idea is to select the most probable model. However, in many
practical situations, we do not know the exact values of these
probabilities, we only know intervals that contain these values. In
such situations, a natural idea is to select some probabilities
from these intervals and to select a model with the largest
selected probabilities. The purpose of this study is to decide how
to most adequately select these probabilities.

*Design/methodology/approach:* We want the
probability-selection method to preserve independence: If,
according to the probability intervals, the two events were
independent, then the selection of probabilities within the
intervals should preserve this independence.

*Findings:* We describe all techniques for decision making
under interval uncertainty about probabilities that are consistent
with independence. We prove that these techniques form a
1-parametric family, a family that has already been successfully
used in such decision problems.

*Originality/value:* We provide a theoretical explanation of an
empirically successful technique for decision making under interval
uncertainty about probabilities. This explanation is based on the
natural idea that the method for selecting probabilities from the
corresponding intervals should preserve independence.

Technical Report UTEP-CS-23-49a, August 2023

Algebraic Product Is the Only "And-Like"-Operation for Which Normalized Intersection Is Associative: A Proof

Thierry Denoeux and Vladik Kreinovich

To appear in: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
Vladik Kreinovich (eds.), *Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

For normalized fuzzy sets, intersection is, in general, not normalized. So, if we want to limit ourselves to normalized fuzzy sets, we need to normalize the intersection. It is known that for algebraic product, the normalized intersection is associative, and that for many other "and"-operations (t-norms), normalized intersection is not associative. In this paper, we prove that algebraic product is the only "and"-operation (even the only "and-like" operation) for which normalized intersection is associative.
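The contrast between the algebraic product and other "and"-operations can be checked numerically. In the Python sketch below (the example membership values are ours), normalized intersection with the product t-norm is associative on a two-point domain, while with min it is not:

```python
def normalized_intersection(A, B, t):
    # A, B: lists of membership degrees over the same finite domain;
    # t: an "and"-operation; the result is rescaled so that its max is 1
    raw = [t(a, b) for a, b in zip(A, B)]
    m = max(raw)                        # normalization constant
    return [r / m for r in raw]

A, B, C = [1.0, 0.4], [0.5, 1.0], [1.0, 0.6]

prod = lambda a, b: a * b
left_p = normalized_intersection(normalized_intersection(A, B, prod), C, prod)
right_p = normalized_intersection(A, normalized_intersection(B, C, prod), prod)

left_m = normalized_intersection(normalized_intersection(A, B, min), C, min)
right_m = normalized_intersection(A, normalized_intersection(B, C, min), min)
# left_p == right_p (up to rounding), but left_m != right_m
```
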

Technical Report UTEP-CS-23-48, August 2023

Why Attitudes Are Usually Mutual: A Possible Mathematical Explanation

Julio C. Urenda and Vladik Kreinovich

In this paper, we provide a possible mathematical explanation of why people's attitudes to each other are usually mutual: we usually have a good attitude towards those who have good feelings towards us, and we usually have a negative attitude towards those who have negative feelings towards us. Several mathematical explanations of this mutuality have been proposed, but they are based on specific approximate mathematical models of human (and animal) interaction. It is desirable to have a solid mathematical explanation that would not depend on such approximate models. In this paper, we show that a recent mathematical result about relation algebras can lead to such an explanation.

Technical Report UTEP-CS-23-47, August 2023

Towards a Psychologically Natural Relation Between Colors and Fuzzy Degrees

Victor L. Timchenko, Yuriy P. Kondratenko, Olga Kosheleva, Vladik Kreinovich, and Nguyen Hoang Phuong

To appear in: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
Vladik Kreinovich (eds.), *Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

A natural way to speed up computations -- in particular, computations that involve processing fuzzy data -- is to use the fastest possible communication medium: light. Light consists of components of different colors. So, if we use optical color computations to process fuzzy data, we need to associate fuzzy degrees with colors. One of the main features -- and one of the main advantages -- of fuzzy techniques is that the corresponding data has an intuitive natural meaning: this data comes from words of natural language. It is desirable to preserve this naturalness as much as possible. In particular, it is desirable to come up with a natural relation between colors and fuzzy degrees. In this paper, we show that there is exactly one such natural relation, and we describe this relation.

Technical Report UTEP-CS-23-46, August 2023

Industry-Academia Collaboration: Main Challenges and What Can We Do

Olga Kosheleva and Vladik Kreinovich

How can we bridge the gap between industry and academia? How can we make them collaborate more effectively? In this essay, we try to come up with answers to these important questions.

Technical Report UTEP-CS-23-45, August 2023

Why Unit Two-Variable-Per-Inequality (UTVPI) Constraints Are So Efficient to Handle: Intuitive Explanation

Saeid Tizpaz-Niari, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

In general, integer linear programming is NP-hard. However, there exists a class of integer linear programming problems for which an efficient algorithm is possible: the class of so-called unit two-variable-per-inequality (UTVPI) constraints. In this paper, we provide an intuitive explanation for why an efficient algorithm turned out to be possible for this class. Namely, the smaller the class, the more probable it is that a feasible algorithm is possible for this class, and the UTVPI class is indeed the smallest -- in some reasonable sense described in this paper.

Technical Report UTEP-CS-23-44, July 2023

Topological Explanation of Why Complex Numbers Are Needed in Quantum Physics

Julio C. Urenda and Vladik Kreinovich

In quantum computing, we only use states in which all amplitudes are real numbers. So why do we need complex numbers with non-zero imaginary part in quantum physics in general? In this paper, we provide a simple topological explanation for this need, explanation based on the Second Law of Thermodynamics.

Technical Report UTEP-CS-23-43, July 2023

What Was More Frequently Used -- "And" or "Or": Based on Analysis of European Languages

Olga Kosheleva and Vladik Kreinovich

Traditional logic has two main connectives: "and" and "or". A natural question is: which of the two is more frequently used? This question is easy to answer for the current usage of these connectives -- we can simply analyze all the texts -- but what can we say about the past usage? To answer this question, we use the known linguistic fact that, in general, notions that are more frequently used are described by shorter words. It turns out that in most European languages, the word for "and" is shorter than -- or of the same length as -- the word for "or". This seems to indicate that in these languages, "and" was used more frequently. The only four exceptions are the languages of the British Isles and of Greece, where most probably "or" was used more frequently. In this paper, we propose a possible explanation for these exceptions.
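The word-length observation can be checked on a small sample of languages; the word list below is our illustrative selection (with Russian and Greek transliterated), not the paper's full dataset:

```python
# (language, word for "and", word for "or")
words = [
    ("English", "and", "or"),
    ("German",  "und", "oder"),
    ("Spanish", "y",   "o"),
    ("French",  "et",  "ou"),
    ("Russian", "i",   "ili"),
    ("Greek",   "kai", "i"),
]

# languages where "and" is shorter than, or equal in length to, "or":
and_not_longer = [lang for lang, a, o in words if len(a) <= len(o)]
# the exceptions, where "or" is strictly shorter:
exceptions = [lang for lang, a, o in words if len(a) > len(o)]
```

On this sample, the exceptions are exactly English and Greek, matching the pattern described in the abstract.
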

Technical Report UTEP-CS-23-42, July 2023

How to Best Retrain a Neural Network If We Added One More Input Variable

Saeid Tizpaz-Niari and Vladik Kreinovich

To appear in: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
Vladik Kreinovich (eds.), *Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

Often, once we have trained a neural network to estimate the value of a quantity y based on the available values of inputs x_{1}, ..., x_{n}, we learn to measure the values of an additional quantity that has some influence on y. In such situations, it is desirable to re-train the neural network so that it will be able to take this extra value into account. A straightforward idea is to add a new input to the first layer and to update all the weights based on the patterns that include the values of the new input. The problem with this straightforward idea is that, while the change itself is minor, such re-training takes a lot of time, almost as much as the original training. In this paper, we show, both theoretically and experimentally, that in such situations, we can speed up re-training -- practically without decreasing the resulting accuracy -- if we only update some of the weights.

Technical Report UTEP-CS-23-41, July 2023

We Can Always Reduce a Non-Linear Dynamical System to Linear -- at Least Locally -- But Does It Help?

Orsolya Csiszar, Gabor Csiszar, Olga Kosheleva, Vladik Kreinovich, and Nguyen Hoang Phuong

To appear in: Nguyen Hoang Phuong, Nguyen Thi Huyen Chau, and
Vladik Kreinovich (eds.), *Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

Many real-life phenomena are described by dynamical systems. Sometimes, these dynamical systems are linear. For such systems, solutions are well known. In some cases, it is possible to transform a nonlinear system into a linear one by appropriately transforming its variables, and this helps to solve the original nonlinear system. For other nonlinear systems -- even for the simplest ones -- such transformation is not known. A natural question is: which nonlinear systems allow such transformations? In this paper, we show that we can always reduce a nonlinear system to a linear one -- but, in general, it does not help, since the complexity of finding such a reduction is exactly the same as the complexity of solving the original nonlinear system.

Technical Report UTEP-CS-23-40, July 2023

Why Bump Reward Function Works Well In Training Insulin Delivery Systems

Lehel Denes-Fazakas, Laszlo Szilagyi, Gyorgy Eigner, Olga Kosheleva, Vladik Kreinovich, and Nguyen Hoang Phuong

*Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

Diabetes is a disease in which the body can no longer properly regulate the blood glucose level, which can lead to life-threatening situations. To avoid such situations and regulate the blood glucose level, patients with a severe form of diabetes need insulin injections. Ideally, the system should automatically decide when it is best to inject insulin and how much to inject. To find the optimal control, researchers applied machine learning with different reward functions. It turns out that the most effective learning occurred when they used the so-called bump function. In this paper, we provide a possible explanation for this empirical result.
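For reference, the classical smooth bump function is b(x) = exp(−1/(1−x²)) for |x| < 1 and 0 otherwise; whether the cited training used exactly this formula is not stated above, so the sketch below is purely illustrative.

```python
import math

def bump(x):
    # classical smooth bump: positive on (-1, 1), zero outside
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# reward peaks at x = 0 (e.g., x could be a normalized deviation of
# blood glucose from its target value -- a hypothetical quantity here)
print(bump(0.0), bump(0.9), bump(1.5))
```

The reward is largest when the deviation is zero and vanishes (smoothly) once the deviation leaves the acceptable range.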

Technical Report UTEP-CS-23-39, July 2023

How to Estimate Unknown Unknowns: From Cosmic Light to Election Polls

Talha Azfar, Vignesh Ponraj, Vladik Kreinovich, and Nguyen Hoang Phuong

*Machine Learning and Other Soft
Computing Techniques: Biomedical and Related Applications*,
Springer, Cham, Switzerland.

In two different areas of research -- in the study of space light and in the study of voting -- the observed value of the quantity of interest is twice as large as what we would expect. That the observed value is larger makes perfect sense: there are phenomena that we do not take into account in our estimations. However, the fact that the observed value is exactly twice as large deserves an explanation. In this paper, we show that the Laplace Indeterminacy Principle leads to such an explanation.

Technical Report UTEP-CS-23-38, July 2023

Why Resilient Modulus Is Proportional to the Square Root of Unconfined Compressive Strength (UCS): A Qualitative Explanation

Edgar Daniel Rodriguez Velasquez and Vladik Kreinovich

The strength of the pavement is determined by its resilient modulus, i.e., by its ability to withstand (practically) instantaneous stresses caused by the passing traffic. However, the resilient modulus is not easy to measure: its measurement requires special expensive equipment that many labs do not have. So, instead of measuring it, practitioners often measure the easier-to-measure Unconfined Compressive Strength (UCS) -- which describes the effect of a continuously applied force -- and estimate the resilient modulus based on the result of this measurement. An empirical formula shows that the resilient modulus is proportional to the square root of the Unconfined Compressive Strength. In this paper, we provide a possible explanation for this empirical dependence.
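Numerically, the square-root dependence means that doubling the UCS multiplies the estimated resilient modulus by √2 ≈ 1.41, not by 2; in the sketch below, the proportionality coefficient k is a made-up placeholder (real values are calibrated per material).

```python
import math

def resilient_modulus(ucs, k=10.0):
    # empirical dependence from the abstract: M_R proportional to sqrt(UCS);
    # the coefficient k is hypothetical
    return k * math.sqrt(ucs)

print(resilient_modulus(4.0))                           # 20.0
print(resilient_modulus(8.0) / resilient_modulus(4.0))  # sqrt(2)
```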

Technical Report UTEP-CS-23-37, July 2023

Complex Numbers Explain Why in Chinese Tradition, 4 Is Bad But 8 Is Good

Luc Longpre, Olga Kosheleva, and Vladik Kreinovich

In the traditional Chinese culture, 4 is considered to be an unlucky number, while the number 8 is considered to be very lucky. In this paper, we show that both "badness" and "goodness" can be explained if we take into account the role of complex numbers in the analysis of general dynamical systems.

Technical Report UTEP-CS-23-36, July 2023

Methodological Lesson of Pythagorean Triples

Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

There are many right triangles in which all three sides a, b, and c
have integer lengths. The triples (a,b,c) formed by such lengths
are known as Pythagorean triples. Since ancient times, it has been
known how to generate all Pythagorean triples: we can enumerate
primitive Pythagorean triples -- in which the three numbers have no
common divisors -- by considering all pairs of natural numbers
m > n in which m and n have no common divisors, and taking
a = m^{2} − n^{2}, b = 2mn, and c = m^{2} + n^{2}. Multiplying all
elements of a triple by the same number, we can get all other
Pythagorean triples. The proof of this result -- going back to
Euclid -- is technical. In this paper, we provide a commonsense
explanation of this result. We hope that this explanation -- which
is more general than Pythagorean triples -- can lead to new
hypotheses and new results about similar situations.
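The generation procedure described above is easy to turn into code. One well-known detail not spelled out in the abstract: for the resulting triple to be primitive, m and n must also have opposite parity (otherwise, e.g., m = 3, n = 1 gives the non-primitive (8, 6, 10)). A short sketch:

```python
from math import gcd

def primitive_triples(limit):
    """All primitive Pythagorean triples (a, b, c) with c <= limit,
    generated from coprime pairs m > n >= 1 of opposite parity."""
    result = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c <= limit:
                    result.append((a, b, c))
        m += 1
    return result

print(primitive_triples(30))  # includes (3, 4, 5) and (5, 12, 13)
```

Multiplying each primitive triple by 2, 3, ... then yields all remaining (non-primitive) triples, as the abstract notes.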

Technical Report UTEP-CS-23-35, July 2023

Why Deep Learning Is Under-Determined? Why Usual Numerical Methods for Solving Partial Differential Equations Do Not Preserve Energy? The Answers May Be Related to Chevalley-Warning Theorem (and Thus to Fermat Last Theorem)

Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

In this paper, we provide a possible explanation of two seemingly unrelated phenomena: (1) that in deep learning, under-determined systems of equations perform much better than the over-determined ones -- which are typical in data processing -- and (2) that the usual numerical methods for solving partial differential equations do not preserve energy. Our explanation is related to the intuition of Fermat behind his Last Theorem and of Euler about more general statements, intuition that led to the proof of the Chevalley-Warning Theorem in number theory.

Technical Report UTEP-CS-23-34, July 2023

Why 6-Labels Uncertainty Scale in Geosciences: Probability-Based Explanation

Aaron Velasco, Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

To appear in: Patricia Melin and Oscar Castillo (eds.),
*Proceedings of the International Seminar on Computational
Intelligence ISCI'2023*, Tijuana, Mexico, August 30-31, 2023,
Springer.

To describe uncertainty in geosciences, several researchers have recently proposed a 6-labels uncertainty scale, in which one of the labels corresponds to full certainty, one label to the absence of any knowledge, and the remaining four labels correspond to the degrees of confidence from the intervals [0,0.25], [0.25,0.5], [0.5,0.75], and [0.75,1]. Tests of this 6-labels scale indicate that it indeed conveys uncertainty information to geoscientists much more effectively than previously proposed uncertainty schemes. In this paper, we use probability-related techniques to explain this effectiveness.
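The four interval labels can be implemented as a simple lookup table; the label names below are invented for illustration (the abstract does not list the actual wording used on the scale):

```python
def interval_label(degree):
    # hypothetical label names for the four confidence intervals
    table = [(0.0, 0.25, "unlikely"),
             (0.25, 0.5, "about as likely as not"),
             (0.5, 0.75, "likely"),
             (0.75, 1.0, "very likely")]
    for lo, hi, name in table:
        if lo <= degree <= hi:
            return name
    raise ValueError("degree must lie in [0, 1]")

print(interval_label(0.6))
```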

Technical Report UTEP-CS-23-33, July 2023

Fuzzy Mathematics under Non-Minimal "And"-Operations (t-Norms): Equivalence leads to Metric, Order Leads to Kinematic Metric, Topology Leads to Area or Volume

Purbita Jana, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 472-477.

Most formulas analyzed in fuzzy mathematics assume -- explicitly or implicitly -- that the corresponding "and"-operation (t-norm) is the simplest minimum operation. In this paper, we analyze what happens if instead, we use other "and"-operations. It turns out that for such operations, a fuzzification of a mathematical theory naturally leads to a more complex mathematical setting: fuzzification of equivalence relation leads to metric, fuzzification of order leads to kinematic metric, and fuzzification of topology leads to area or volume.

Technical Report UTEP-CS-23-32, July 2023

Fuzzy Techniques Explain the Effectiveness of ReLU Activation Function in Deep Learning

Julio C. Urenda, Olga Kosheleva, and Vladik Kreinovich

To appear in: Patricia Melin and Oscar Castillo (eds.),
*Proceedings of the International Seminar on Computational
Intelligence ISCI'2023*, Tijuana, Mexico, August 30-31, 2023,
Springer.

In the last decades, deep learning has led to spectacular successes. One of the reasons for these successes was the fact that deep neural networks use a special Rectified Linear Unit (ReLU) activation function s(x) = max(0,x). Why this activation function is so successful is largely a mystery. In this paper, we show that common sense ideas -- as formalized by fuzzy logic -- can explain this mysterious effectiveness.

Technical Report UTEP-CS-23-31, July 2023

How to Combine Probabilistic and Fuzzy Uncertainty: Theoretical Explanation of Clustering-Related Empirical Result

Laszlo Szilagyi, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 468-471.

In contrast to crisp clustering techniques that assign each object to a class, fuzzy clustering algorithms assign, to each object and to each class, a degree to which this object belongs to this class. In the most widely used fuzzy clustering algorithm -- fuzzy c-means -- for each object, degrees corresponding to different classes add up to 1. From this viewpoint, these degrees act as probabilities. There exist alternative fuzzy-based clustering techniques in which, in line with the general idea of the fuzzy set, the largest of the degrees is equal to 1. In some practical situations, the probability-type fuzzy clustering works better; in other situations, the more fuzzy-type technique leads to a more adequate clustering. It is therefore desirable to combine the two techniques, so that one of them will cover the situations where the other method does not work so well. Such combination methods have indeed been proposed. An empirical comparison has shown that out of all these combined methods, the most effective one is the method in which we use the product of the probability and the fuzzy degree. In this paper, we provide a theoretical explanation for this empirical result.
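The difference between the two types of degrees, and the product-based combination, can be shown on toy numbers (illustrative values, not an actual clustering run):

```python
# degrees of one object's membership in three classes
prob = [0.6, 0.3, 0.1]    # probability-type: degrees add up to 1
fuzzy = [1.0, 0.5, 0.2]   # possibility-type: the largest degree equals 1

# the empirically best combination from the abstract: the product
combined = [p * f for p, f in zip(prob, fuzzy)]
print(combined)
```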

Technical Report UTEP-CS-23-30, July 2023

Which Fuzzy Implications Operations Are Polynomial? A Theorem Proves That This Can Be Determined by a Finite Set of Inequalities

Sebastia Massanet, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 462-467.

To adequately represent human reasoning in computer-based systems, it is desirable to select fuzzy operations that are as close to human reasoning as possible. In general, every real-valued function can be approximated, with any desired accuracy, by polynomials; it is therefore reasonable to use polynomial fuzzy operations as the appropriate approximations. We thus need to select, among all polynomial operations that satisfy the corresponding properties -- like associativity -- the ones that best fit the empirical data. The challenge here is that properties like associativity mean satisfying infinitely many constraints (corresponding to infinitely many possible triples of values), while most effective optimization techniques assume that the number of equality or inequality constraints is finite. Thus, it is desirable to find, for each corresponding family of infinitely many constraints, an equivalent finite set of constraints. Such sets have been found for many fuzzy operations -- e.g., for implication operations represented by polynomials of degree 4. In this paper, we show that such equivalent finite sets always exist, and we describe an algorithm for generating these sets.

Technical Report UTEP-CS-23-29, July 2023

Updated version UTEP-CS-23-29a, July 2023

How to Make Decision Under Interval Uncertainty: Description of All Reasonable Partial Orders on the Set of All Intervals

Tiago M. Costa, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 457-461.

In many practical situations, we need to make a decision while for each alternative, we only know the corresponding value of the objective function with interval uncertainty. To help a decision maker in this situation, we need to know the (in general, partial) order on the set of all intervals that corresponds to the preferences of the decision maker. For this purpose, in this paper, we provide a description of all such partial orders -- under some reasonable conditions. It turns out that each such order is characterized by two linear inequalities relating the endpoints of the corresponding intervals, and that all such orders form a 2-parametric family.

Original file UTEP-CS-23-29 in pdf

Updated version UTEP-CS-23-29a in pdf

Technical Report UTEP-CS-23-28, July 2023

How to Propagate Interval (and Fuzzy) Uncertainty: Optimism-Pessimism Approach

Vinicius F. Wasques, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 451-456.

In many practical situations, inputs to a data processing algorithm are known with interval uncertainty, and we need to propagate this uncertainty through the algorithm, i.e., estimate the uncertainty of the result of data processing. Traditional interval computation techniques provide guaranteed estimates, but from the practical viewpoint, these bounds are too pessimistic: they take into account highly improbable worst-case situations when all the measurement and estimation errors happen to be strongly correlated. In this paper, we show that a natural idea of having more realistic estimates leads to the use of the so-called interactive addition of intervals, a technique that has already been successfully used to process interval uncertainty. Thus, we provide a new justification for this technique. If we use a known interpretation of a fuzzy set as a nested family of intervals -- its alpha-cuts -- then we can naturally extend our results to the case of fuzzy uncertainty.

Technical Report UTEP-CS-23-27, June 2023

Dialogs Re-enacted Across Languages, Version 2

Nigel G. Ward, Jonathan E. Avila, Emilia Rivas, and Divette Marco

To support machine learning of cross-language prosodic mappings and other ways to improve speech-to-speech translation, we present a protocol for collecting closely matched pairs of utterances across languages, a description of the resulting data collection and its public release, and some observations and musings. This report is intended for:

- people using this corpus
- people extending this corpus
- people designing similar collections of bilingual dialog data.

Technical Report UTEP-CS-23-26, June 2023

Updated version UTEP-CS-23-26a, September 2023

Why Fuzzy Control Is Often More Robust (and Smoother): A Theoretical Explanation

Orsolya Csiszar, Gabor Csiszar, Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

To appear in *Proceedings of the IEEE Series of Symposia on
Computational Intelligence SSCI 2023*, Mexico City, Mexico,
December 6-8, 2023.

In many practical situations, practitioners use easier-to-compute fuzzy control to approximate the more-difficult-to-compute optimal control. As expected, for many characteristics, this approximate control is slightly worse than the optimal control it approximates. However, with respect to robustness or smoothness, the approximating fuzzy control is often better than the original one. In this paper, we provide a theoretical explanation for this somewhat mysterious empirical phenomenon.

Original file UTEP-CS-23-26 in pdf

Updated version UTEP-CS-23-26a in pdf

Technical Report UTEP-CS-23-25, June 2023

Updated version UTEP-CS-23-25a, September 2023

Which Activation Function Works Best for Training Artificial Pancreas: Empirical Fact and Its Theoretical Explanation

Lehel Denes-Fazakas, Laszlo Szilagyi, Gyorgy Eigner, Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

To appear in *Proceedings of the IEEE Series of Symposia on
Computational Intelligence SSCI 2023*, Mexico City, Mexico,
December 6-8, 2023.

One of the most effective ways to help patients at dangerous stages of diabetes is an artificial pancreas, a device that constantly monitors the patient's blood sugar level and injects insulin based on this level. A patient's reaction to insulin is highly individualized, so the artificial pancreas needs to be trained on each patient. It turns out that the best training results are attained when, instead of the usual ReLU neurons, we use their minor modification known as Exponential Linear Units (ELU). In this paper, we provide a theoretical explanation for the empirically observed effectiveness of ELUs.
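For reference, the standard definition of ELU (a well-known fact about ELUs, not a detail taken from this report): it coincides with ReLU for positive inputs, but for negative inputs it is smooth and negative-valued, tending to −α instead of being identically 0.

```python
import math

def relu(x):
    return max(0.0, x)

def elu(x, alpha=1.0):
    # standard ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

# identical for positive inputs, different for negative ones
print(relu(3.0), elu(3.0))    # 3.0 3.0
print(relu(-2.0), elu(-2.0))  # 0.0 vs. about -0.865
```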

Original file UTEP-CS-23-25 in pdf

Updated version UTEP-CS-23-25a in pdf

Technical Report UTEP-CS-23-24, June 2023

Selecting the Most Adequate Fuzzy Operation for Explainable AI: Empirical Fact and Its Possible Theoretical Explanation

Orsolya Csiszar, Gabor Csiszar, Martine Ceberio, and Vladik Kreinovich

A reasonable way to make AI results explainable is to approximate the corresponding deep-learning-generated function by a simple expression formed by fuzzy operations. Experiments on real data show that out of all easy-to-compute fuzzy operations, the best approximation is attained if we use the operation a + b − 0.5 (limited to the interval [0,1]). In this paper, we provide a possible theoretical explanation for this empirical result.
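The operation in question is one line of code; a minimal sketch:

```python
def combine(a, b):
    # the operation from the abstract: a + b - 0.5, clipped to [0, 1]
    return min(1.0, max(0.0, a + b - 0.5))

print(combine(0.9, 0.8))  # clipped from 1.2 down to 1.0
print(combine(0.1, 0.2))  # clipped from -0.2 up to 0.0
```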

Technical Report UTEP-CS-23-23, June 2023

Why Softmax? Because It Is the Only Consistent Approach to Probability-Based Classification

Anatole Lokshin, Olga Kosheleva, and Vladik Kreinovich

In many practical problems, the most effective classification techniques are based on deep learning. In this approach, once the neural network generates values corresponding to different classes, these values are transformed into probabilities by using the softmax formula. Researchers have tried other transformations, but they did not work as well as softmax. A natural question is: why is softmax so effective? In this paper, we provide a possible explanation for this effectiveness: namely, we prove that softmax is the only consistent approach to probability-based classification. In precise terms, it is the only approach for which two reasonable probability-based ideas -- Least Squares and Bayesian statistics -- always lead to the exact same classification.
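For concreteness, the softmax formula maps a tuple of real values to probabilities via p_i = exp(v_i) / Σ_j exp(v_j); the sketch below uses the usual max-subtraction trick for numerical stability (standard practice, not mentioned in the abstract):

```python
import math

def softmax(values):
    m = max(values)  # subtracting the max avoids overflow in exp
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([2.0, 1.0, 0.1])
print([round(x, 3) for x in p])  # probabilities summing to 1
```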

Technical Report UTEP-CS-23-22, May 2023

Updated version UTEP-CS-23-22a, June 2023

Is Fully Explainable AI Even Possible: Fuzzy-Based Analysis

Miroslav Svitek, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 259-264.

One of the main limitations of many current AI-based
decision-making systems is that they do not provide any
understandable explanations of how they came up with the produced
decision. Taking into account that these systems are not perfect,
that their decisions are sometimes far from good, the absence of an
explanation makes it difficult to separate good decisions from
suspicious ones. Because of this, many researchers are working on
making AI explainable. In some application areas -- e.g., in chess
-- practitioners get an impression that there is a limit to
understandability, that some decisions remain *inhuman* -- not
explainable. In this paper, we use fuzzy techniques to analyze this
situation. We show that for relatively simpler systems, explainable
models are indeed optimal approximate descriptions, while for more
complex systems, there is a limit on the adequacy of explainable
models.

Original file UTEP-CS-23-22 in pdf

Updated version UTEP-CS-23-22a in pdf

Technical Report UTEP-CS-23-21, May 2023

Updated version UTEP-CS-23-21a, August 2023

Fast -- Asymptotically Optimal -- Methods for Determining the Optimal Number of Features

Saeid Tizpaz-Niari, Luc Longpre, Olga Kosheleva, and Vladik Kreinovich

Published in: Van-Nam Huynh, Bac H. Le, Katsuhiro Honda, Masahiro
Inuiguchi, and Youji Kohda (eds.), *Proceedings of the Tenth
International Symposium on
Integrated Uncertainty in Knowledge Modelling and Decision Making
IUKM 2023*, Kanazawa, Japan, November 2-4, 2023, Vol. 1,
pp. 123-128.

In machine learning -- and in data processing in general -- it is very important to select the proper number of features. If we select too few, we miss important information and do not get good results, but if we select too many, this will include many irrelevant ones that only bring noise and thus again worsen the results. The usual method of selecting the proper number of features is to add features one by one until the quality stops improving and starts deteriorating again. This method works, but it often takes too much time. In this paper, we propose faster -- even asymptotically optimal -- methods for solving the problem.
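The usual method can be sketched in a few lines; here, quality(k) is a hypothetical stand-in for, e.g., a cross-validation score with k features:

```python
def quality(k):
    # made-up quality curve: improves up to 5 features, then degrades
    return -(k - 5) ** 2

def select_num_features(max_features):
    best = 1
    for k in range(2, max_features + 1):
        if quality(k) <= quality(k - 1):
            break  # quality stopped improving: stop adding features
        best = k
    return best

print(select_num_features(20))  # 5
```

This baseline evaluates quality(k) once per added feature; the paper's proposed methods improve on this linear scan.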

Original file UTEP-CS-23-21 in pdf

Updated version UTEP-CS-23-21a in pdf

Technical Report UTEP-CS-23-20, May 2023

Updated version UTEP-CS-23-20a, June 2023

Logical Inference Inevitably Appears: Fuzzy-Based Explanation

Julio Urenda, Olga Kosheleva, Vladik Kreinovich, and Orsolya Csiszar

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 265-268.

Many thousands of years ago, our primitive ancestors did not have the ability to reason logically and to perform logical inference. This ability appeared later. A natural question is: was this appearance inevitable -- or was it a lucky incident that could have been missed? In this paper, we use fuzzy techniques to provide a possible answer to this question. Our answer is: yes, the appearance of logical inference is inevitable.

Original file UTEP-CS-23-20 in pdf

Updated version UTEP-CS-23-20a in pdf

Technical Report UTEP-CS-23-19, May 2023

Updated version UTEP-CS-23-19a, August 2023

Why Inverse Layers in Pavement? Why Zipper Fracking? Why Interleaving in Education? A General Explanation

Edgar Daniel Rodriguez Velasquez, Aaron Velasco, Olga Kosheleva, and Vladik Kreinovich

Published in: Van-Nam Huynh, Bac H. Le, Katsuhiro Honda, Masahiro
Inuiguchi, and Youji Kohda (eds.), *Proceedings of the Tenth
International Symposium on
Integrated Uncertainty in Knowledge Modelling and Decision Making
IUKM 2023*, Kanazawa, Japan, November 2-4, 2023, Vol. 1, pp.
129-138.

In many practical situations, if we split our efforts into two disconnected chunks, we get better results: a pavement is stronger if, instead of a single strengthening layer, we place two parts of this layer separated by not-so-strong layers; teaching is more effective if, instead of concentrating a topic in a single time interval, we split it into two parts separated in time; etc. In this paper, we provide a general explanation for all these phenomena.

Original file UTEP-CS-23-19 in pdf

Updated version UTEP-CS-23-19a in pdf

Technical Report UTEP-CS-23-18, April 2023

Updated version UTEP-CS-23-18a, June 2023

Natural Color Interpretation of Interval-Valued Fuzzy Degrees

Victor L. Timchenko, Yury P. Kondratenko, Vladik Kreinovich, and Olga Kosheleva

Published in *Proceedings of the 20th World Congress of the
International Fuzzy Systems Association IFSA'2023*, Daegu,
South Korea, August 20-23, 2023, pp. 254-258.

Intuitively, interval-valued fuzzy degrees are more adequate for representing expert uncertainty than the traditional [0,1]-based ones. Indeed, the very need for fuzzy degrees comes from the fact that experts often cannot describe their opinion in terms of precise numbers; instead, they use imprecise ("fuzzy") words from natural language like "small". In such situations, it is strange to expect the same expert to be able to provide an exact number describing his/her degree of certainty; it is more natural to ask this expert to mark a whole interval (or even, more generally, a fuzzy set of possible degrees). In spite of this intuitive adequacy, and in spite of several successful applications of interval-valued degrees, most applications of fuzzy techniques are still based on the traditional [0,1]-based degrees. According to researchers who studied this puzzling phenomenon, the problem is that while people are accustomed to marking their opinion on a numerical scale, most people do not have any experience of using intervals. To ease people's use of interval-valued degrees, we propose to take into account that the set of all interval-valued degrees is, in some reasonable sense, equivalent to the set of colors -- thus, we can represent degrees as appropriate colors. This idea can be naturally extended to Z-numbers -- and it also provides an additional argument why interval-valued degrees are more adequate, at least in the analysis of complex phenomena.

Original file UTEP-CS-23-18 in pdf

Updated version UTEP-CS-23-18a in pdf

Technical Report UTEP-CS-23-17, April 2023

How People Make Decisions Based on Prior Experience: Formulas of Instance-Based Learning Theory (IBLT) Follow from Scale Invariance

Palvi Aggarwal, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 312-319.

To better understand human behavior, we need to understand how people make decisions, how people select one of the possible actions. This selection is usually based on predicting the consequences of different actions, and these predictions are, in their turn, based on past experience. For example, consequences that occurred more frequently in the past are viewed as more probable. However, this is not just about frequency: recent observations are usually given more weight than earlier ones. Researchers have discovered semi-empirical formulas that describe our predictions reasonably well; these formulas form the basis of the Instance-Based Learning Theory (IBLT). In this paper, we show that these semi-empirical formulas can be derived from the natural idea of scale invariance.

Technical Report UTEP-CS-23-16, April 2023

What Do Goedel's Theorem and Arrow's Theorem Have in Common: A Possible Answer to Arrow's Question

Miroslav Svitek, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 338-343.

Kenneth Arrow, the renowned author of the Impossibility Theorem that explains the difficulty of group decision making, noticed that there is some commonsense similarity between his result and Goedel's theorem about incompleteness of axiomatic systems. Arrow asked if it is possible to describe this similarity in more precise terms. In this paper, we make the first step towards this description. We show that in both cases, the impossibility result disappears if we take into account probabilities. Namely, we take into account that we can consider probabilistic situations, that we can make probabilistic conclusions, and that we can make probabilistic decisions (when we select different alternatives with different probabilities).

Technical Report UTEP-CS-23-15, April 2023

High-Impact Low-Probability Events Are Even More Important Than It Is Usually Assumed

Aaron Velasco, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 344-349.

A large proportion of undesirable events like earthquakes, floods, and tornados occur in zones where these events are frequent. However, a significant number of such events occur in other zones, where such events are rare. For example, while most major earthquakes occur in the vicinity of major faults, i.e., on the border between two tectonic plates, some strong earthquakes also occur inside plates. We want to mitigate all undesirable events, but our resources are limited. So, to allocate these resources, we need to decide which ones are more important. For this decision, a natural idea is to use the product of the probability of the undesirable event and the possible damage caused by this event. A natural way to estimate probability is to use the frequency of such events in the past. This works well for high-probability events like earthquakes in a seismic zone near a fault. However, for high-impact low-probability events, the frequency is small and, as a result, the actual probability may be very different from the observed frequency. In this paper, we show how to take this difference between frequency and probability into account. We also show that if we do take this difference into account, then high-impact low-probability events turn out to be even more important than it is usually assumed.

Technical Report UTEP-CS-23-14, April 2023

Wormholes, Superfast Computations, and Selivanov's Theorem

Olga Kosheleva and Vladik Kreinovich

While modern computers are fast, there are still many practical
problems that require even faster computers. It turns out that on
the fundamental level, one of the main factors limiting computation
speed is the fact that, according to modern physics, the speed of
all processes is limited by the speed of light. The good news is
while the corresponding limitation is very severe in Euclidean
geometry, it can be more relaxed in (at least some) non-Euclidean
spaces, and, according to modern physics, the physical space is not
Euclidean. The differences from Euclidean character are especially
large on micro-level, where quantum effects need to be taken into
account. To analyze how we can speed up computations, it is
desirable to reconstruct the actual distance values --
corresponding to all possible paths -- from the values that we
actually measure -- which correspond only to macro-paths and thus,
provide only the upper bound for the distance. In our previous
papers -- including our joint paper with Victor Selivanov -- we
provided an explicit formula for such a reconstruction. But for
this formula to be useful, we need to analyze how algorithmic this
reconstruction is. In this paper, we show that while in general,
no reconstruction algorithm is possible, an algorithm *is*
possible if we impose a lower limit on the distances between steps
in a path. So, hopefully, this can help to eventually come up with
faster computations.

Technical Report UTEP-CS-23-13, April 2023

People Prefer More Information About Uncertainty, But Perform Worse When Given This Information: An Explanation of the Paradoxical Phenomenon

Jieqiong Zhao, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 350-356.

In a recent experiment, decision makers were asked whether they would prefer having more information about the corresponding situation. They confirmed this preference, and such information was provided to them. However, strangely, the decisions of those who received this information were worse than the decisions of the control group -- that did not get this information. In this paper, we provide an explanation for this paradoxical situation.

Technical Report UTEP-CS-23-12, April 2023

Integrity First, Service Before Self, and Excellence: Core Values of US Air Force Naturally Follow from Decision Theory

Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 320-324.

By analyzing data both from peace time and from war time, the US Air Force came up with three principles that determine success: integrity, service before self, and excellence. We show that these three principles naturally follow from decision theory, a theory that describes how a rational person should make decisions.

Technical Report UTEP-CS-23-11, April 2023

Conflict Situations Are Inevitable When There Are Many Participants: A Proof Based on the Analysis of Aumann-Shapley Value

Sofia Holguin and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 325-330.

When collaboration of several people results in a business success, an important issue is how to fairly divide the gain between the participants. In principle, the solution to this problem has been known since the 1950s: natural fairness requirements lead to the so-called Shapley value. However, the computation of the Shapley value requires that we can estimate, for each subset of the set of all participants, how much they would have gained if they worked together without the others. It is possible to perform such estimates when we have a small group of participants, but for a big company with thousands of employees this is not realistic. To deal with such situations, Nobelists Aumann and Shapley came up with a natural continuous approximation to the Shapley value -- just like a continuous model of a solid body helps, since we cannot take into account all individual atoms. Specifically, they defined the Aumann-Shapley value as a limit of the Shapley values of discrete approximations: in some cases this limit exists, in some it does not. In this paper, we show that, in some reasonable sense, for almost all continuous situations the limit does not exist: we get different values depending on how we refine the discrete approximations. Our conclusion is that in such situations, since computing a fair division is not feasible, conflicts are inevitable.
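The combinatorial blow-up behind this infeasibility is easy to see in code. The following minimal sketch (with a hypothetical symmetric 3-player gain function, not taken from the paper) computes the discrete Shapley value by averaging each participant's marginal contribution over all orderings -- a computation that already queries the gain function on every subset of participants, which is exactly what becomes unrealistic for thousands of employees:

```python
from itertools import permutations
from math import factorial

def shapley_values(players, gain):
    """Shapley value of each player: the average, over all orderings of
    the players, of the player's marginal contribution to the coalition
    of those who precede it."""
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = gain(frozenset(coalition))
            coalition.add(p)
            values[p] += gain(frozenset(coalition)) - before
    n_orderings = factorial(len(players))
    return {p: v / n_orderings for p, v in values.items()}

# Hypothetical symmetric game: a singleton gains 10, a pair 60,
# all three participants together 100.
def gain(coalition):
    return {0: 0, 1: 10, 2: 60, 3: 100}[len(coalition)]

print(shapley_values(["A", "B", "C"], gain))
```

For this symmetric game, each participant gets 100/3; the point is that the number of orderings (and of subsets to estimate) grows exponentially with the number of participants.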

Technical Report UTEP-CS-23-10, April 2023

Towards Decision Making Under Interval Uncertainty

Juan A. Lopez and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 335-337.

In many real-life situations, we need to make a decision. In many cases, we know the optimal decision in situations when we know the exact value of the corresponding quantity x. However, often, we do not know the exact value of this quantity, we only know bounds on this value -- i.e., we know an interval containing x. In this case, we need to select a decision corresponding to some value from this interval. The selected value will, in general, be different from the actual (unknown) value of this quantity. As a result, the quality of our decision will be lower than in the perfect case when we know the value x. Which value should we select in this case? In this paper, we provide a decision-theory-based recommendation for this selection.
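As an illustration of this setup (using a minimax criterion and a quadratic loss as working assumptions -- the paper's actual recommendation may differ), one can compare candidate selections by their worst-case loss over the interval:

```python
def worst_case_loss(s, lo, hi):
    """Worst-case quadratic loss of selecting value s when the true
    value can be anywhere in [lo, hi]; the loss (s - x)**2 is largest
    at one of the interval's endpoints."""
    return max((s - lo) ** 2, (s - hi) ** 2)

def best_selection(lo, hi, steps=1000):
    """Grid search for the selection with the smallest worst-case loss."""
    grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return min(grid, key=lambda s: worst_case_loss(s, lo, hi))

print(best_selection(2.0, 4.0))  # the midpoint minimizes worst-case quadratic loss
```

Under these assumptions the midpoint of the interval wins; other loss functions or criteria would lead to other selections.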

Technical Report UTEP-CS-23-09, March 2023

Foundations of Neural Networks Explain the Empirical Success of the "Surrogate" Approach to Ordinal Regression -- and Recommend What Next

Salvador Robles Herrera, Martine Ceberio, and Vladik Kreinovich

Recently, a new efficient semi-heuristic statistical method -- called the Surrogate Approach -- has been proposed for dealing with ordinal regression problems. How can we explain this empirical success? And since this method is only an approximation to reality, what can we recommend if there is a need for a more accurate approximation? In this paper, we show that this empirical success can be explained by the same arguments that explain the empirical success of neural networks -- and these arguments can also provide us with possible more general techniques (that will hopefully lead to more accurate approximations of real-life phenomena).

Technical Report UTEP-CS-23-08, March 2023

Why Gliding Symmetry Used to Be Prevalent in Biology But Practically Disappeared

Julio C. Urenda and Vladik Kreinovich

At present, many living creatures have symmetries; in particular, the left-right symmetry is ubiquitous. Interestingly, 600 million years ago, very few living creatures had the left-right symmetry: most of them had a gliding symmetry -- symmetry with respect to a shift along a line followed by reflection in this line. This symmetry is rarely seen in living creatures today. In this paper, we provide a physics-based geometric explanation for this symmetry change: we explain both why gliding symmetry was ubiquitous, and why at present, it is rarely observed, while the left-right symmetry is prevalent.

Technical Report UTEP-CS-23-07, March 2023

The World Is Cognizable: An Argument Based on Hoermander's Theorem

Miroslav Svitek, Olga Kosheleva and Vladik Kreinovich

Is the world cognizable? Is it, in principle, possible to predict the future state of the world based on the measurements and observations performed in a local area -- e.g., in the Solar system? In this paper, we use general physicists' principles and a mathematical theorem about partial differential equations to show that such prediction is, indeed, theoretically possible.

Technical Report UTEP-CS-23-06, March 2023

Updated version UTEP-CS-23-06a, April 2023

Everything Is a Matter of Degree: The Main Idea Behind Fuzzy Logic Is Useful in Geosciences and in Authorship

Christian Servin, Aaron Velasco, Edgar Daniel Rodriguez Velasquez, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 217-227.

This paper presents two applications of the general principle that everything is a matter of degree -- the principle that underlies fuzzy techniques. The first -- qualitative -- application helps explain the fact that while most earthquakes occur close to faults (borders between tectonic plates or terranes), earthquakes have also been observed in areas which are far away from the known faults. The second -- more quantitative -- application is to the problem of which of the collaborators should be listed as authors and which should be simply thanked in the paper. We argue that the best answer to this question is to explicitly state the degree of authorship -- in contrast to the usual yes-no approach. We also show how to take into account that this degree can be estimated only with some uncertainty -- i.e., that we need to deal with interval-valued degrees.

Original file UTEP-CS-23-06 in pdf

Updated version UTEP-CS-23-06a in pdf

Technical Report UTEP-CS-23-05, February 2023

Updated version UTEP-CS-23-05a, April 2023

Causality: Hypergraphs, Matter of Degree, Foundations of Cosmology

Cliff Joslyn, Andres Ortiz-Munoz, Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, and Vladik Kreinovich

Published in: Kelly Cohen, Nicholas Ernest, Barnabas Bede, and Vladik Kreinovich (Eds.), "Fuzzy Information Processing 2023, Proceedings of the 2023 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2023", Cincinnati, Ohio, May 31 - June 2, 2023, Springer Lecture Notes in Networks and Systems, 2023, Vol. 751, pp. 279-289.

The notion of causality is very important in many application areas. Because of this importance, there are several formalizations of this notion in physics and in AI. Most of these definitions describe causality as a crisp ("yes"-"no") relation between two events or two processes -- cause and effect. However, such descriptions do not fully capture the intuitive idea of causality: first, often, several conditions need to be present for an effect to occur, and, second, the effect is often a matter of degree. In this paper, we show how to modify the current description of causality so as to take both these phenomena into account -- in particular, by extending the notion of a directed acyclic graph to hypergraphs. As a somewhat unexpected side effect of our analysis, we get a natural explanation of why -- in contrast to the space-time of Special Relativity, in which the division into space and time depends on the observer -- in cosmological solutions there is a clear absolute separation between space and time.
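The hypergraph idea can be illustrated with a small sketch (the data structure, event names, and degrees below are our hypothetical illustration, not the paper's formalism): a causal hyperedge connects a *set* of jointly required causes to an effect, with a degree in [0, 1] expressing the extent to which the effect occurs when all causes are present.

```python
from typing import FrozenSet, NamedTuple

class HyperEdge(NamedTuple):
    causes: FrozenSet[str]  # all of these must be present jointly
    effect: str
    degree: float           # extent to which the effect then occurs

edges = [
    HyperEdge(frozenset({"spark", "fuel", "oxygen"}), "fire", 0.9),
    HyperEdge(frozenset({"rain"}), "wet_ground", 1.0),
]

def effect_degrees(present, edges):
    """Degree of each effect, given the set of present events: a
    hyperedge fires only if *all* its causes are present."""
    out = {}
    for e in edges:
        if e.causes <= present:
            out[e.effect] = max(out.get(e.effect, 0.0), e.degree)
    return out

print(effect_degrees({"spark", "fuel", "oxygen"}, edges))
```

Here a spark alone causes nothing -- fuel and oxygen are also required -- which is exactly the many-causes aspect that a plain directed graph cannot express.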

Original file UTEP-CS-23-05 in pdf

Updated version UTEP-CS-23-05a in pdf

Technical Report UTEP-CS-23-04, February 2023

SUCCESS (Studying Underlying Characteristics of Computing and Engineering Student Success) Survey: Non-Cognitive and Affective Profiles in Engineering and Computing Students at UTEP (2018-2022)

Sanga Kim, Christian Teran Lopez, Andres Segura, and Gabriel Miki

Technical Report UTEP-CS-23-03, January 2023

Updated version UTEP-CS-23-03a, April 2023

Interval-Valued and Set-Valued Extensions of Discrete Fuzzy Logics, Belnap Logic, and Color Optical Computing

Victor L. Timchenko, Yury P. Kondratenko, and Vladik Kreinovich

Published in: Sebastia Massanet, Susana Montes,
Daniel Ruiz-Aguilera, and Manuel Gonzalez-Hidalgo (Eds.), *Fuzzy
Logic and Technology, and Aggregation Operators. Proceedings of
the 13th Conference of the European Society for Fuzzy Logic and
Technology EUSFLAT 2023 and 12th International Summer School on
Aggregation Operators AGOP 2023, Palma de Mallorca, Spain,
September 4-8, 2023*, Springer Lecture Notes in Computer Science,
2023, Vol. 14069, pp. 297-303, doi
https://doi.org/10.1007/978-3-031-39965-7_25

It has been recently shown that in some applications, e.g., in ship navigation near a harbor, it is convenient to use combinations of basic colors -- red, green, and blue -- to represent different fuzzy degrees. In this paper, we provide a natural explanation for this empirical success: namely, we show that it is reasonable to consider discrete fuzzy logics, that it is reasonable to consider their interval-valued and set-valued extensions, and that a set-valued extension of the 3-valued logic is naturally equivalent to the use of color combinations.
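The set-valued encoding can be sketched as follows (the particular assignment of colors to truth values is our hypothetical illustration, not necessarily the one used in the paper): each of the three truth values gets one basic color, so a set of possible truth values becomes a combination of colors -- the 8 subsets correspond to the 8 color combinations, including "no color" for the empty set.

```python
# Hypothetical assignment of one basic color per truth value
# of a 3-valued logic.
COLOR_OF = {"false": "red", "unknown": "green", "true": "blue"}

def encode(truth_set):
    """Encode a set of possible truth values as a set of colors."""
    return {COLOR_OF[t] for t in truth_set}

def decode(colors):
    """Recover the set of possible truth values from a color combination."""
    inv = {c: t for t, c in COLOR_OF.items()}
    return {inv[c] for c in colors}

# A value known to be either "true" or "unknown":
print(encode({"true", "unknown"}))
```

Since the encoding is a bijection between subsets and color combinations, decoding recovers the original set exactly.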

Original file UTEP-CS-23-03 in pdf

Updated version UTEP-CS-23-03a in pdf

Technical Report UTEP-CS-23-02, January 2023

Updated version UTEP-CS-23-02a, April 2023

Why Fractional Fuzzy

Mehran Mazandarani, Olga Kosheleva, and Vladik Kreinovich

Published in: Sebastia Massanet, Susana Montes,
Daniel Ruiz-Aguilera, and Manuel Gonzalez-Hidalgo (Eds.), *Fuzzy
Logic and Technology, and Aggregation Operators. Proceedings of
the 13th Conference of the European Society for Fuzzy Logic and
Technology EUSFLAT 2023 and 12th International Summer School on
Aggregation Operators AGOP 2023, Palma de Mallorca, Spain,
September 4-8, 2023*, Springer Lecture Notes in Computer Science,
2023, Vol. 14069, pp. 285-296, doi
https://doi.org/10.1007/978-3-031-39965-7_24

In many practical situations, control experts can only formulate their experience by using imprecise ("fuzzy") words from natural language. To incorporate this knowledge into automatic controllers, Lotfi Zadeh came up with a methodology that translates the informal expert statements into a precise control strategy. This methodology -- and its subsequent modifications -- is known as fuzzy control. Fuzzy control often leads to a reasonable control -- and we can get even better control results by tuning the resulting control strategy on the actual system. There are many parameters that can be changed during tuning, so tuning is usually rather time-consuming. Recently, it was empirically shown that in many cases, quite good results can be attained by using a special 1-parametric tuning procedure called fractional fuzzy inference -- we get up to 40% improvement just by selecting the proper value of a single parameter. In this paper, we provide a theoretical explanation of why fractional fuzzy inference works so well.

Original file UTEP-CS-23-02 in pdf

Updated version UTEP-CS-23-02a in pdf

Technical Report UTEP-CS-23-01, January 2023

Designing an Optimal Medicine Cocktail Is NP-Hard

Luc Longpre and Vladik Kreinovich

In many cases, a combination of different drugs -- known as a medicine cocktail -- is more effective against a disease than each individual drug. It is desirable to find the most effective cocktail. This problem can be naturally formulated as a problem of maximizing a quadratic expression under the condition that all the unknowns (concentrations of different medicines) are non-negative. At first glance, it may seem that this problem is feasible -- since a similar economic problem of finding the optimal investment portfolio is known to be feasible. However, it turns out that the cocktail problem is different: it is NP-hard.
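To make the optimization problem concrete, here is a minimal sketch (the coefficient values are hypothetical, not from the paper): effectiveness is a quadratic expression in the non-negative concentrations, and for just two drugs a crude grid search still works -- the NP-hardness result says that no method scales to the general case of many interacting drugs.

```python
def effectiveness(x, b, Q):
    """Quadratic effectiveness: sum_i b_i x_i + sum_{i,j} q_ij x_i x_j,
    where x_i >= 0 are the concentrations of the individual drugs."""
    n = len(x)
    val = sum(b[i] * x[i] for i in range(n))
    val += sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return val

# Two hypothetical drugs: each helps on its own (b), effectiveness
# saturates (negative diagonal of Q), and the pair interacts
# synergistically (positive off-diagonal terms).
b = [1.0, 1.0]
Q = [[-1.0, 0.5],
     [0.5, -1.0]]

# Crude grid search over concentrations in [0, 2] -- fine for n = 2.
best = max(((i * 0.01, j * 0.01) for i in range(201) for j in range(201)),
           key=lambda x: effectiveness(x, b, Q))
print(best, effectiveness(best, b, Q))
```

For these coefficients the optimum is at concentrations (1, 1); the grid has about 40,000 points here, but an analogous grid over n drugs has exponentially many points, consistent with the NP-hardness of the general problem.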