Published in: Raffaele Pisano (ed.), Nanoscience & Nanotechnologies. Critical Problems, Science in Society, Historical Perspectives, Springer, Cham, Switzerland, 2025, pp. 421-430.
There are many current and prospective positive aspects of nanotechnology. However, while we look forward to its future successes, we need to keep our eyes open and be prepared for what will truly be a future shock: quantum computing – an inevitable part of nanotechnology – will enable future generations to read all our encrypted messages and thus to learn everything that we wanted to keep secret. This will truly be a Judgement Day, when all our sins will be open to everyone. How will we react to it? Will this destroy our civilization? Let us hope that civilization will survive – as it has survived many calamities so far. But to survive, we need to be prepared, we need to know what lies ahead. The earlier we start seriously thinking about this future shock, the better prepared we will be.
File UTEP-CS-22-128 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 401-421.
Interval computations usually deal with the case of epistemic uncertainty, when the only information that we have about the value of a quantity is that this value is contained in a given interval. However, intervals can also represent aleatory uncertainty -- when we know that each value from this interval is actually attained for some object at some moment of time. In this paper, we analyze how to take such aleatory uncertainty into account when processing data. We show that in the case when different quantities are independent, we can use the same formulas for dealing with aleatory uncertainty as for epistemic uncertainty. We also provide formulas for processing aleatory intervals in situations when we have no information about the dependence between the input quantities.
File UTEP-CS-22-127 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 99-104.
In many practical situations, for each alternative i, we do not know the corresponding gain xi; we only know the interval [li, ui] of possible gains. In such situations, a reasonable way to select an alternative is to choose some value α from the interval [0,1] and select the alternative i for which the Hurwicz combination α*ui + (1 − α)*li is the largest possible. In situations when we do not know the user's α, a reasonable idea is to select all alternatives that are optimal for some α. In this paper, we describe a feasible algorithm for such a selection.
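As an illustration of what such a selection means (a brute-force sketch in Python, not the feasible algorithm from the paper; the interval data is hypothetical):

    # Select all alternatives i that maximize the Hurwicz combination
    # alpha*u_i + (1 - alpha)*l_i for at least one alpha in [0,1].
    # Brute force over a grid of alpha values; the paper gives a feasible (exact) algorithm.

    def hurwicz_possibly_optimal(intervals, grid_size=1001):
        """intervals: list of (l_i, u_i) pairs of possible gains."""
        selected = set()
        for k in range(grid_size):
            alpha = k / (grid_size - 1)
            values = [alpha * u + (1 - alpha) * l for (l, u) in intervals]
            best = max(values)
            selected.update(i for i, v in enumerate(values) if v == best)
        return sorted(selected)

    # Hypothetical example: three alternatives with interval-valued gains
    print(hurwicz_possibly_optimal([(0, 10), (4, 6), (2, 9)]))   # [0, 1, 2]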
File UTEP-CS-22-126 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 423-440.
When we process data, we usually implicitly assume that we know the exact values of all the inputs. In practice, these values come from measurements, and measurements are never absolutely accurate. In many cases, the only information about the actual (unknown) value of each input is that this value belongs to an appropriate interval. Under this interval uncertainty, we need to compute the range of all possible results of applying the data processing algorithm when the inputs are in these intervals. In general, the problem of exactly computing this range is NP-hard, which means that in feasible time, we can, in general, only compute approximations to these ranges. In this paper, we show that, somewhat surprisingly, the usual standard algorithm for this approximate computation is not inclusion-monotonic, i.e., switching to more accurate measurements and narrower subintervals does not necessarily lead to narrower estimates for the resulting range.
File UTEP-CS-22-125 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 387-399.
Data that we process comes either from measurements or from experts -- or from the results of previous data processing that were also based on measurements and/or expert estimates. In both cases, the data is imprecise. To gauge the accuracy of the results of data processing, we need to take the corresponding data uncertainty into account. In this paper, we describe a new algorithm for taking fuzzy uncertainty into account, an algorithm that, for a small number of inputs, leads to the same or even better accuracy than the previously proposed methods.
File UTEP-CS-22-124 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 377-386.
Theoretically, we can have membership functions of arbitrary shape. However, in practice, at any given moment of time, we can only represent finitely many parameters in a computer. As a result, we usually restrict ourselves to finite-parametric families of membership functions. The most widely used families are piecewise linear ones, e.g., triangular and trapezoidal membership functions. The problem with these families is that if we know a nonlinear relation y = f(x) between quantities, the corresponding relation between membership functions is only approximate -- since for a piecewise linear membership function for x, the resulting membership function for y is not piecewise linear. In this paper, we show that the only way to preserve, in the fuzzy representation, all relations between quantities is to limit ourselves to piecewise constant membership functions, i.e., in effect, to use a finite set of certainty degrees instead of the whole interval [0,1].
File UTEP-CS-22-123 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 49-53.
Depression is a serious medical problem. If diagnosed early, it can usually be cured, but if left undetected, it can lead to suicidal thoughts and behavior. The early stages of depression are difficult to diagnose. Recently, researchers found a promising approach to such diagnosis -- it turns out that depression is correlated with low heart rate variability. In this paper, we show that the general systems approach can explain this empirical relation.
File UTEP-CS-22-122 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 217-226.
Based on the rotation of the stars around a galaxy center, one can estimate the corresponding gravitational acceleration -- which turns out to be much larger than what Newton's theory predicts based on the masses of all visible objects. The majority of physicists believe that this discrepancy indicates the presence of "dark" matter, but this idea has some unsolved problems. An alternative idea -- known as Modified Newtonian Dynamics (MOND, for short) -- is that for galaxy-size distances, Newton's gravitation theory needs to be modified. One of the most effective versions of this idea uses the so-called simple interpolating function. In this paper, we provide a possible explanation for this version's effectiveness. This explanation is based on the physicists' belief that out of all possible theories consistent with observations, the true theory is the simplest one. In line with this idea, we prove that among all the modifications which explain both the usual Newton's theory for usual distances and the observed interactions for larger distances, this so-called "simple interpolating function" is indeed the simplest -- namely, it has the smallest computational complexity.
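For reference (this is the standard convention in the MOND literature, not a claim about the paper's own notation): the observed acceleration a and the Newtonian acceleration aN are usually related by mu(a/a0) * a = aN, where a0 is an empirical constant of the order of 10^(-10) m/s^2, and the "simple" interpolating function is mu(x) = x/(1 + x). For a >> a0 this reduces to the usual Newton's theory, a ≈ aN, while for a << a0 it gives a ≈ sqrt(aN * a0).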
File UTEP-CS-22-121 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 253-284.
Traditional analysis of uncertainty of the result of data processing assumes that all measurement errors are independent. In reality, there may be a common factor affecting these errors, so these errors may be dependent. In such cases, the independence assumption may lead to an underestimation of uncertainty. A guaranteed way to be on the safe side is to make no assumption about independence at all. In practice, however, we may have information that a few pairs of measurement errors are indeed independent -- while we still have no information about all other pairs. Alternatively, we may suspect that for a few pairs of measurement errors, there may be correlation -- but for all other pairs, measurement errors are independent. In both cases, unusual pairs can be naturally represented as edges of a graph. In this paper, we show how to estimate the uncertainty of the result of data processing when this graph is small.
File UTEP-CS-22-120 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 209-214.
Pavement must be adequate for all the temperatures, ranging from the winter cold to the summer heat. In particular, this means that for all possible temperatures, the viscosity of the asphalt binder must stay within the desired bounds. To predict how the designed pavement will behave under different temperatures, it is desirable to have a general idea of how viscosity changes with temperature. Pavement engineers have come up with an empirical approximate formula describing this change. However, since this formula is purely empirical, with no theoretical justification, practitioners are often somewhat reluctant to depend on this formula. In this paper, we provide a theoretical explanation for this empirical formula -- namely, we show that this formula can be naturally derived from natural invariance requirements.
File UTEP-CS-22-119 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 235-241.
While most physical processes are localized -- in the sense that each event can only affect events in its close vicinity -- many physicists believe that some processes are non-local. These beliefs range from more heretical ones -- such as hidden variables in quantum physics -- to more widely accepted ones, such as the non-local character of energy in General Relativity. In this paper, we draw attention to the fact that non-local processes bring in the possibility of drastically speeding up computations.
File UTEP-CS-22-118 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 227-233, https://doi.org/10.1007/978-3-031-29447-1_20
In this paper, we show that requirements that computations be fast and noise-resistant naturally lead to what we call color-based optical computing.
File UTEP-CS-22-117 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 15-20.
To analyze the effect of pollution on marine life, it is important to know how exactly the concentration of toxic substances decreases with time. There are several semi-empirical formulas that describe this decrease. In this paper, we provide a theoretical explanation for these empirical formulas.
File UTEP-CS-22-116 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 3-6.
Predatory birds play an important role in an ecosystem. It is therefore important to study their hunting behavior, in particular, the distribution of their waiting time. A recent empirical study showed that the waiting time is distributed according to the power law. In this paper, we use natural invariance ideas to come up with a theoretical explanation for this empirical dependence.
File UTEP-CS-22-115 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 143-148.
One of the most promising aerospace engines is the Dielectric Barrier Discharge (DBD) thruster -- an effective electric engine without moving parts. Originally designed by NASA for flights over other planets, it has been shown to be very promising for Earth-based flights as well. The efficiency of this engine depends on the proper selection of the corresponding electric field. To make this selection, we need to know, in particular, how its thrust depends on the atmospheric pressure. At present, for this dependence, we only know an approximate semi-empirical formula. In this paper, we use natural invariance requirements to come up with a theoretical explanation for this empirical dependence, and to propose a more general family of models that can lead to a more accurate description of the DBD thruster's behavior.
File UTEP-CS-22-114 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 75-78.
For multi-tasking optimization problems, it has been empirically shown that the most effective resource allocation is attained when we assume that the gain of each task logarithmically depends on the computation time allocated to this task. In this paper, we provide a theoretical explanation for this empirical fact.
File UTEP-CS-22-113 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 295-299.
In many practical situations -- e.g., when preparing examples for a machine learning algorithm -- we need to label a large number of images or speech recordings. One way to do it is to pay people around the world to perform this labeling; this is known as crowdsourcing. In many cases, crowd-workers generate not only answers, but also their degrees of confidence that the answer is correct. Some crowd-workers cheat: they produce almost random answers without bothering to spend time analyzing the corresponding image. Algorithms have been developed to detect such cheaters. The problem is that many crowd-workers cannot describe their degree of confidence by a single number; they are more comfortable providing an interval of possible degrees. To apply anomaly-detecting algorithms to such interval data, we need to select a single number from each such interval. Empirical studies have shown that the most efficient selection is when we select the arithmetic average. In this paper, we explain this empirical result by showing that the arithmetic average is the only selection that satisfies natural invariance requirements.
File UTEP-CS-22-112 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 149-155.
In many practical situations, we need to measure the value of a cumulative quantity, i.e., a quantity that is obtained by adding measurement results corresponding to different spatial locations. How can we select the measuring instruments so that the resulting cumulative quantity can be determined with known accuracy -- and, to avoid unnecessary expenses, not more accurately than needed? It turns out that the only case where such an optimal arrangement is possible is when the required accuracy means selecting the upper bounds on absolute and relative error components. This result provides a possible explanation for the ubiquity of such two-component accuracy requirements.
File UTEP-CS-22-111 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 183-187.
There is a reasonably accurate empirical formula that predicts, for two words i and j, the number Xij of times when the word i will appear in the vicinity of the word j. The parameters of this formula are determined by using the weighted least squares approach. Empirically, the predictions are the most accurate if we use weights proportional to a power of Xij. In this paper, we provide a theoretical explanation for this empirical fact.
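As a minimal sketch of the weighted least-squares idea (the actual prediction formula and its parameters are the ones in the paper; the linear model and the data below are only hypothetical placeholders), weights proportional to a power of the counts can be handled by rescaling each equation by the square root of its weight:

    import numpy as np

    # Weighted least squares: minimize sum_k X_k**alpha * (y_k - A_k . theta)**2.
    # Scaling each row by sqrt(weight) turns this into an ordinary least-squares problem.

    def weighted_lsq(A, y, X, alpha=0.75):
        w = X ** alpha                       # weights proportional to a power of the counts
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        return theta

    # Hypothetical example: predict y_k = log(X_k) from a constant and log(X_k)
    X = np.array([1.0, 5.0, 20.0, 100.0])
    A = np.column_stack([np.ones_like(X), np.log(X)])
    y = np.log(X)
    print(weighted_lsq(A, y, X))             # approximately [0, 1]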
File UTEP-CS-22-110 in pdf
To appear in Asian Journal of Economics and Banking
Usually, people's interests do not match perfectly. So when several people need to make a joint decision, they need to compromise. The more people one has to coordinate the decision with, the smaller the chance that each person's preferences will be properly taken into account. Therefore, when a large group of people needs to make a decision, it is desirable to make sure that this decision can be reached by dividing all the people into small-size groups so that the decision is a compromise between the members of each group. In this paper, we use a recent mathematical result to describe the smallest possible group size for which such a joint decision is always possible.
File UTEP-CS-22-109 in pdf
To support machine learning of cross-language prosodic mappings and other ways to improve speech-to-speech translation, we present a protocol for collecting closely matched pairs of utterances across languages, a description of the resulting data collection, and some observations and musings. This report is intended for 1) people using this corpus, 2) people extending this corpus, and 3) people designing similar collections of bilingual dialog data.
File UTEP-CS-22-108 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 165-175.
In many practical situations, we want to estimate a quantity y that is difficult -- or even impossible -- to measure directly. In such cases, often, there are easier-to-measure quantities x1, ..., xn that are related to y by a known dependence y = f(x1,...,xn). So, to estimate y, we can measure these quantities xi and use the measurement results to estimate y. The two natural questions are: (1) within limited resources, what is the best accuracy with which we can estimate y, and (2) to reach a given accuracy, what amount of resources do we need? In this paper, we provide answers to these two questions.
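For background on question (1), here is the standard first-order error-propagation bound (a generic linearized estimate, not necessarily the exact formulas used in the paper): if each xi is measured with accuracy Delta_i, then the inaccuracy of the estimate of y is approximately bounded by the sum of |df/dxi| * Delta_i. A small Python sketch with a hypothetical example:

    # First-order (linearized) worst-case error bound for y = f(x1, ..., xn):
    # Delta_y ~ sum_i |df/dx_i| * Delta_i, with derivatives estimated numerically.

    def worst_case_error(f, x, deltas, h=1e-6):
        total = 0.0
        for i, (xi, di) in enumerate(zip(x, deltas)):
            xp = list(x); xp[i] = xi + h
            xm = list(x); xm[i] = xi - h
            dfdxi = (f(*xp) - f(*xm)) / (2 * h)   # central-difference derivative
            total += abs(dfdxi) * di
        return total

    # Hypothetical example: y = x1 * x2, measurement accuracies 0.1 and 0.05
    print(worst_case_error(lambda x1, x2: x1 * x2, [2.0, 3.0], [0.1, 0.05]))   # about 0.4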
File UTEP-CS-22-107 in pdf
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
In this paper, we use AI ideas to provide a theoretical explanation for semi-empirical formulas that describe how the pavement strength changes with time, and how we can predict the pavement lifetime.
File UTEP-CS-22-106 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 63-67.
Already the ancients noticed that decline is usually faster than growth -- whether we talk about companies or empires. The modern researcher Ugo Bardi confirmed that this phenomenon is still valid today. He called it the Seneca effect, after the ancient philosopher Seneca -- one of those who observed this phenomenon. In this paper, we provide a natural explanation for the Seneca effect.
File UTEP-CS-22-105 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 29-32.
A recent study has shown that the temperature threshold -- after which even young healthy individuals start feeling the effect of heat on their productivity -- is 30.5 ± 1 C. In this paper, we use decision theory ideas to provide a theoretical explanation for this empirical finding.
File UTEP-CS-22-104 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 39-43.
In this paper, we show that many recent experimental medical results about the effect of different factors on our health can be explained by common sense ideas.
File UTEP-CS-22-103 in pdf
Published in: Chiranjibe Jana, Madhumangal Pal, G. Muhiuddin, and Peide Liu (eds.), Fuzzy Optimization, Decision-making and Operations Research -- Theory and Applications, Springer, Cham, Switzerland, 2023, pp. 33-49.
In practice, there is often a need to describe the relation y = f(x) between two quantities in algorithmic form: e.g., we want to describe the control value y corresponding to the given input x, or we want to predict the future value y based on the current value x. In many such cases, we have expert knowledge about the desired dependence, but experts can only describe their knowledge by using imprecise ("fuzzy") words from a natural language. Methodologies for transforming such knowledge into an algorithm y = f(x) are known as fuzzy methodologies. There exist several fuzzy methodologies; a natural question is: which of them is the most adequate? In this paper, we formulate a natural notion of adequacy: if the expert rules are formulated based on some function y = f(x), then the methodology should reconstruct this function as accurately as possible. We show that none of the existing fuzzy methodologies is the most adequate in this sense, and we describe a new fuzzy methodology that is the most adequate.
Original file UTEP-CS-22-102 in pdf
Updated version UTEP-CS-22-102a in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 287-292.
A recent paper showed that the solar activity cycle has five clear stages, and that taking these stages into account helps to make accurate predictions of future solar activity. Similar 5-stage models have been effective in many other application areas, e.g., in psychology, where a 5-stage model provides an effective description of grief. In this paper, we provide a general geometric explanation of why 5-stage models are often effective. This result also explains other empirical facts, e.g., the seven plus or minus two law in psychology and the fact that only five space-time dimensions have found direct physical meaning.
File UTEP-CS-22-101 in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
Entropy is a natural measure of randomness. It progresses from its smallest possible value 0 -- attained in the deterministic case, in which one alternative i occurs with probability 1 (pi = 1) -- to its largest possible value, which is attained at the uniform distribution p1 = ... = pn = 1/n. Intuitively, both in the deterministic case and in the uniform distribution case, there is not much variety in the distribution, while in the intermediate cases, when we have several different values pi, there is a strong variety. Entropy does not seem to capture this notion of variety. In this paper, we discuss how we can describe this intuitive notion.
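For reference, a minimal Python sketch of the standard entropy formula behind this discussion (the specific distributions are just illustrations):

    import math

    # Shannon entropy S = -sum_i p_i * ln(p_i), with 0 * ln(0) taken to be 0.
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    print(entropy([1.0, 0.0, 0.0, 0.0]))        # 0: deterministic case, smallest entropy
    print(entropy([0.7, 0.1, 0.1, 0.1]))        # intermediate case with several different p_i
    print(entropy([0.25, 0.25, 0.25, 0.25]))    # ln(4) ~ 1.386: uniform case, largest entropy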
Original file UTEP-CS-22-100 in pdf
Updated version UTEP-CS-22-100a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
Most of our decisions are based on the notion of similarity: we use a decision that helped in similar situations. From this viewpoint, it is important to have, for each pair of situations or objects, a numerical value describing similarity between them. This is called a similarity measure. In some cases, the only information that we can use to estimate the similarity value is some natural distance measure d(a,b). In many such situations, empirical data shows that the similarity measure 1/(1+d) is very effective. In this paper, we provide two explanations for this effectiveness.
Original file UTEP-CS-22-99 in pdf
Updated version UTEP-CS-22-99a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
In the late 1990s, researchers analyzed what distinguishes great companies from simply good ones. They found several features that are typical for great companies. Interestingly, most of these features seem counter-intuitive. In this paper, we show that from the algorithmic viewpoint, many of these features make perfect sense. Some of the resulting explanations are simple and straightforward, while others rely on complex, not-well-publicized results from theoretical computer science.
Original file UTEP-CS-22-98 in pdf
Updated version UTEP-CS-22-98a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
In many econometric situations, we can predict future values of relevant quantities by using an empirical formula known as the exponential Almon lag. While this formula is empirically successful, there has been no convincing theoretical explanation for this success. In this paper, we provide such a theoretical explanation based on general invariance ideas.
Original file UTEP-CS-22-97 in pdf
Updated version UTEP-CS-22-97a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
In many cases, experts are much more accurate when they estimate the ratio of two quantities than when they estimate the actual values. For example, it is difficult to accurately estimate the height of a person in a photo, but if we have two people standing side by side, we can easily estimate to what extent one of them is taller than the other one. To get accurate estimates, it is therefore desirable to use such ratio estimates. Empirical analysis shows that to obtain the most accurate results, we need to compare all the objects with either the "best" object -- i.e., the object with the largest value of the corresponding quantity -- or the "worst" object -- i.e., the object with the smallest value of this quantity. In this paper, we provide a theoretical explanation for this empirical observation.
Original file UTEP-CS-22-96 in pdf
Updated version UTEP-CS-22-96a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
At first glance, the larger the object, the larger should be its effect on the surroundings -- in particular, the larger should be its effect on the surrounding flow. However, in many practical situations, we observe the opposite effect: smaller-size particles affect the flow much more than larger-size particles. This seemingly counterintuitive phenomenon has been observed in many situations: lava flow in volcanoes, air circulation in tornadoes, blood flow in a body, the effect of fish on water circulation in the ocean, and the effect of added particles on seeping water that damages historic buildings. In this paper, we show that all these phenomena can be explained in natural geometric terms.
Original file UTEP-CS-22-95 in pdf
Updated version UTEP-CS-22-95a in pdf
Published in Proceedings of the 11th IEEE International Conference on Intelligent Systems IS'22, Warsaw, Poland, October 12-14, 2022.
In many applications of intelligent computing, we need to choose an appropriate function -- e.g., an appropriate re-scaling function, or an appropriate aggregation function. In applications of intelligent techniques, the problem of selecting an optimal function is usually too complex or too imprecise to be solved analytically, so the best functions are found empirically, by trying a large number of alternatives. In this paper, we show that in many such cases, the resulting empirical choice can be explained by natural invariance ideas. Examples range from applications to building blocks of intelligent techniques -- such as aggregation (including hierarchical aggregation) and averaging -- to method-specific applications (polynomial fuzzy approach, pooling and averaging in deep learning) and domain-specific applications, such as describing the relative position of 2D and 3D objects, gauging segmentation quality, and perception of delay in public transportation.
Original file UTEP-CS-22-94 in pdf
Updated version UTEP-CS-22-94a in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 253-255.
If a person has a small number of good alternatives, this person can usually make a good decision, i.e., select one of the given alternatives. However, when we have a large number of good alternatives, people take much longer to make a decision -- sometimes so long that, as a result, no decision is made. How can we explain this seemingly non-optimal behavior? In this paper, we show that this "decision paralysis" can be naturally explained by using the usual decision making ideas.
File UTEP-CS-22-93 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 57-61.
A reasonably small inflation helps the economy as a whole -- by encouraging spending -- but it also hurts people by decreasing the value of their savings. It is therefore reasonable to come up with an optimal (and fair) level of inflation that would stimulate the economy without hurting people too much. In this paper, we describe how this can potentially be done.
File UTEP-CS-22-92 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 29-32.
Experimental data shows that people placed in orderly rooms donate more to charity and make healthier food choices than people placed in disorderly rooms. On the other hand, people placed in disorderly rooms show more creativity. In this paper, we provide a possible explanation for these empirical phenomena.
File UTEP-CS-22-91 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 7-13.
At present, the most efficient machine learning technique is deep learning, with neurons using the Rectified Linear (ReLU) activation function s(z) = max(0,z). In many cases, however, the use of Rectified Power (RePU) activation functions (s(z))^p -- for some p -- leads to better results. In this paper, we explain these results by proving that RePU functions (or their "leaky" versions) are optimal with respect to all reasonable optimality criteria.
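For reference, here is a minimal Python sketch of the activation functions mentioned above; the parameter values (p = 2, leak slope 0.01) and the specific form of the "leaky" RePU are only illustrative assumptions, not the paper's definitions:

    # ReLU, RePU, and simple "leaky" variants.

    def relu(z):
        return max(0.0, z)

    def repu(z, p=2):
        return max(0.0, z) ** p

    def leaky_relu(z, slope=0.01):
        return z if z > 0 else slope * z

    def leaky_repu(z, p=2, slope=0.01):
        return z ** p if z > 0 else slope * z    # one possible "leaky" extension

    print(relu(-1.5), relu(2.0))      # 0.0 2.0
    print(repu(2.0), repu(3.0, p=3))  # 4.0 27.0
    print(leaky_relu(-1.5))           # -0.015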
File UTEP-CS-22-90 in pdf
Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Woraphon Yamaka (eds.), Machine Learning for Econometrics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 169-174.
Shapley value -- a useful way to allocate gains in cooperative games -- has been very successful in machine learning (and in other applications beyond cooperative games). This success is somewhat puzzling, since the usual derivation of the Shapley value is based on requirements like additivity that are natural in cooperative games but not in machine learning. In this paper, we provide a derivation of the Shapley value that does not use requirements like additivity and is, thus, applicable in the machine learning case as well.
File UTEP-CS-22-89 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 95-98.
Strangely enough, investors invest in high-risk, low-profit enterprises as well. At first glance, this seems to contradict common sense and financial basics. However, we show that such investments make perfect sense as long as the related risks are independent of the risks of other investments. Moreover, we show that an optimal investment portfolio should allocate some investment to such an enterprise.
File UTEP-CS-22-88 in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 63, pp. 28-31.
The fact that the kinetic energy of a particle cannot exceed its overall energy implies that the velocity -- i.e., the derivative of the trajectory -- should be bounded. This means, in effect, that all the trajectories are differentiable (smooth). However, at first glance, there seems to be no direct requirement that the velocities continuously depend on time. In this paper, we show that the properties of the electromagnetic field necessitate that the velocities are continuous functions of time -- moreover, that they are at least as continuous as Brownian motion.
File UTEP-CS-22-87 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 441-450.
In a computer, all the information about an object is described by a sequence of 0s and 1s. At any given moment of time, we only have partial information, but as we perform more measurements and observations, we get a longer and longer sequence that provides a more and more accurate description of the object. In the limit, we get a perfect description by an infinite binary sequence. If the objects are similar, measurement results are similar, so the resulting binary sequences are similar. Thus, to gauge the similarity of two objects, a reasonable idea is to define an appropriate metric on the set of all infinite binary sequences. Several such metrics have been proposed, but their limitation is that while the order of the bits is rather irrelevant -- if we have several simultaneous measurements, we can place them in the computer in different order -- the distance measured by current formulas changes if we select a different order. It is therefore natural to look for permutation-invariant metrics, i.e., distances that do not change if we select different orders. In this paper, we provide a full description of all such metrics. We also explain the limitation of these new metrics: that they are, in general, not computable.
File UTEP-CS-22-86 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 79-83.
Starting with the 1980s, a popular US satirical radio show described a fictitious town Lake Wobegon where "all children are above average" -- parodying the way parents like to talk about their children. This everyone-above-average situation was part of the fiction since, if we interpret the average in the precise mathematical sense, as the average over all the town's children, then such a situation is clearly impossible. However, usually, when parents make this claim, they do not mean the town-wise average, they mean the average over all the kids with whom their child directly interacts. Somewhat surprisingly, it turns out that if we interpret average this way, then the everyone-above-average situation becomes quite possible. But is it good? At first glance, this situation seems to imply fairness and equality, but, as we show, in reality, it may lead to much more inequality than in other cases.
File UTEP-CS-22-85 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 257-261.
It is known that our perception of time depends on our level of happiness: time seems to pass slower when we have unpleasant experiences and faster if our experiences are pleasant. Several explanations have been proposed for this effect. However, these explanations are based on specific features of human memory and/or human perception, features that, in turn, need explaining. In this paper, we show that this effect can be explained on a much more basic level of decision theory, without utilizing any specific features of human memory or perception.
File UTEP-CS-22-84 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 353-361.
Psychologists have shown that most information about the mood and attitude of a speaker is carried by the lowest (fundamental) frequency. Because of this frequency's importance, even when the corresponding Fourier component is weak, the human brain reconstructs this frequency based on higher harmonics. The problem is that many people lack this ability. To help them better understand moods and attitudes in social interaction, it is therefore desirable to come up with devices and algorithms that would reconstruct the fundamental frequency. In this paper, we show that ideas from soft computing and computational complexity can be used for this purpose.
File UTEP-CS-22-83 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 157-164.
In general, the more measurements we perform, the more information we gain about the system and thus, the more adequate decisions we will be able to make. However, in situations when we perform measurements to check for safety, the situation is sometimes opposite: the more additional measurements we perform beyond what is required, the worse the decisions will be: namely, the higher the chance that a perfectly safe system will be erroneously classified as unsafe and therefore, unnecessary additional features will be added to the system design. This is not just a theoretical possibility: exactly this phenomenon is one of the reasons why the construction of a world-wide thermonuclear research center has been suspended. In this paper, we show that the reason for this paradox is in the way the safety standards are formulated now -- what was a right formulation when sensors were much more expensive is no longer adequate now when sensors and measurements are much cheaper. We also propose how to modify the safety standards so as to avoid this paradox and make sure that additional measurements always lead to better solutions.
File UTEP-CS-22-82 in pdf
Published in Advances in AI and Machine Learning, 2022, Vol. 2, No. 3, pp. 456-468.
In many application areas, there are effective empirical formulas that need explanation. In this paper, we focus on two such challenges: deep learning, where a so-called softplus activation function is known to be very efficient, and pavement engineering, where there are empirical formulas describing the dependence of the pavement strength on the properties of the underlying soil. We show that similar scale-invariance ideas can explain both types of formulas -- and, in the case of pavement engineering, invariance ideas can lead to a new formula that combines the advantages of several known ones.
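For reference, the softplus activation function mentioned above is softplus(z) = ln(1 + exp(z)); a short, numerically stable Python sketch:

    import math

    # softplus(z) = ln(1 + exp(z)): a smooth analogue of ReLU; close to 0 for very
    # negative z and close to z for large positive z.
    def softplus(z):
        return max(z, 0.0) + math.log1p(math.exp(-abs(z)))   # numerically stable form

    print(softplus(-10.0))   # close to 0
    print(softplus(0.0))     # ln(2) ~ 0.693
    print(softplus(10.0))    # close to 10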
Original file UTEP-CS-22-81 in pdf
Updated version UTEP-CS-22-81c in pdf
Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Woraphon Yamaka (eds.), Machine Learning for Econometrics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 181-186.
To get a better picture of the future behavior of different economics-related quantities, we need to be able to predict not only their mean values, but also their distributions. For example, it is desirable not only to predict the future average income, but also to predict the future distribution of income. One of the convenient ways to describe a probability distribution is by using alpha-quantiles such as medians (corresponding to alpha = 0.5), quartiles (corresponding to alpha = 0.25 and alpha = 0.75), etc. In principle, an alpha-quantile of the desired future quantity can depend on beta-quantiles of current distributions corresponding to all possible values beta. However, in many practical situations, we can get very good predictions based only on current quantiles corresponding to beta = alpha; this is known as quantile regression. There is no convincing explanation of why quantile regression often works. In this paper, we use an agriculture-related case study to provide a partial explanation for this empirical success -- namely, we explain it in situations when the inputs used for prediction are highly correlated.
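As background (not the paper's case study), quantile regression is usually formulated as minimizing the so-called pinball loss; a minimal Python sketch with hypothetical data:

    # Pinball (quantile) loss for quantile level alpha:
    # L_alpha(y, y_hat) = alpha*(y - y_hat) if y >= y_hat, else (alpha - 1)*(y - y_hat).
    # Minimizing its average pushes the prediction y_hat toward the alpha-quantile of y.

    def pinball_loss(y, y_hat, alpha):
        diff = y - y_hat
        return alpha * diff if diff >= 0 else (alpha - 1) * diff

    # Hypothetical check: for alpha = 0.5, the minimizer is the sample median
    data = [1.0, 2.0, 2.5, 4.0, 10.0]
    candidates = [c / 10 for c in range(0, 120)]
    best = min(candidates, key=lambda c: sum(pinball_loss(y, c, 0.5) for y in data))
    print(best)   # 2.5, the median of the data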
File UTEP-CS-22-80 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 345-352.
A known relation between membership functions and probability density functions allows us to naturally extend statistical characteristics like central moments to the fuzzy case. In the case of interval-valued fuzzy sets, we have several possible membership functions consistent with our knowledge. For different membership functions, in general, we have different values of the central moments. It is therefore desirable to compute, in the interval-valued fuzzy case, the range of possible values for each such moment. In this paper, we provide efficient algorithms for this computation.
File UTEP-CS-22-79 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 335-340.
When evolutionary computation techniques are used to solve continuous optimization problems, usually, convex combination is used as a crossover operation. Empirically, this crossover operation works well, but this success is, from the foundational viewpoint, a challenge: why this crossover operation works well is not clear. In this paper, we provide a theoretical explanation for this empirical success.
File UTEP-CS-22-78 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 285-291.
In this paper, we consider two different practical problems that turned out to be mathematically similar: (1) how to combine two expert-provided probabilities of some event and (2) how to estimate the frequency of a certain phenomenon (e.g., illness) in the intersection of two populations if we know the frequencies in each of these populations. In both cases, we use the maximum entropy approach to come up with a solution.
File UTEP-CS-22-77 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 127-133.
The traditional way to compute the overall grade for a class is to use the weighted sum of the grades for all the assignments and exams, including the final exam. In terms of encouraging students to study hard throughout the semester, this grading scheme is better than an alternative scheme, in which all that matters is the grade on the final exam: in contrast to this alternative scheme, in the weighted-sum approach, students are penalized if they did not do well in the beginning of the semester. In practice, however, instructors sometimes deviate from the weighted-sum scheme: indeed, if the weighted sum is below the passing threshold, but a student shows good knowledge on the comprehensive final exam, it makes no sense to fail the student and make him/her waste time re-learning what this student already learned. So, in this case, instructors usually raise the weighted-sum grade to the passing level and pass the student. This sounds reasonable, but this idea has a limitation similar to the limitation of the alternative scheme: namely, it does not encourage those students who were initially low-performing to do as well as possible on the final exam. Indeed, within this idea, a small increase in the student's grade on the final exam will not change the overall grade for the class. In this paper, we provide a natural idea of how we can overcome this limitation.
File UTEP-CS-22-76 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 91-94.
What if someone bought a property in good faith, not realizing that this property was unjustly confiscated from the previous owner? In such situations, if the new owner decided to sell this property, the Talmud recommended, as a fair solution, returning 1/4 of the selling price to the original owner. However, it does not provide any explanation of why exactly 1/4 -- and not any other portion -- is to be returned. In this paper, we provide an economic explanation for this recommendation, an explanation that fits well with other ancient recommendations about debts.
File UTEP-CS-22-75 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 245-252.
In general, the more unknowns in a problem, the more computational effort is necessary to find all these unknowns. Interestingly, in state-of-the-art machine learning methods like deep learning, computations become easier when we increase the number of unknown parameters way beyond the number of equations. In this paper, we provide a qualitative explanation for this computational paradox.
File UTEP-CS-22-74 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 69-74.
One of the new forms of investment is investing in so-called non-fungible tokens -- unique software objects associated with different real-life objects like songs, paintings, photos, videos, characters in computer games, etc. Since these tokens are a form of financial investment, investors would like to estimate the fair price of such tokens. For tokens corresponding to objects that have their own price -- such as a song or a painting -- a reasonable estimate is proportional to the price of the corresponding object. However, for tokens corresponding to computer game characters, we cannot estimate their price this way. Based on the market price of such tokens, an empirical expression -- named rarity score -- has been developed. This expression takes into account the rarity of different features of the corresponding character. In this paper, we provide a theoretical explanation for the use of the rarity score to estimate the prices of non-fungible tokens.
File UTEP-CS-22-73 in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 63, pp. 87-92.
Since the 1960s, biologists have shown that, contrary to the previous belief that ageing is irreversible, many undesirable biological effects of ageing can be reversed. The first attempts to perform this reversal on living creatures were not fully successful: while mice achieved some rejuvenation, many of these rejuvenated mice developed cancer. Later experiments showed that these cancers can be avoided if we apply cyclic rejuvenation: a short period of rejuvenation followed by a longer pause. This modified strategy led to recent successes: mice recovered their age-deteriorated vision, and mice recovered their heart tissue after a heart attack. However, why rejuvenation attempts often lead to cancer and why cyclic rejuvenation is better remained largely a mystery. In this paper, we provide a simple qualitative explanation of these two phenomena.
File UTEP-CS-22-72 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 237-249.
Often, we deal with black-box or grey-box systems where we can observe the overall system's behavior, but we do not have access to the system's internal structure. In many such situations, the system actually consists of two (or more) independent components: a) how can we detect this based on observed system's behavior? b) what can we learn about the independent subsystems based on the observation of the system as a whole? In this paper, we provide (partial) answers to these questions.
File UTEP-CS-22-71 in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Optimal Transport Statistics for Economics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 174-177.
It is known that people feel better (and even work better) if someone pays attention to them; this is known as the Hawthorne effect. At first glance, it sounds counter-intuitive: this attention does not bring you any material benefits, so why would you feel better? If you are sick and someone gives you medicine, this will make you feel better, but if someone just pays attention, why does that make you feel better? In this paper, we use the general ideas of decision theory to explain this seemingly counterintuitive phenomenon.
File UTEP-CS-22-70 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 35-38.
When we analyze biological tissue under the microscope, cells are directly neighboring each other, with no gaps between them. However, a more detailed analysis shows that in vivo, there are small liquid-filled gaps between the cells, and these gaps are important: e.g., in abnormal situations, when the size of the gaps between brain cells decreases, this leads to severe headaches and other undesired effects. At present, there is no universally accepted explanation for this phenomenon. In this paper, we show that the analysis of the corresponding geometric symmetries leads to a natural explanation for this empirical phenomenon.
File UTEP-CS-22-69 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 147-151.
Usually, a cancer radiotherapy session lasts between 10 and 20 minutes. Technically, it is possible to transmit the dose faster, but traditionally, medical doctors were reluctant to do it, since they were afraid of negative effects of such a speedy treatment. Recent experiments show, however, that these fears are unfounded; moreover, transmitting the whole radiation dose in a shorter time turns out to be more beneficial for the patients. In this paper, we provide a possible geometric explanation for this empirical phenomenon.
File UTEP-CS-22-68 in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Optimal Transport Statistics for Economics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 169-173.
In econometrics, volatility of an investment is usually described by its Value-at-Risk (VaR), i.e., by an appropriate quantile of the corresponding probability distribution. The motivations for selecting VaR are largely empirical: VaR provides a more adequate description of what people intuitively perceive as risk. In this paper, we analyze this situation from the viewpoint of decision theory, and we show that this analysis naturally leads to the Value-at-Risk, i.e., to a quantile.
Interestingly, this analysis also naturally leads to an optimization problem related to quantile regression.
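As a minimal illustration of "VaR as a quantile" (one common sign/quantile convention; the returns below are synthetic, and the convention used in the paper may differ):

    import numpy as np

    # Empirical Value-at-Risk: the alpha-quantile of the loss distribution.
    def value_at_risk(returns, alpha=0.95):
        losses = -np.asarray(returns)
        return np.quantile(losses, alpha)

    rng = np.random.default_rng(0)
    returns = rng.normal(loc=0.001, scale=0.02, size=10_000)   # synthetic daily returns
    print(value_at_risk(returns, 0.95))   # roughly 0.032 for this synthetic sample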
File UTEP-CS-22-67 in pdf
Published in Mathematics, 2022, Vol. 10, Paper 2361.
It is known that to more adequately describe expert knowledge, it is necessary to go from the traditional (type-1) fuzzy techniques to higher order ones: type-2, probably type-3 and even higher. Until recently, only type-1 and type-2 fuzzy sets were used in practical applications. However, lately, it turned out that type-3 fuzzy sets are also useful in some applications. Because of this practical importance, it is necessary to design efficient algorithms for data processing under such type-3 (and higher order) fuzzy uncertainty. In this paper, we show how we can combine known efficient algorithms for processing type-1 and type-2 uncertainty to come up with a new algorithm for the type-3 case.
Original file UTEP-CS-22-66 in pdf
Updated version UTEP-CS-22-66b in pdf
Final version UTEP-CS-22-66published in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Optimal Transport Statistics for Economics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 186-192.
Research has shown that to properly understand people's economic behavior, it is important to take into account their emotional attitudes towards each other. Behavioral economics shows that different attitudes result in different economy-related behavior. A natural question is: where do these emotional attitudes come from? We show that, in principle, such emotions can be explained by people's objective functions. Specifically, we show this on the example of a person whose main objective is to increase his/her country's GDP: in this case, the corresponding optimization problem leads exactly to natural emotions towards people who contribute a lot or a little towards this objective.
File UTEP-CS-22-65 in pdf
Published in Notes on Intuitionistic Fuzzy Sets, 2022, Vol. 28, No. 3, pp. 203-210.
While modern computers are fast, there are still many important practical situations in which we need even faster computations. It turns out that, due to the fact that the speed of all communications is limited by the speed of light, the only way to make computers drastically faster is to drastically decrease the size of computer components. When we decrease their size to sizes comparable with the sizes of individual molecules, it becomes necessary to take into account the specific physics of the micro-world -- known as quantum physics. The traditional approach to designing quantum computers -- i.e., computers that take the effects of quantum physics into account -- was based on using quantum analogues of bits (2-state systems). However, it has recently been shown that the use of multi-state quantum systems -- called qudits -- can make quantum computers even more efficient.
When processing data, it is important to take into account that in practice, data always comes with uncertainty. In this paper, we analyze how to represent different types of uncertainty by qudits.
Original file UTEP-CS-22-64 in pdf
Updated version UTEP-CS-22-64b in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 135-139.
Historically, people have used many ways to represent natural numbers: from the original "unary" arithmetic, where each number is represented as a sequence of, e.g., cuts (4 is IIII), to modern decimal and binary systems. However, with all this variety, some seemingly reasonable ways of representing natural numbers were never used. For example, it may seem reasonable to represent numbers as products -- e.g., as products of prime numbers -- but such a representation was never used in history. So why were some theoretically possible representations of natural numbers historically used while others were not? In this paper, we propose an algorithm-based explanation for this difference: namely, historically used representations have decidable theories -- i.e., for each such representation, there is an algorithm that, given a formula, decides whether this formula is true or false -- while for unused representations, no such algorithm is possible.
File UTEP-CS-22-63 in pdf
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
One of the main purposes of education is to teach skills needed in future life and future jobs. What is important and what is useful changes with time. Before the industrial revolution, routine mechanical work was an important part of human activity – now machines can do it (and do it better). Before printing, copying was an important activity – now copy machines do it. Before computers, humans were needed for computing – now computers do it better. With Wikipedia and Google, there is not much need for scholars to be erudite. Even extracting dependencies from data – one of the most creative human activities – is now often done automatically, by deep learning techniques, and these techniques are getting better every day. Students that we teach now will be in the workforce for many decades. What should we teach them that will remain useful to them in decades to come? Our answer: the ability to generalize from a few facts; this is where we are still better than computers.
Original file UTEP-CS-22-62 in pdf
Updated version UTEP-CS-22-62a in pdf
To appear in Proceedings of the NAFIPS International Conference on Fuzzy Systems, Soft Computing, and Explainable AI NAFIPS'2024, South Padre Island, Texas, May 27-29, 2024
In elementary mathematics classes, students are often overwhelmed by different representations of numbers and the corresponding operations: usual fractions, decimal representations, binary numbers, etc. What often helps is when students learn the history of these representations, see the limitations of seemingly reasonable representations like Roman numerals, and see how other representations overcame these limitations. Still, history developed somewhat randomly, so the historical sequence is still somewhat chaotic. We believe that providing a unified approach to all these representations would help describe their sequence in a more logical way and thus help the students even more.
In our analysis, we explore the relation between the foundations of arithmetic, especially the related definitions of numbers, and the resulting notations. For example, the widely used Peano axioms describe natural numbers as containing 0 and containing, for each x, the next number x + 1. The corresponding definition naturally leads to the historically first representation of natural numbers as I, II, III, IIII, etc. Allowing addition and multiplication by 2 (and powers of 2) leads to binary numbers, allowing general multiplication leads to decimal numbers, etc. It turns out that a similar foundational description can be found for most historical representations of natural numbers and fractions. For example, allowing the division of 1 by a natural number leads to Egyptian fractions, allowing generic division of integers leads to common fractions, etc.
We believe that exposing students (and teachers) to at least some of these results will help them better understand the relation between different number representations, and thus, will make it easier for them to master the corresponding techniques.
Original file UTEP-CS-22-61 in pdf
Updated version UTEP-CS-22-61a in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 177-180.
An accurate estimation of road quality requires a lot of expertise, and there are not enough experts to provide such estimates for all road segments. It is therefore desirable to estimate this quality based on easy-to-estimate and easy-to-measure characteristics. Recently, an empirical formula was proposed for such an estimate. In this paper, we provide a theoretical explanation for this empirical formula.
File UTEP-CS-22-60 in pdf
In the case of complete information, a reasonable solution to a negotiation process is Nash's bargaining solution, in which we maximize the product of all agents' utility gains. This is the only solution that does not depend on the order in which we list the agents, and does not change if we use a different scale for describing each agent's utility. In this paper, we apply similar invariance criteria to a situation when practically all information is absent, and all we know is the smallest and largest possible gains. We show that in this situation, the only invariant negotiation strategy is to offer, to each agent, a certain percentage of the original request -- and to select the percentage for which all such reduced requests can be satisfied.
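As an illustration of the resulting rule, here is a minimal sketch in Python of the proportional-reduction strategy described above; the function name, the way the available total is specified, and the example numbers are our illustrative assumptions, not the paper's notation:

```python
def proportional_offers(requests, available_total):
    """Offer every agent the same fraction p of its original request,
    using the largest p (at most 1) for which all the reduced requests
    can be satisfied from the available total."""
    p = min(1.0, available_total / sum(requests))
    return [p * r for r in requests]

# Example: three agents request 50, 30, and 20, but only 80 is available,
# so each agent is offered 80% of what it originally asked for.
print(proportional_offers([50, 30, 20], 80))  # [40.0, 24.0, 16.0]
```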
File UTEP-CS-22-59 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 227-231.
We all know that we can make different decisions, decisions that change -- at least locally -- the state of the world. This is what is known as freedom of will. On the other hand, according to physics, the future state of the world is uniquely pre-determined by its current state, so there is no room for freedom of will. How can we resolve this contradiction? In this paper, we analyze this problem, and we show that many physical phenomena can help resolve this contradiction: the fractal character of equations, renormalization, phase transitions, etc. Usually, these phenomena are viewed as somewhat exotic, but our point is that if we want physics to be consistent with freedom of will, then these phenomena need to be ubiquitous.
File UTEP-CS-22-58 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 233-236.
In his famous saying, the Nobel laureate physicist Niels Bohr claimed that the sign of a deep theory is that while this theory is true, its opposite is also true. While this statement makes heuristic sense, it does not seem to make sense from a logical viewpoint, since, in logic, the opposite of true is false. In this paper, we show how a similar Talmudic discussion can help come up with an interpretation in which Bohr's statement becomes logically consistent.
File UTEP-CS-22-57 in pdf
In his recent book "Principles for Dealing with the Changing World Order", Ray Dalio considered many historical crisis situations, and came up with several data points showing how the probability of a revolution or a civil war depends on the number of economic red flags. In this paper, we provide a simple empirical formula that is consistent with these data points.
File UTEP-CS-22-56 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 33-38.
Eggs of different bird species have different shapes. There exist formulas for describing the usual egg shapes -- e.g., the shapes of chicken eggs. However, some egg shapes are more complex. A recent paper proposed a general formula for describing all possible egg shapes; however, this formula is purely empirical and does not have any theoretical foundation. In this paper, we use a theoretical analysis of the problem to provide an alternative -- theoretically justified -- general formula. Interestingly, the new general formula is easier to compute than the previously proposed one.
File UTEP-CS-22-55 in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Optimal Transport Statistics for Economics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 178-185.
If the overall amount of the company's assets is smaller than its total debts, then a fair solution is to give, to each creditor, an amount proportional to the corresponding debt, e.g., 10 cents for each dollar or 50 cents for each dollar. But what if the debt amounts are not known exactly, and for some creditors, we only know the lower and upper bounds on the actual debt amount? What division will be fair in such a situation? In this paper, we show that the only fair solution is to make payments proportional to an appropriate convex combination of the bounds -- which corresponds to Hurwicz's optimism-pessimism criterion for decision making under interval uncertainty.
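A minimal sketch, in Python, of the resulting division rule, assuming a given optimism-pessimism parameter alpha; the function and variable names and the example numbers are ours, for illustration only:

```python
def interval_debt_division(assets, debt_bounds, alpha):
    """Divide the assets proportionally to the Hurwicz combination
    alpha*u + (1 - alpha)*l of each creditor's debt bounds [l, u]."""
    combined = [alpha * u + (1 - alpha) * l for (l, u) in debt_bounds]
    total = sum(combined)
    return [assets * c / total for c in combined]

# Example: assets of 100, two creditors whose debts are known only as intervals.
print(interval_debt_division(100, [(40, 60), (80, 120)], alpha=0.5))
```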
File UTEP-CS-22-54 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 203-211.
Fuzzy techniques -- techniques designed to convert imprecise human knowledge into precise computer-understandable terms -- have many successful applications. Traditional applications of fuzzy techniques use only important general features of human reasoning and, to make an implementation more efficient, ignore subtle details -- details which are not important for the corresponding application. But from a more fundamental viewpoint, it is desirable to understand, in all the detail, how people actually reason. In this paper, we use general ideas of the fuzzy approach to answer this question. Interestingly -- and somewhat unexpectedly -- the resulting analysis leads to a natural explanation of the existence of several distinct levels of certainty and to a natural appearance of quantum-like negative degrees of certainty.
File UTEP-CS-22-53 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 313-322.
Often, we are interested in a quantity that is difficult or impossible to measure directly, e.g., tomorrow's temperature. To estimate this quantity, we measure auxiliary easier-to-measure quantities that are related to the desired one by a known dependence, and use the known relation to estimate the desired quantity. Measurements are never absolutely accurate: there is always a measurement error, i.e., a non-zero difference between the measurement result and the actual (unknown) value of the corresponding quantity. Often, the only information that we have about each measurement error is the bound on its absolute value. In such situations, after each measurement, the only information that we gain about the actual (unknown) value of the corresponding quantity is that this value belongs to the corresponding interval. Thus, the only information that we have about the value of the desired quantity is that it belongs to the range of the values of the corresponding function when its inputs are in these intervals. Computing this range is one of the main problems of interval computations.
Recently, it has been shown that in many cases, it is more efficient to compute the range if we first re-scale each input to the interval [0,1]; this is one of the main ideas behind Constraint Interval Arithmetic. In this paper, we explain the empirical success of this idea and show that, in some reasonable sense, this re-scaling is the best choice.
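A minimal sketch of the re-scaling idea itself: each interval input x_i in [l_i, u_i] is rewritten as x_i = l_i + t_i*(u_i - l_i) with t_i in [0,1], and the range is then estimated over the unit box. The naive grid sampling below is only an illustration of this re-parametrization, not the paper's algorithm:

```python
import itertools

def estimate_range_rescaled(f, bounds, grid=11):
    """Estimate the range of f over the box 'bounds' after re-scaling
    each input to [0, 1] (naive grid sampling, for illustration only)."""
    ts = [i / (grid - 1) for i in range(grid)]
    lo, hi = float("inf"), float("-inf")
    for t in itertools.product(ts, repeat=len(bounds)):
        x = [l + ti * (u - l) for ti, (l, u) in zip(t, bounds)]
        y = f(*x)
        lo, hi = min(lo, y), max(hi, y)
    return lo, hi

# Example: range of x*y - x over x in [1, 2], y in [0, 1] (exact range is [-2, 0]).
print(estimate_range_rescaled(lambda x, y: x * y - x, [(1, 2), (0, 1)]))
```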
Original file UTEP-CS-22-52 in pdf
Updated version UTEP-CS-22-52a in pdf
Published in International Journal of Parallel, Emergent and Distributed Systems, DOI: 10.1080/17445760.2022.2070748
In many practical situations, deep neural networks work better than the traditional "shallow" ones, however, in some cases, the shallow neural networks lead to better results. At present, deciding which type of neural networks will work better is mostly done by trial and error. It is therefore desirable to come up with some criterion of when deep learning is better and when shallow is better. In this paper, we argue that this depends on whether the corresponding situation has natural symmetries: if it does, we expect deep learning to work better, otherwise we expect shallow learning to be more effective. Our general qualitative arguments are strengthened by the fact that in the simplest case, the connection between symmetries and effectiveness of deep learning can be theoretically proven.
File UTEP-CS-22-51 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 71-75.
To effectively defend the population against future variants of Covid-19, it is important to be able to predict how it will evolve. For this purpose, it is necessary to understand the logic behind its evolution so far. At first glance, this evolution looks random and thus, difficult to predict. However, we show that already a simple game-theoretic model can actually explain -- on the qualitative level -- how this virus mutated so far.
File UTEP-CS-22-50 in pdf
Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Woraphon Yamaka (eds.), Machine Learning for Econometrics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 161-167.
In many applications -- in particular, in econometric applications -- deep learning techniques are very effective. In this paper, we provide a new explanation for why rectified linear units -- the main units of deep learning -- are so effective. This explanation is similar to the usual explanation of why Gaussian (normal) distributions are ubiquitous -- namely, it is based on an appropriate limit theorem.
File UTEP-CS-22-49 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Deep Learning and Other Soft Computing Techniques: Biomedical and Related Applications, Springer, Cham, Switzerland, 2023, pp. 65-70.
A recent comparative analysis of biological reaction to unchanging vs. rapidly changing stimuli -- such as Covid-19 or flu viruses -- uses an empirical formula describing how the reaction to a similar stimulus depends on the distance between the new and original stimuli. In this paper, we provide a from-first-principles explanation for this empirical formula.
File UTEP-CS-22-48 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 281-286.
It is known that self-esteem helps solve problems. From the algorithmic viewpoint, this seems like a mystery: a boost in self-esteem does not provide us with new algorithms and does not provide us with the ability to compute faster -- but somehow, with the same algorithmic tools and the same ability to perform the corresponding computations, students become better problem solvers. In this paper, we provide an algorithmic explanation for this surprising empirical phenomenon.
File UTEP-CS-22-47 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 45-48.
Many immunodepressive drugs have an unusual side effect on the patient's mood: they often make the patient happier. This side effect has been observed for many different immunodepressive drugs with different chemical compositions. Thus, it is natural to conclude that there must be some general reason for this empirical phenomenon, a reason related not to the chemical composition of any specific drug but rather to their general functionality. In this paper, we provide such an explanation.
File UTEP-CS-22-46 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 217-221.
Recently, astronomers discovered spiral arms around a star. While their shape is similar to the shape of the spiral arms in galaxies, because of the different scale, the physical explanations of galactic spirals cannot be directly applied to explaining star-size spiral arms. In this paper, we show that, in contrast to these more specific physical explanations, the more general symmetry-based geometric explanations of galactic spirals can also explain spiral arms around a star.
File UTEP-CS-22-45 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 341-344.
Most practical problems lead either to solving a system of equations or to optimization. From the computational viewpoint, both classes of problems can be reduced to each other: optimization can be reduced to finding points at which all partial derivatives are zero, and solving systems of equations can be reduced to minimizing sums of squares. It is therefore natural to expect that, on average, both classes of problems have the same computational complexity -- i.e., require about the same computation time. However, empirically, optimization problems are much faster to solve. In this paper, we provide a possible explanation for this unexpected empirical phenomenon.
File UTEP-CS-22-44 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 199-202.
One of the main motivations for using artificial neural networks was to speed up computations. From this viewpoint, the ideal configuration is when we have a single nonlinear layer: this configuration is computationally the fastest, and it already has the desired universal approximation property. However, the last decades have shown that for many problems, deep neural networks, with several nonlinear layers, are much more effective. How can we explain this puzzling fact? In this paper, we provide a possible explanation for this phenomenon: the universal approximation property is only true in the idealized setting, when we assume that all computations are exact. In reality, computations are never absolutely exact. It turns out that if we take this non-exactness into account, then one-nonlinear-layer networks no longer have the universal approximation property: several nonlinear layers are needed -- and several layers is exactly what deep networks are about.
File UTEP-CS-22-43 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 123-126.
The usual derivations of many formulas of elementary mathematics -- such as the formula for solving quadratic equations -- often leave an unfortunate impression that mathematics is a collection of unrelated, unnatural tricks. In this paper, on the example of the formulas for solving quadratic and cubic equations, we show that these derivations can be made much more natural if we take physical meaning into account.
File UTEP-CS-22-42 in pdf
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
Fuzzy control methodology transforms experts' if-then rules into a precise control strategy. From the logical viewpoint, an if-then rule means implication, so it seems reasonable to use fuzzy implication in this transformation. However, this logical approach is not what the first fuzzy controllers used. The traditional fuzzy control approach -- first proposed by Mamdani -- transforms the if-then rules into a statement that only contains "and"s and "or"s, and does not use fuzzy implication at all. So, a natural question arises: shall we use the logical approach or the traditional one? In this paper, we analyze this question on the example of a simple system of if-then rules. It turns out that the answer depends on what we want: if we want the smoothest possible control, we should use the logical approach, but if we want the most stable control, then the traditional (Mamdani) approach is better.
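To make the contrast concrete, here is a minimal Python sketch comparing the two approaches on a single crisp input, with one made-up pair of rules, triangular membership functions, and the Kleene-Dienes implication max(1 - a, b); all of these specific choices are our illustrative assumptions, not the rules analyzed in the paper:

```python
def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Made-up rules "if x is A_k then u is B_k".
rules = [(tri(0.0, 0.25, 0.5), tri(0.0, 0.25, 0.5)),
         (tri(0.5, 0.75, 1.0), tri(0.5, 0.75, 1.0))]

def mamdani(x0, u):
    # Traditional (Mamdani) approach: max over rules of min(firing degree, consequent).
    return max(min(A(x0), B(u)) for A, B in rules)

def logical(x0, u):
    # Logical approach: min over rules of the implication max(1 - firing degree, consequent).
    return min(max(1.0 - A(x0), B(u)) for A, B in rules)

x0 = 0.3
for u in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(u, round(mamdani(x0, u), 2), round(logical(x0, u), 2))
```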
Original file UTEP-CS-22-41 in pdf
Updated version UTEP-CS-22-41b in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 117-122.
Our intuition about physics is based on macro-scale phenomena, phenomena which are well described by non-quantum physics. As a result, many quantum ideas sound counter-intuitive -- and this slows down students' learning of quantum physics. In this paper, we show that a simple analysis of measurement uncertainty can make many of the quantum ideas much less counter-intuitive and thus, much easier to accept and understand.
File UTEP-CS-22-40 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 195-198.
The main idea behind artificial neural networks is to simulate how data is processed in the data processing device that has been optimized by millions of years of natural selection -- our brain. Such networks are indeed very successful, but interestingly, the most recent successes came when researchers replaced the original biology-motivated sigmoid activation function with a completely different one -- known as the rectified linear function. In this paper, we explain that this somewhat unexpected function actually naturally appears in physics-based data processing.
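For reference, the two activation functions mentioned above -- the standard definitions, not anything specific to this paper:

```python
import math

def sigmoid(x):
    """Biology-motivated sigmoid activation s(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear activation max(0, x)."""
    return max(0.0, x)

for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(x, round(sigmoid(x), 3), relu(x))
```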
File UTEP-CS-22-39 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 211-215.
In probability theory, rare events are usually described as events with low probability p, i.e., events for which, in N observations, the event happens n(N) ~ p*N times. Physicists and philosophers have suggested that there may be events which are even rarer, for which n(N) grows slower than N. However, this idea has not been developed, since it was not clear how to describe it in precise terms. In this paper, we propose a possible precise description of this idea, and we use this description to answer a natural question: when do two different functions n(N) lead to the same class of possible "truly rare" sequences?
File UTEP-CS-22-38 in pdf
Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Woraphon Yamaka (eds.), Machine Learning for Econometrics and Related Topics, Springer, Cham, Switzerland, 2024, pp. 175-179.
Many economic situations -- and many situations in other application areas -- are well-described by a special asymmetric generalization of normal distributions, known as skew-normal. However, there is no convincing theoretical explanation for this empirical phenomenon. To be more precise, there are convincing explanations for the ubiquity of normal distributions, but not for the transformation that turns normal into skew-normal. In this paper, we use the analysis of hydraulic fracturing-induced seismicity to explain the ubiquity of such a transformation.
File UTEP-CS-22-37 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 205-208.
It is known that often, after it is proven that a new statement is equivalent to the original definition, this new statement becomes the accepted new definition of the same notion. In this paper, we provide a natural explanation for this empirical phenomenon.
File UTEP-CS-22-36 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 105-108.
In the ideal world, the number of seats that each region or community gets in a representative body should be exactly proportional to the population of this region or community. However, since the number of seats allocated to each region or community is a whole number, we cannot maintain exact proportionality. Not only does this lead to a somewhat unfair situation, when residents of one region get more votes per person than residents of another one, it also leads to paradoxes -- e.g., sometimes a region that gained the largest number of people loses seats. To avoid this unfairness (and thus, to avoid the resulting paradoxes), we propose to assign, to each representative, a fractional number of votes, so that the overall number of votes allocated to a region is exactly proportional to the region's population. This approach resolves the fairness problem, but it raises a new problem: in a secret vote, if we -- as is usually done -- disclose the overall numbers of those who voted For and those who voted Against, we may reveal who voted how. In this paper, we propose a way to avoid this disclosure.
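A minimal sketch of the fractional-vote allocation itself (the disclosure-avoiding voting procedure proposed in the paper is not reproduced here); the function name and example numbers are our illustrative assumptions:

```python
def fractional_votes(populations, seats):
    """Give each region's delegation a total voting weight exactly proportional
    to its population; each of the region's representatives carries an equal
    fractional share of that weight."""
    total_pop, total_seats = sum(populations), sum(seats)
    per_representative = []
    for pop, n_reps in zip(populations, seats):
        region_weight = total_seats * pop / total_pop
        per_representative.append(region_weight / n_reps)
    return per_representative

# Example: three regions whose integer seat counts cannot be exactly proportional.
print(fractional_votes([1000, 1500, 700], [3, 5, 2]))
```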
File UTEP-CS-22-35 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 275-279.
At first glance, from the general decision-theory viewpoint, hate (and other negative feelings towards each other) makes no sense, since such feelings decrease the utility (i.e., crudely speaking, the level of happiness) of the person who experiences them. Our detailed analysis shows that there are situations when such negative feelings make perfect sense: namely, when we have a large group of people almost all of whom are objectively unhappy. In such situations -- e.g., on the battlefield -- negative feelings help keep their spirits high in spite of the harsh situation. This explanation leads to recommendations on how to decrease the amount of negative feelings.
File UTEP-CS-22-34 in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 459-466.
Predictions are rarely absolutely accurate. Often, the future values of the quantities of interest depend on some parameters that we only know with some uncertainty. To make sure that all possible solutions satisfy the desired constraints, it is necessary to generate a representative finite sample, so that if the constraints are satisfied for all the functions from this sample, then we can be sure that these constraints will be satisfied for the actual future behavior as well. At present, such a sample is selected by Monte-Carlo simulations, but, as we show, such a selection may underestimate the danger of violating the constraints. To avoid such an underestimation, we propose a different algorithm that uses interval computations.
File UTEP-CS-22-33 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 189-192.
In linguistics, there is a dependence between the length of a sentence and the average length of its words: the longer the sentence, the shorter the words. The corresponding empirical formula is known as Menzerath's Law. A similar dependence can be observed in many other application areas, e.g., in the analysis of genomes. The fact that the same dependence is observed in many different application domains seems to indicate that there should be a general domain-independent explanation for this law. In this paper, we show that indeed, this law can be derived from natural invariance requirements.
File UTEP-CS-22-32 in pdf
Published in Proceedings of the 2022 Annual Conference of North American Fuzzy Information Processing Society, Halifax, Nova Scotia, Canada, May 31 - June 3, 2022, pp. 101-107.
In many real-life systems, the state at the next moment of time is uniquely (and continuously) determined by the current state. In mathematical terms, such systems are called deterministic dynamical systems. In the analysis of such systems, continuity is usually understood in the usual mathematical sense. However, as with many formal definitions, the mathematical definition of continuity does not always adequately capture the commonsense notion of continuity: that small changes in the input should lead to small changes in the output. In this paper, we provide a natural fuzzy-based formalization of this intuitive notion, and analyze how the requirement of commonsense continuity affects the properties of dynamical systems. Specifically, we show that for such systems, the set of stationary states is closed and convex, and that the only such systems for which we can both effectively predict the future and effectively reconstruct the past are linear systems.
Original file UTEP-CS-22-31 in pdf
Updated version UTEP-CS-22-31a in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 451-457.
In many practical situations, we need to find the range of a given function under interval uncertainty. For nonlinear functions -- even for quadratic ones -- this problem is, in general, NP-hard; however, feasible algorithms exist for many specific cases. In particular, recently a feasible algorithm was developed for computing the range of the absolute value of a Fourier coefficient under uncertainty. In this paper, we generalize this algorithm to the case when we have a function of a few linear combinations of inputs. The resulting algorithm also handles the case when, in addition to intervals containing each input, we also know that these inputs satisfy several linear constraints.
File UTEP-CS-22-30 in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 201-204.
If most students comment that the course was too fast, a natural idea is to slow it down. If most students comment that the course was too slow, a natural idea is to speed it up. But what if half the students think the speed was too fast and half that the speed was too slow? A frequent reaction to such a situation is to conclude that the speed was just right and not change the speed the next time, but this may not be the right reaction: under the same speed, half of the students will struggle and may fail. A better reaction is to provide additional help to struggling students, e.g., in the form of extra practice assignments. How can we do it without adding more work to instructors – who are usually already overworked? A natural idea is to explicitly make some assignments required only for those who did not do well on the last test or quiz – this way, good students will have fewer required tasks and thus, we can keep the same amount of grading.
File UTEP-CS-22-29 in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 323-327.
How can we describe relative approximation error? When a value b approximates a value a, the usual description of this error is the ratio |b − a|/|a|. The problem with this approach is that, contrary to our intuition, we get different numbers gauging how well a approximates b and how well b approximates a. To avoid this problem, John Gustafson proposed to use the logarithmic measure |ln(b/a)|. In this paper, we show that this is, in effect, the only regular scale-invariant way to describe the relative approximation error.
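A small numerical illustration of the asymmetry of the usual relative error and the symmetry of the logarithmic measure:

```python
import math

def usual_relative_error(a, b):
    """How well b approximates a, in the usual sense |b - a| / |a|."""
    return abs(b - a) / abs(a)

def log_relative_error(a, b):
    """Gustafson's logarithmic measure |ln(b / a)|."""
    return abs(math.log(b / a))

a, b = 4.0, 5.0
print(usual_relative_error(a, b), usual_relative_error(b, a))  # 0.25 vs 0.2
print(log_relative_error(a, b), log_relative_error(b, a))      # both equal ln(1.25)
```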
File UTEP-CS-22-28 in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 197-200.
Some students – who are, in terms of pop-psychology, more "left-brain" – prefer a linear exposition; others – the more "right-brain" ones – prefer 2-D images and texts with visual emphasis (e.g., with bullets). At present, instructors try to find a middle ground between these two audiences, but why not prepare each material in two ways, aimed at both audiences?
File UTEP-CS-22-27 in pdf
Published in Proceedings of the 2022 Annual Conference of North American Fuzzy Information Processing Society, Halifax, Nova Scotia, Canada, May 31 - June 3, 2022, pp. 108-112.
In many real-life situations, deviations are caused by a large number of independent factors. It is known that in such situations, the distribution of the resulting deviations is close to Gaussian, and thus, that the copulas -- that describe the multi-D distributions as a function of 1-D (marginal) ones -- are also Gaussian. In the past, these conclusions were also applied to economic phenomena, until the 2008 crisis showed that in economics, Gaussian models can lead to disastrous consequences. At present, all economists agree that the economic distributions are not Gaussian -- however, surprisingly, Gaussian copulas still often provide an accurate description of economic phenomena. In this paper, we explain this surprising fact by using fuzzy-related arguments.
Original file UTEP-CS-22-26 in pdf
Updated version UTEP-CS-22-26a in pdf
Published in Proceedings of the 2022 IEEE World Congress on Computational Intelligence IEEE WCCI'2022, Padua, Italy, July 18-23, 2022.
In many practical situations, we need to process data under fuzzy uncertainty: we have fuzzy information about the algorithm's input, and we want to find the resulting information about the algorithm's output. It is known that this problem can be reduced to computing the range of the algorithm over alpha-cuts of the input. Since fuzzy degrees are usually known with accuracy at best 0.1, it is sufficient to repeat this range-computing procedure for the 11 values alpha = 0, 0.1, ..., 1.0. However, a straightforward application of this idea requires 11 times the computation time of a single range estimation -- and for complex algorithms, each range computation is already time-consuming. In this paper, we show that when all inputs are of the same type, we can compute all the desired ranges much faster.
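For comparison, here is a minimal sketch of the straightforward (non-optimized) procedure that the paper speeds up: for each of the 11 levels alpha = 0, 0.1, ..., 1.0, the range of the algorithm over the corresponding alpha-cut box is estimated. The triangular fuzzy inputs, the naive sampling, and the example function are our illustrative assumptions:

```python
import itertools

def alpha_cut(fuzzy_number, alpha):
    """Alpha-cut of a triangular fuzzy number (l, m, u)."""
    l, m, u = fuzzy_number
    return (l + alpha * (m - l), u - alpha * (u - m))

def ranges_over_alpha_cuts(f, fuzzy_inputs, grid=5):
    results = {}
    for k in range(11):                       # alpha = 0, 0.1, ..., 1.0
        alpha = k / 10
        boxes = [alpha_cut(t, alpha) for t in fuzzy_inputs]
        samples = [[l + i * (u - l) / (grid - 1) for i in range(grid)]
                   for (l, u) in boxes]
        values = [f(*x) for x in itertools.product(*samples)]
        results[alpha] = (min(values), max(values))
    return results

# Example: f(x, y) = x + y with two triangular fuzzy inputs.
print(ranges_over_alpha_cuts(lambda x, y: x + y, [(0, 1, 2), (1, 2, 3)]))
```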
Original file UTEP-CS-22-25 in pdf
Updated version UTEP-CS-22-25a in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 61, pp. 59-65.
Teaching is not easy. One of the main reasons why it is not easy is that the existing descriptions of the teaching process are not very precise -- and thus, we cannot use the usual optimization techniques, techniques which require a precise model of the corresponding phenomenon. It is therefore desirable to come up with a precise description of the learning process. To come up with such a description, we notice that on the set of all possible states of learning, there is a natural order s ≤ s' meaning that we can bring the student from the state s to the state s'. This relation is similar to the causality relation of relativity theory, where a ≤ b means that we can move from point a to point b. In this paper, we use this analogy with relativity theory to come up with the basics of such an order-based description of learning. We hope that future studies of these basics will help to improve the teaching process.
File UTEP-CS-22-24 in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 193-196.
Students majoring in mathematics or computer science also have to take additional classes in language, history, philosophy, etc. These classes – that all students have to take, irrespective of their major – are known as the core curriculum. Students are often not happy with the need to study subjects outside their major – they view it as a waste of time – but empirical evidence shows, surprisingly, that these classes help students be more successful in their majors. In this paper, we provide a mathematical explanation for this unexpected phenomenon. The main idea behind this explanation also helps explain why, e.g., art and nature often enhance the creativity of math and computer science students and professionals.
File UTEP-CS-22-23 in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 189-192.
Traditionally, subjects are taught in sequential order: e.g., first, students study algebra, then they use the knowledge of algebra to study the basic ideas of calculus. In this traditional scheme, teachers usually do not explain any calculus ideas before students are ready – since they believe that this would only confuse students. However, lately, empirical evidence has shown that, contrary to this common belief, pre-teaching – when students get a brief introduction to the forthcoming new topic before this topic starts – helps students learn. In this paper, we provide a geometric explanation for this unexpected empirical phenomenon.
File UTEP-CS-22-22 in pdf
To appear in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland.
Business gurus recommend that an organization should have, in addition to clearly described realistic goals, also additional aspirational goals -- goals for which we may not have resources and which most probably will not be reached at all. At first glance, adding such a vague goal cannot lead to a drastic change in how the company operates, but surprisingly, for many companies, the mere presence of such aspirational goals boosts the company's performance. In this paper, we show that a simple geometric model of this situation can explain the unexpected success of aspirational goals.
File UTEP-CS-22-21 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 305-309.
Often, there is a need to migrate software to a new environment. The existing migration tools are not perfect. So, after applying such a tool, we need to test the resulting software. If a test reveals an error, this error needs to be corrected. Usually, the test also provides some warnings. One of the most typical warnings is that a certain statement is unreachable. The appearance of such warnings is often viewed as an indication that the original software developer was not very experienced. In this paper, we show that this view oversimplifies the situation: unreachable statements are, in general, inevitable. Moreover, a wide use of the above-mentioned frequent view can be counterproductive: developers who want to appear more experienced will skip potentially unreachable statements and thus, make the software less reliable.
File UTEP-CS-22-20 in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 301-304.
While theoreticians have been designing more and more efficient algorithms, in the past, practitioners were not very interested in this activity: if a company already owns computers that provide computations in the required time, there is nothing to gain by using faster algorithms. We show that this situation has drastically changed with the transition to cloud computing: many companies have not yet realized it, but with cloud computing, any algorithmic speed-up leads to an immediate financial gain. This also has serious consequences for the whole computing profession: there is a need for professionals better trained in the subtle aspects of algorithmics.
File UTEP-CS-22-19 in pdf
Published in Proceedings of the 2022 Annual Conference of North American Fuzzy Information Processing Society, Halifax, Nova Scotia, Canada, May 31 - June 3, 2022, pp. 279-285.
To a lay person reading about the history of physics, it may sound as if the progress of physics comes from geniuses whose inspiration leads them to precise equations that -- almost magically -- explain all the data: this is what Newton did with mechanics, this is what Schroedinger did with quantum physics, this is what Einstein did with gravitation. However, a deeper study of the history of physics shows that in all these cases, these geniuses did not start from scratch -- they formalized ideas that first appeared in imprecise ("fuzzy") form. In this paper, we explain -- on the qualitative level -- why ideas usually first appear in informal, imprecise form. This explanation enables us to understand other seemingly counterintuitive facts -- e.g., that it is much more difficult for a person to know him/herself than to know others. We also provide some general recommendations based on this explanation.
Original file UTEP-CS-22-18 in pdf
Updated version UTEP-CS-22-18a in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 185-188.
Students often start working on their assignments late and, as a result, turn them in late. This procrastination makes grading more difficult. It also delays posting correct solutions that could help students understand their mistakes – and this hinders the students' progress in studying the following topics. At first glance, motivation seems to be a solution to all pedagogical problems: a motivated student eagerly collaborates with the instructor to learn more. Motivation indeed increases students' knowledge, but, unfortunately, it does not decrease procrastination. So what can we do? We can institute heavy penalties for late submissions, but this would unfairly punish struggling students who need encouragement and not punishment. To solve this problem, we propose to institute a differentiated late penalty, heavy for good students and small for struggling ones. This may sound new, but, as we show, this is, in effect, already being done by many instructors. The main difference between the usual practice and what we propose is that we propose to make such a differentiated penalty clearly and precisely described in the class syllabus. This will avoid the subjectivity and student misunderstandings which are unavoidable if this policy continues to be informal.
File UTEP-CS-22-17 in pdf
Published in Proceedings of the 11th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2022, Kazan, Russia, March 28 - April 2, 2022, pp. 182-184.
Empirical studies show that online teaching amplifies the differences between instructors: more successful instructors become even more successful, while the results of instructors who were not very successful become even worse. There is a simple explanation for why the performance of not-perfect instructors decreases: in online teaching, there is less feedback, so these instructors get an indication that their teaching strategies do not work well even later than usual and thus have less time to correct their teaching. However, the fact that the efficiency of good instructors rises is a mystery. In this paper, we provide a possible explanation for this mystery.
File UTEP-CS-22-16 in pdf
Published in: Yuriy P. Kondratenko, Vladik Kreinovich, Witold Pedrycz, Arkadiy A. Chikrii, Anna M. Gil Lafuente (eds.), Artificial Intelligence in Control and Decision-Making Systems, Springer, 2023, pp. 67-74.
In many practical situations, we need to determine the dependence between different quantities based on empirical data. Several methods exist for solving this problem, including neural techniques and different versions of fuzzy techniques: type-1, type-2, etc. In some cases, some of these techniques work better; in other cases, other methods work better. Usually, practitioners try several techniques and select the one that works best for their problem. This trying often requires a lot of effort. It would be more efficient if we could have a priori recommendations about which technique is better. In this paper, we use a first-approximation model of this situation to provide such a recommendation.
File UTEP-CS-22-15 in pdf
Published in: Roman Wyrzykowski, Jack Dongarra, Ewa Deelman, and Konrad Karczewski (eds.), Parallel Processing and Applied Mathematics, Revised Selected Papers from the 14th International Conference on Parallel Processing and Applied Mathematics PPAM'2022, Gdansk, Poland, September 11-14, 2022, Springer, Cham, Switzerland, 2023, Part II, pp. 405-414, https://doi.org/10.1007/978-3-031-30445-3_34.
In high performance computing, when we process a large amount of data, we do not have much information about the dependence between measurement errors corresponding to different inputs. To gauge the uncertainty of the result of data processing, the two usual approaches are: the interval approach, when we consider the worst-case scenario, and the probabilistic approach, when we assume that all these errors are independent. The problem is that usually, the interval approach leads to too pessimistic, too large uncertainty estimates, while the probabilistic approach -- that assumes independence of measurement errors -- sometimes underestimates the resulting uncertainty. To get realistic estimates, it is therefore desirable to have techniques intermediate between interval and probabilistic ones. In this paper, we propose such techniques based on the assumption that, in each practical situation, there is an upper bound 0 ≤ b ≤ 1 on the absolute value of all correlations between measurement errors -- the bound that needs to be experimentally determined. The assumption that measurement errors are independent corresponds to b = 0; for b = 1, we get interval estimates, and for intermediate values b, we get the desired intermediate techniques. We also provide efficient algorithms for implementing the new techniques.
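One simple way to see how such a bound b interpolates between the two extremes -- for the special case of a linear combination of measurement errors -- is sketched below. This is our illustration of the underlying idea, not the algorithms developed in the paper:

```python
import math

def std_upper_bound(coeffs, sigmas, b):
    """Upper bound on the standard deviation of sum(c_i * e_i) when each error
    e_i has standard deviation sigma_i and all pairwise correlations satisfy
    |rho_ij| <= b.  b = 0 recovers the independence (probabilistic) estimate;
    b = 1 gives the worst-case estimate sum(|c_i| * sigma_i)."""
    s2 = sum((c * s) ** 2 for c, s in zip(coeffs, sigmas))
    s1 = sum(abs(c) * s for c, s in zip(coeffs, sigmas))
    return math.sqrt((1 - b) * s2 + b * s1 ** 2)

coeffs, sigmas = [1.0, 1.0, 1.0, 1.0], [0.1, 0.1, 0.1, 0.1]
for b in [0.0, 0.3, 1.0]:
    print(b, round(std_upper_bound(coeffs, sigmas, b), 4))
```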
Original file UTEP-CS-22-14 in pdf
Updated version UTEP-CS-22-14a in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 62, pp. 139-147.
It is known that for Minkowski space-times of dimension larger than 2, any causality-preserving transformation is linear. It is also known that in a 2-D space-time, there are many nonlinear causality-preserving transformations. In this paper, we show that for 2-D space-times, if we restrict ourselves to discrete space-times, then linearity is retained: only linear transformations preserve causality.
File UTEP-CS-22-13 in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 61, pp. 115-121.
In data processing, it is important to gauge how input uncertainty affects the results of data processing. Several techniques have been proposed for this gauging, from interval to affine to Taylor techniques. Some of these techniques result in more accurate estimates but require longer computation time, others' results are less accurate but can be obtained faster. Sometimes, we do not have enough time to use more accurate (but more time-consuming) techniques, but we have more time than needed for less accurate ones. In such cases, it is desirable to come up with intermediate techniques that would utilize the available additional time to get somewhat more accurate estimates. In this paper, we formulate the problem of selecting the best intermediate techniques, and provide a solution to this optimization problem.
File UTEP-CS-22-12 in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 61, pp. 34-38.
It is known that, in the space-time of Special Relativity, causality implies the Lorentz group, i.e., if we know which events can causally influence each other, then, based on this information, we can uniquely reconstruct the affine structure of space-time. When two events are very close, quantum effects, with their probabilistic nature, make it difficult to detect causality. So, the following question naturally arises: can we uniquely reconstruct the affine structure if we only know causality for events which are sufficiently far away from each other? Several positive answers to this question were provided in a recent paper by Alexander Guts. In this paper, we describe a very simple answer to this same question.
File UTEP-CS-22-11 in pdf
Published in Proceedings of the 2022 Annual Conference of North American Fuzzy Information Processing Society, Halifax, Nova Scotia, Canada, May 31 - June 3, 2022, pp. 1-11.
In traditional fuzzy logic, an expert's degree of certainty in a statement is described by a single number from the interval [0,1]. However, there are situations when a single number is not sufficient: e.g., a situation when we know nothing and a situation in which we have a lot of arguments for a given statement and an equal number of arguments against it are both described by the same number 0.5. Several techniques have been proposed to distinguish between such situations. The most widely used is the interval-valued technique, where we allow the expert to describe his/her degree of certainty by a subinterval of the interval [0,1]. Eliciting an interval-valued degree is straightforward. On the other hand, in many practical applications, another technique has been useful: the technique of complex-valued fuzzy degrees. For this technique, there is no direct way to elicit such degrees. In this paper, we explain a reasonable natural approach to such an elicitation.
Original file UTEP-CS-22-10 in pdf
Updated version UTEP-CS-22-10a in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 69-92.
In many practical situations, the quantity of interest is difficult to measure directly. In such situations, to estimate this quantity, we measure easier-to-measure quantities which are related to the desired one by a known relation, and we use the results of these measurements to estimate the desired quantity. How accurate is this estimate?
The traditional engineering approach assumes that we know the probability distributions of measurement errors; however, in practice, we often only have partial information about these distributions. In some cases, we only know the upper bounds on the measurement errors; in such cases, the only thing we know about the actual value of each measured quantity is that it is somewhere in the corresponding interval. Interval computation estimates the range of possible values of the desired quantity under such interval uncertainty.
In other situations, in addition to the intervals, we also have partial information about the probabilities. In this paper, we describe how to solve this problem in the linearized case, what is computable and what is feasibly computable in the general case, and, somewhat surprisingly, how physics ideas -- that initial conditions are not abnormal, that every theory is only approximate -- can help with the corresponding computations.
File UTEP-CS-22-09 in pdf
Published in Proceedings of the 2022 Annual Conference of North American Fuzzy Information Processing Society, Halifax, Nova Scotia, Canada, May 31 - June 3, 2022, pp. 83-89.
In applications of fuzzy techniques to several practical problems -- in particular, to the problem of predicting passenger flows in the airports -- the most efficient membership function is a sine function; to be precise, a portion of a sine function between the two zeros. In this paper, we provide a theoretical explanation for this empirical success.
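For concreteness, a sketch of the membership function in question -- a half-period of the sine function between its two zeros, re-scaled to a given support [a, b] (the parameter names are ours):

```python
import math

def sine_membership(a, b):
    """Membership function equal to sin(pi * (x - a) / (b - a)) on [a, b]
    and 0 outside; its two zeros are at the endpoints a and b."""
    def mu(x):
        return math.sin(math.pi * (x - a) / (b - a)) if a <= x <= b else 0.0
    return mu

mu = sine_membership(0.0, 10.0)
print([round(mu(x), 3) for x in [0, 2.5, 5, 7.5, 10]])  # 0.0, 0.707, 1.0, 0.707, 0.0
```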
Original file UTEP-CS-22-08 in pdf
Updated file UTEP-CS-22-08a in pdf
Published in: Raffaele Pisano (ed.), "A History of Physics: Phenomena, Ideas and Mechanisms", History of Mechanism and Machine Science, Vol. 42, Springer, Cham, Switzerland, 2024, pp. 129-143, https://doi.org/10.1007/978-3-031-26174-9_10
In his seminal 2000 book, Salvo D'Agostino provided a detailed overview of the history of the ideas underlying 19th and 20th century physics. Now that we are two decades into the 21st century, a natural question is: how can we extend his analysis to 21st century physics -- and, if possible, beyond, to try to predict how physics will change? To perform this analysis, we go beyond an analysis of what happened and focus more on why paradigm changes happened in the history of physics. To better understand these paradigm changes, we analyze not only what were the main ideas and results of physics, but also how (and why) the objectives of physics changed with time.
File UTEP-CS-22-07 in pdf
Published in Information, 2022, Vol. 13, No. 2, Paper 82.
Among many research areas to which Ron Yager contributed are decision making under uncertainty (in particular, under interval and fuzzy uncertainty) and aggregation -- where he proposed, analyzed, and utilized ordered weighted averaging (OWA). The OWA algorithm itself provides only a specific type of data aggregation. However, it turns out that if we allow several OWA stages, one after another, we obtain a scheme with a universal approximation property -- moreover, a scheme which is perfectly equivalent to modern ReLU-based deep neural networks. In this sense, Ron Yager can be viewed as a (grand)father of ReLU-based deep learning. We also recall that the existing schemes for decision making under uncertainty are also naturally interpretable in OWA terms.
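For reference, the ordered weighted averaging operation itself -- the standard definition, in which the inputs are first sorted in decreasing order and then combined with fixed weights:

```python
def owa(values, weights):
    """Ordered weighted average: sort the values in decreasing order and take
    the weighted sum with the given non-negative weights (summing to 1)."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

x = [0.2, 0.9, 0.5]
print(owa(x, [1.0, 0.0, 0.0]))       # maximum
print(owa(x, [0.0, 0.0, 1.0]))       # minimum
print(owa(x, [1/3, 1/3, 1/3]))       # arithmetic mean
```

Different weight vectors thus recover the maximum, the minimum, the arithmetic mean, and everything in between.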
Original file UTEP-CS-22-06 in pdf
Updated version UTEP-CS-22-06c in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 223-226.
Many physical dependencies are described by power laws y = A*x^a, for some exponent a. This makes perfect sense: in many cases, there are no preferred measuring units for the corresponding quantities, so the form of the dependence should not change if we simply replace the original unit with a different one. It is known that such invariance implies a power law. Interestingly, not all exponents are possible in physical dependencies: in most cases, we have power laws with rational exponents. In this paper, we explain the ubiquity of rational exponents by taking into account that in many cases, there is also no preferred starting point for the corresponding quantities, so the form of the dependence should also not change if we use a different starting point.
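A small numerical illustration of the scale-invariance property behind power laws: for y = A*x^a, changing the measuring unit of x (replacing x with lambda*x) only multiplies y by the constant factor lambda^a, so the form of the dependence does not change. The specific values of A, a, and lambda are arbitrary:

```python
def power_law(x, A=2.0, a=1.5):
    return A * x ** a

lam = 10.0   # switch to a measuring unit that is 10 times smaller
for x in [1.0, 2.0, 4.0]:
    ratio = power_law(lam * x) / power_law(x)
    print(x, round(ratio, 4))   # the same factor lam**a for every x
```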
File UTEP-CS-22-05 in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 371-375.
While many data processing techniques assume that we know the probability distributions, in practice, we often only have partial information about these probabilities -- so that several different distributions are consistent with our knowledge. Thus, to apply these data processing techniques, we need to select one of the possible probability distributions. There is a reasonable approach for such a selection -- the Maximum Entropy approach. This approach selects a uniform distribution if all we know is that the random variable is located in an interval; it selects a normal distribution if all we know is the mean and the variance. In this paper, we show that the Maximum Entropy approach can also be applied when the unknown object is a continuous function. It turns out that among all probability distributions on the class of such functions, this approach selects the Wiener measure -- the probability distribution corresponding to Brownian motion.
File UTEP-CS-22-04 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 329-334.
In many real-life situations, we know that one of several objects has the desired property, but we do not know which one. To find the desired object, we need to test these objects one by one. In situations when we have no additional information, there is no reason to prefer any testing order and thus, the usual recommendation is to test them in any order. This is usually interpreted as ordering the objects by increasing value of some seemingly unrelated quantity. A possible drawback of this approach is that it may turn out that the selected quantity is correlated with the desired property, in which case we will need to test all the given objects before we find the desired one. This is not just an abstract possibility: this is exactly what happened in the research efforts that led to the 2021 Nobel Prize in Medicine. To avoid such situations, we propose to use randomized search. Such a search would have cut in half the multi-year time spent on this Nobel-Prize-winning research effort.
Original file UTEP-CS-22-03 in pdf
Updated version UTEP-CS-22-03a in pdf
Published in Proceedings of the 15th International Workshop on Constraint Programming and Decision Making CoProD'2022, Halifax, Nova Scotia, Canada, May 30, 2022; detailed version published in Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 263-267.
In many practical situations when we need to select the best submission -- the best paper, the best candidate, etc. -- there are so few experts that we cannot simply dismiss all the experts who have a conflict of interest: we do not want them to judge their own submissions, but we would like to take into account their opinions of all other submissions. How can we take these opinions into account? In this paper, we show that a seemingly reasonable idea can actually lead to bias, and we explain how to take these opinions into account without biasing the final decision.
File UTEP-CS-22-02 in pdf
Published in Proceedings of the 19th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'2022, Milan, Italy, July 11-15, 2022, Vol. 1, pp. 485-493.
It is known that, in general, people overestimate the probabilities of joint events. In this paper, we provide an explanation for this phenomenon -- an explanation based on the Laplace Indeterminacy Principle and the Maximum Entropy approach.
Original file UTEP-CS-22-01 in pdf
Updated version UTEP-CS-22-01a in pdf